
SparkBWA is a new tool that exploits the capabilities of a Big Data technology such as Apache Spark to boost the performance of one of the most widely adopted sequence aligners, the Burrows-Wheeler Aligner (BWA).

License: GNU General Public License v3.0


sparkbwa's Introduction

What's SparkBWA about?

SparkBWA is a tool that integrates the Burrows-Wheeler Aligner (BWA) into the Apache Spark framework running on top of Hadoop. The current version of SparkBWA (v0.2, October 2016) supports the following BWA algorithms:

  • BWA-MEM
  • BWA-backtrack
  • BWA-SW

All of them work with single-end and paired-end reads.

If you use SparkBWA, please cite this article:

José M. Abuin, Juan C. Pichel, Tomás F. Pena and Jorge Amigo. "SparkBWA: Speeding Up the Alignment of High-Throughput DNA Sequencing Data". PLoS ONE 11(5), pp. 1-21, 2016.

A version for Hadoop is available here.

Structure

Since version 0.2 the project follows a standard Maven structure. The source code is in the src/main folder, which contains two subfolders:

  • java - The Java sources.
  • native - The BWA native code (C) and the JNI glue logic.

Getting started

Requirements

The requirements to build SparkBWA are the same as those to build BWA, with the only exception that the JAVA_HOME environment variable must be defined. If it is not, it can be set in the /src/main/native/Makefile.common file.
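For example, JAVA_HOME can be exported in the shell before building. The JDK path below is purely illustrative; substitute the location of your own JDK:

```shell
# Illustrative path only: point JAVA_HOME at your actual JDK installation,
# which must ship the JNI headers (include/jni.h and include/*/jni_md.h).
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
echo "JAVA_HOME=$JAVA_HOME"
```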

It is also necessary to include the -fPIC flag in the Makefile of the BWA version being used. To do this, just add this option at the end of the CFLAGS variable in the BWA Makefile. Considering bwa-0.7.15, the original Makefile contains:

CFLAGS=		-g -Wall -Wno-unused-function -O2

and after the change it should be:

CFLAGS=		-g -Wall -Wno-unused-function -O2 -fPIC
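The change can also be scripted. A minimal sketch, assuming GNU sed and that CFLAGS is defined on a single line as in bwa-0.7.15; it is demonstrated here on a scratch copy rather than the real Makefile:

```shell
# Recreate the relevant CFLAGS line from the bwa-0.7.15 Makefile in a scratch file.
printf 'CFLAGS=\t\t-g -Wall -Wno-unused-function -O2\n' > /tmp/Makefile.demo
# Append -fPIC to the end of the CFLAGS definition (GNU sed in-place edit).
sed -i 's/^\(CFLAGS=.*\)/\1 -fPIC/' /tmp/Makefile.demo
cat /tmp/Makefile.demo
```

Running the same sed command against the Makefile in the BWA source directory applies the edit described above.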

Additionally, since SparkBWA is built with Maven as of version 0.2, Maven must also be installed on the user's computer.

Building

The default way to build SparkBWA is:

git clone https://github.com/citiususc/SparkBWA.git
cd SparkBWA
mvn package

This will create the target folder, which will contain the jar file needed to run SparkBWA:

  • SparkBWA-0.2.jar - jar file to launch with Spark.

Configuring Spark

Since version 0.2 there is no need to configure any Spark parameter. The only requirement is that the YARN containers have at least 10 GB of memory available (for the human genome case).

Running SparkBWA

SparkBWA requires a working Hadoop cluster. Users should take into account that at least 10 GB of memory per map/YARN container are required (each map loads the BWA index - the reference genome - into memory). Also note that SparkBWA uses disk space in the /tmp directory, or in the configured Hadoop or Spark temporary folder.

Here is an example of how to execute SparkBWA using the BWA-MEM algorithm with paired-end reads. The example assumes that the index is stored on all the cluster nodes at /Data/HumanBase/. The index can be obtained with BWA using "bwa index".

First, we get the input FASTQ reads from the 1000 Genomes Project ftp:

wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA12750/sequence_read/ERR000589_1.filt.fastq.gz
wget ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/NA12750/sequence_read/ERR000589_2.filt.fastq.gz

Next, the downloaded files should be uncompressed:

gzip -d ERR000589_1.filt.fastq.gz
gzip -d ERR000589_2.filt.fastq.gz

and uploaded to HDFS:

hdfs dfs -copyFromLocal ERR000589_1.filt.fastq ERR000589_1.filt.fastq
hdfs dfs -copyFromLocal ERR000589_2.filt.fastq ERR000589_2.filt.fastq

Finally, we can execute SparkBWA on the cluster. Again, we assume that Spark is stored at spark_dir:

spark_dir/bin/spark-submit --class com.github.sparkbwa.SparkBWA --master yarn-cluster
--driver-memory 1500m --executor-memory 10g --executor-cores 1 --verbose
--num-executors 32 SparkBWA-0.2.jar -m -r -p --index /Data/HumanBase/hg38 -n 32 
-w "-R @RG\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589"
ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589

Options used:

  • -m - Use the BWA-MEM alignment algorithm.
  • -r - Merge the final results in a reducer phase.
  • -p - Use paired-end reads.
  • -n - Number of partitions to divide the input into.
  • -w "args" - Can be used to pass arguments directly to BWA (e.g. "-t 4" to specify the number of threads per BWA instance).
  • --index index_prefix - The index prefix. The index must be available on all the cluster nodes at the same location.
  • The last three arguments are the input and output HDFS files.

After the execution, in order to move the output to the local filesystem use:

hdfs dfs -copyToLocal Output_ERR000589/* ./

If a reducer is not used, the output will be split into several pieces (files). To put them together, "samtools merge" can be used.
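Such a merge could be sketched as follows. All file names here are hypothetical, and the script assumes samtools is installed and the output pieces have already been copied to the local filesystem; it is written to a file rather than executed, since it needs real SparkBWA output to run:

```shell
# Sketch only: convert each SAM output piece to BAM and merge them.
# "Output_ERR000589" and the part-* naming are illustrative, not the
# guaranteed output layout.
cat > merge_output.sh <<'EOF'
#!/bin/sh
for f in Output_ERR000589/part-*; do
  samtools view -bS "$f" > "$f.bam"          # SAM piece -> BAM
done
samtools merge ERR000589_merged.bam Output_ERR000589/part-*.bam
EOF
chmod +x merge_output.sh
```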

If you want to check all the available options, execute the command:

spark_dir/bin/spark-submit --class com.github.sparkbwa.SparkBWA SparkBWA-0.2.jar -h

The result is:

SparkBWA performs genomic alignment using bwa in a Hadoop/YARN cluster
 usage: spark-submit --class com.github.sparkbwa.SparkBWA SparkBWA-0.2.jar
       [-a | -b | -m]  [-f | -k] [-h] [-i <Index prefix>]   [-n <Number of
       partitions>] [-p | -s] [-r]  [-w <"BWA arguments">]
       <FASTQ file 1> [FASTQ file 2] <SAM file output>
Help options: 
  -h, --help                                       Shows this help

Input FASTQ reads options: 
  -p, --paired                                     Paired reads will be used as input FASTQ reads
  -s, --single                                     Single reads will be used as input FASTQ reads

Sorting options: 
  -f, --hdfs                                       The HDFS is used to perform the input FASTQ reads sort
  -k, --spark                                      the Spark engine is used to perform the input FASTQ reads sort

BWA algorithm options: 
  -a, --aln                                        The ALN algorithm will be used
  -b, --bwasw                                      The bwasw algorithm will be used
  -m, --mem                                        The MEM algorithm will be used

Index options: 
  -i, --index <Index prefix>                       Prefix for the index created by bwa to use - setIndexPath(string)

Spark options: 
  -n, --partitions <Number of partitions>          Number of partitions to divide input - setPartitionNumber(int)

Reducer options: 
  -r, --reducer                                    The program is going to merge all the final results in a reducer phase

BWA arguments options: 
  -w, --bwa <"BWA arguments">                      Arguments passed directly to BWA

Accuracy

SparkBWA should be as accurate as running BWA directly. Below are GCAT alignment benchmarks that demonstrate this.

MEM

BWA-backtrack

BWA-SW

Frequently asked questions (FAQs)

  1. I cannot build the tool because jni_md.h or jni.h is missing.

You need to correctly set your JAVA_HOME environment variable, or you can set it in Makefile.common.

sparkbwa's People

Contributors

adrianrodriguezvilas, jcpichel, jmabuin, paalka, xubo245


sparkbwa's Issues

Single-end reads crash

Hi,
I hit the same error "Can not create a Path from an empty string" every time I try single-end reads, no matter whether I use Spark standalone or YARN. I have checked the data and disk integrity - no problems there. Paired-end reads work. Here is the full log of the error.
Best regards.

17/08/01 12:45:28 INFO scheduler.DAGScheduler: ResultStage 0 (zipWithIndex at BwaInterpreter.java:152) finished in 281.332 s
17/08/01 12:45:28 INFO cluster.YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/08/01 12:45:28 INFO scheduler.DAGScheduler: Job 0 finished: zipWithIndex at BwaInterpreter.java:152, took 281.489673 s
17/08/01 12:45:28 INFO storage.MemoryStore: ensureFreeSpace(302416) called with curMem=327400, maxMem=20615905935
17/08/01 12:45:28 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 295.3 KB, free 19.2 GB)
17/08/01 12:45:28 INFO storage.MemoryStore: ensureFreeSpace(20252) called with curMem=629816, maxMem=20615905935
17/08/01 12:45:28 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 19.8 KB, free 19.2 GB)
17/08/01 12:45:28 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 172.23.9.74:34877 (size: 19.8 KB, free: 19.2 GB)
17/08/01 12:45:28 INFO spark.SparkContext: Created broadcast 2 from textFile at BwaInterpreter.java:149
17/08/01 12:45:28 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.IllegalArgumentException: Can not create a Path from an empty string
java.lang.IllegalArgumentException: Can not create a Path from an empty string
at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
at org.apache.hadoop.fs.Path.(Path.java:135)
at org.apache.hadoop.util.StringUtils.stringToPath(StringUtils.java:244)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:409)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$32.apply(SparkContext.scala:1016)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$32.apply(SparkContext.scala:1016)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$6.apply(HadoopRDD.scala:176)
at scala.Option.map(Option.scala:145)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:176)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.ZippedWithIndexRDD.(ZippedWithIndexRDD.scala:44)
at org.apache.spark.rdd.RDD$$anonfun$zipWithIndex$1.apply(RDD.scala:1246)
at org.apache.spark.rdd.RDD$$anonfun$zipWithIndex$1.apply(RDD.scala:1246)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
at org.apache.spark.rdd.RDD.zipWithIndex(RDD.scala:1245)
at org.apache.spark.api.java.JavaRDDLike$class.zipWithIndex(JavaRDDLike.scala:321)
at org.apache.spark.api.java.AbstractJavaRDDLike.zipWithIndex(JavaRDDLike.scala:47)
at com.github.sparkbwa.BwaInterpreter.loadFastq(BwaInterpreter.java:152)
at com.github.sparkbwa.BwaInterpreter.handlePairedReadsSorting(BwaInterpreter.java:239)
at com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:333)
at com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)
17/08/01 12:45:28 INFO yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.IllegalArgumentException: Can not create a Path from an empty string)
17/08/01 12:45:28 INFO spark.SparkContext: Invoking stop() from shutdown hook
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
17/08/01 12:45:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
17/08/01 12:45:28 INFO ui.SparkUI: Stopped Spark web UI at http://172.23.9.74:41490
17/08/01 12:45:28 INFO scheduler.DAGScheduler: Stopping DAGScheduler
17/08/01 12:45:28 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
17/08/01 12:45:28 INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n6.thinking.leuven.vsc:34925
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n14.thinking.leuven.vsc:43852
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r4i0n2.thinking.leuven.vsc:45454
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n8.thinking.leuven.vsc:38231
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r3i0n15.thinking.leuven.vsc:44938
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n9.thinking.leuven.vsc:46302
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n4.thinking.leuven.vsc:44478
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n11.thinking.leuven.vsc:35098
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n14.thinking.leuven.vsc:37700
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n4.thinking.leuven.vsc:35676
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n6.thinking.leuven.vsc:37204
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r3i0n14.thinking.leuven.vsc:34888
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n11.thinking.leuven.vsc:44106
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r3i0n14.thinking.leuven.vsc:43985
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n3.thinking.leuven.vsc:46533
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r3i0n15.thinking.leuven.vsc:39727
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n9.thinking.leuven.vsc:33322
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n5.thinking.leuven.vsc:45842
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n15.thinking.leuven.vsc:37554
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r4i0n3.thinking.leuven.vsc:42387
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r4i0n4.thinking.leuven.vsc:43140
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n12.thinking.leuven.vsc:33092
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r4i0n5.thinking.leuven.vsc:37231
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n10.thinking.leuven.vsc:41635
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r4i0n3.thinking.leuven.vsc:35557
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n10.thinking.leuven.vsc:34186
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i2n10.thinking.leuven.vsc:34124
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n2.thinking.leuven.vsc:38892
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n5.thinking.leuven.vsc:44296
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r4i0n5.thinking.leuven.vsc:39207
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n13.thinking.leuven.vsc:38967
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n13.thinking.leuven.vsc:35399
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n15.thinking.leuven.vsc:32933
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i2n10.thinking.leuven.vsc:44822
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n2.thinking.leuven.vsc:42594
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n3.thinking.leuven.vsc:34750
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n12.thinking.leuven.vsc:33609
17/08/01 12:45:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r4i0n4.thinking.leuven.vsc:43560
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r4i0n2.thinking.leuven.vsc:43352
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n1.thinking.leuven.vsc:37295
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n1.thinking.leuven.vsc:42513
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n8.thinking.leuven.vsc:35885
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r1i1n6.thinking.leuven.vsc:40815
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i2n11.thinking.leuven.vsc:36843
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i2n11.thinking.leuven.vsc:36079
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n16.thinking.leuven.vsc:34914
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n16.thinking.leuven.vsc:37117
17/08/01 12:45:29 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r1i1n6.thinking.leuven.vsc:38277
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n7.thinking.leuven.vsc:39113
17/08/01 12:45:29 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. r2i0n7.thinking.leuven.vsc:34719
17/08/01 12:45:29 INFO storage.MemoryStore: MemoryStore cleared
17/08/01 12:45:29 INFO storage.BlockManager: BlockManager stopped
17/08/01 12:45:29 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/08/01 12:45:29 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/08/01 12:45:29 INFO spark.SparkContext: Successfully stopped SparkContext
17/08/01 12:45:29 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: java.lang.IllegalArgumentException: Can not create a Path from an empty string)
17/08/01 12:45:29 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
17/08/01 12:45:29 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
17/08/01 12:45:29 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
17/08/01 12:45:29 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
17/08/01 12:45:29 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1501582874205_0002
17/08/01 12:45:29 ERROR yarn.ApplicationMaster: Failed to cleanup staging dir .sparkStaging/application_1501582874205_0002
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1927)
at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:638)
at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:634)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:634)
at org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$cleanupStagingDir(ApplicationMaster.scala:413)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply$mcV$sp(ApplicationMaster.scala:135)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:264)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:234)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:234)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:234)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1699)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:234)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:234)
at org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:234)
at scala.util.Try$.apply(Try.scala:161)
at org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:234)
at org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:216)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
17/08/01 12:45:29 INFO util.ShutdownHookManager: Shutdown hook called
17/08/01 12:45:29 INFO util.ShutdownHookManager: Deleting directory /ddn1/vol1/site_scratch/leuven/304/vsc30484/mapred_scratch/usercache/vsc30484/appcache/application_1501582874205_0002/spark-6a89e3e0-6f21-404d-92d4-53617616daa3

Exception in thread "main" org.apache.spark.SparkException: Application finished with failed status

What is the reason for this exception?

cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit --class SparkBWA --master yarn-cluster --conf spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives ./bwa.zip --verbose ./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589
Using properties file: /usr/lib/spark/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.driver.maxResultSize=1920m
Adding default property: spark.shuffle.service.enabled=true
Adding default property: spark.yarn.historyServer.address=cluster-cancerdetector-m:18080
Adding default property: spark.sql.parquet.cacheMetadata=false
Adding default property: spark.driver.memory=3840m
Adding default property: spark.dynamicAllocation.maxExecutors=10000
Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
Adding default property: spark.yarn.am.memoryOverhead=558
Adding default property: spark.yarn.am.memory=5586m
Adding default property: spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.master=yarn-cluster
Adding default property: spark.executor.memory=5586m
Adding default property: spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.dynamicAllocation.enabled=true
Adding default property: spark.executor.cores=2
Adding default property: spark.yarn.executor.memoryOverhead=558
Adding default property: spark.dynamicAllocation.minExecutors=1
Adding default property: spark.dynamicAllocation.initialExecutors=10000
Adding default property: spark.akka.frameSize=512
Parsed arguments:
master yarn-cluster
deployMode null
executorMemory 1500m
executorCores 1
totalExecutorCores null
propertiesFile /usr/lib/spark/conf/spark-defaults.conf
driverMemory 1500m
driverCores null
driverExtraClassPath null
driverExtraLibraryPath null
driverExtraJavaOptions -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
supervise false
queue null
numExecutors null
files null
pyFiles null
archives file:/home/cancerdetector/SparkBWA/build/./bwa.zip
mainClass SparkBWA
primaryResource file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
name SparkBWA
childArgs [-algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589]
jars null
packages null
packagesExclusions null
repositories null
verbose true

Spark properties used, including those specified through
--conf and those from the properties file /usr/lib/spark/conf/spark-defaults.conf:
spark.yarn.am.memoryOverhead -> 558
spark.driver.memory -> 1500m
spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
spark.executor.memory -> 5586m
spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
spark.eventLog.enabled -> true
spark.scheduler.minRegisteredResourcesRatio -> 0.0
spark.dynamicAllocation.maxExecutors -> 10000
spark.akka.frameSize -> 512
spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.sql.parquet.cacheMetadata -> false
spark.shuffle.service.enabled -> true
spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.dynamicAllocation.initialExecutors -> 10000
spark.dynamicAllocation.minExecutors -> 1
spark.yarn.executor.memoryOverhead -> 558
spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.yarn.am.memory -> 5586m
spark.driver.maxResultSize -> 1920m
spark.master -> yarn-cluster
spark.dynamicAllocation.enabled -> true
spark.executor.cores -> 2

Main class:
org.apache.spark.deploy.yarn.Client
Arguments:
--name
SparkBWA
--driver-memory
1500m
--executor-memory
1500m
--executor-cores
1
--archives
file:/home/cancerdetector/SparkBWA/build/./bwa.zip
--jar
file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
--class
SparkBWA
--arg
-algorithm
--arg
mem
--arg
-reads
--arg
paired
--arg
-index
--arg
/Data/HumanBase/hg38
--arg
-partitions
--arg
32
--arg
ERR000589_1.filt.fastq
--arg
ERR000589_2.filt.fastq
--arg
Output_ERR000589
System properties:
spark.yarn.am.memoryOverhead -> 558
spark.driver.memory -> 1500m
spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
spark.executor.memory -> 1500m
spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
spark.eventLog.enabled -> true
spark.scheduler.minRegisteredResourcesRatio -> 0.0
SPARK_SUBMIT -> true
spark.dynamicAllocation.maxExecutors -> 10000
spark.akka.frameSize -> 512
spark.sql.parquet.cacheMetadata -> false
spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.app.name -> SparkBWA
spark.shuffle.service.enabled -> true
spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.dynamicAllocation.initialExecutors -> 10000
spark.dynamicAllocation.minExecutors -> 1
spark.yarn.executor.memoryOverhead -> 558
spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.submit.deployMode -> cluster
spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.yarn.am.memory -> 5586m
spark.driver.maxResultSize -> 1920m
spark.master -> yarn-cluster
spark.dynamicAllocation.enabled -> true
spark.executor.cores -> 1
Classpath elements:
spark.yarn.am.memory is set but does not apply in cluster mode.
spark.yarn.am.memoryOverhead is set but does not apply in cluster mode.
16/07/22 16:21:11 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032
16/07/22 16:21:12 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1467990031555_0089
Exception in thread "main" org.apache.spark.SparkException: Application application_1467990031555_0089 finished with failed status
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Exception in thread "main" org.apache.spark.SparkException: Application application_1467990031555_0089 finished with failed status

What is the reason for this exception?
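When an application "finished with failed status", the client-side output shown here rarely contains the root cause; it lives in the ApplicationMaster and executor container logs. Assuming log aggregation is enabled on the cluster, the aggregated logs can be fetched with the standard `yarn logs` CLI once the application has finished. A minimal sketch (the application id is the one printed by YarnClientImpl above; substitute your own):

```shell
# Application id as printed in the submission output; replace with yours.
APP_ID=application_1467990031555_0089

# Standard YARN CLI invocation for fetching aggregated container logs.
# Run it on a node where HADOOP_CONF_DIR points at the cluster config.
CMD="yarn logs -applicationId ${APP_ID}"
echo "${CMD}"
```

The AM log usually ends with the actual exception (for example, a missing or mistyped input file) rather than the generic SparkException seen here.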

cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit --class SparkBWA --master yarn-cluster --deploy-mode cluster --conf spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives ./bwa.zip --verbose ./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589
Using properties file: /usr/lib/spark/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.driver.maxResultSize=1920m
Adding default property: spark.shuffle.service.enabled=true
Adding default property: spark.yarn.historyServer.address=cluster-cancerdetector-m:18080
Adding default property: spark.sql.parquet.cacheMetadata=false
Adding default property: spark.driver.memory=3840m
Adding default property: spark.dynamicAllocation.maxExecutors=10000
Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
Adding default property: spark.yarn.am.memoryOverhead=558
Adding default property: spark.yarn.am.memory=5586m
Adding default property: spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.master=yarn-client
Adding default property: spark.executor.memory=5586m
Adding default property: spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.dynamicAllocation.enabled=true
Adding default property: spark.executor.cores=2
Adding default property: spark.yarn.executor.memoryOverhead=558
Adding default property: spark.dynamicAllocation.minExecutors=1
Adding default property: spark.dynamicAllocation.initialExecutors=10000
Adding default property: spark.akka.frameSize=512
Parsed arguments:
  master                  yarn-cluster
  deployMode              cluster
  executorMemory          1500m
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /usr/lib/spark/conf/spark-defaults.conf
  driverMemory            1500m
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  supervise               false
  queue                   null
  numExecutors            null
  files                   null
  pyFiles                 null
  archives                file:/home/cancerdetector/SparkBWA/build/./bwa.zip
  mainClass               SparkBWA
  primaryResource         file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
  name                    SparkBWA
  childArgs               [-algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
 --conf and those from the properties file /usr/lib/spark/conf/spark-defaults.conf:
  spark.yarn.am.memoryOverhead -> 558
  spark.driver.memory -> 1500m
  spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
  spark.executor.memory -> 5586m
  spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
  spark.eventLog.enabled -> true
  spark.scheduler.minRegisteredResourcesRatio -> 0.0
  spark.dynamicAllocation.maxExecutors -> 10000
  spark.akka.frameSize -> 512
  spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.sql.parquet.cacheMetadata -> false
  spark.shuffle.service.enabled -> true
  spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.dynamicAllocation.initialExecutors -> 10000
  spark.dynamicAllocation.minExecutors -> 1
  spark.yarn.executor.memoryOverhead -> 558
  spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.yarn.am.memory -> 5586m
  spark.driver.maxResultSize -> 1920m
  spark.master -> yarn-client
  spark.dynamicAllocation.enabled -> true
  spark.executor.cores -> 2


Main class:
org.apache.spark.deploy.yarn.Client
Arguments:
--name
SparkBWA
--driver-memory
1500m
--executor-memory
1500m
--executor-cores
1
--archives
file:/home/cancerdetector/SparkBWA/build/./bwa.zip
--jar
file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
--class
SparkBWA
--arg
-algorithm
--arg
mem
--arg
-reads
--arg
paired
--arg
-index
--arg
/Data/HumanBase/hg38
--arg
-partitions
--arg
32
--arg
ERR000589_1.filt.fastq
--arg
ERR000589_2.filt.fastqhb
--arg
Output_ERR000589
System properties:
spark.yarn.am.memoryOverhead -> 558
spark.driver.memory -> 1500m
spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
spark.executor.memory -> 1500m
spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
spark.eventLog.enabled -> true
spark.scheduler.minRegisteredResourcesRatio -> 0.0
SPARK_SUBMIT -> true
spark.dynamicAllocation.maxExecutors -> 10000
spark.akka.frameSize -> 512
spark.sql.parquet.cacheMetadata -> false
spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.app.name -> SparkBWA
spark.shuffle.service.enabled -> true
spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.dynamicAllocation.initialExecutors -> 10000
spark.dynamicAllocation.minExecutors -> 1
spark.yarn.executor.memoryOverhead -> 558
spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.submit.deployMode -> cluster
spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.yarn.am.memory -> 5586m
spark.driver.maxResultSize -> 1920m
spark.master -> yarn-cluster
spark.dynamicAllocation.enabled -> true
spark.executor.cores -> 1
Classpath elements:
spark.yarn.am.memory is set but does not apply in cluster mode.
spark.yarn.am.memoryOverhead is set but does not apply in cluster mode.
16/07/31 01:12:39 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032
16/07/31 01:12:40 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1467990031555_0106
Exception in thread "main" org.apache.spark.SparkException: Application application_1467990031555_0106 finished with failed status
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

When I tried to check the AM and executor logs, the command didn't work (I have set yarn.log-aggregation-enable to true), so I tried to manually access the NM's log directory to see the detailed application logs. Here are the application logs from the NM's log file:

2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,387 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742335_1511{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,391 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/SparkBWA.jar is closed by DFSClient_NONMAPREDUCE_-762268348_1
2016-07-31 01:12:40,419 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip
2016-07-31 01:12:40,445 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,446 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742336_1512{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,448 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/bwa.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1
2016-07-31 01:12:40,495 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip
2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,506 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742337_1513{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 0
2016-07-31 01:12:40,509 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/cancerdetector/.sparkStaging/application_1467990031555_0106/__spark_conf__2552000168715758347.zip is closed by DFSClient_NONMAPREDUCE_-762268348_1
2016-07-31 01:12:44,720 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_1.inprogress
2016-07-31 01:12:44,877 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_1.inprogress for DFSClient_NONMAPREDUCE_-1111833453_14
2016-07-31 01:12:45,373 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:45,375 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742338_1514{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:45,379 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_1.inprogress is closed by DFSClient_NONMAPREDUCE_-1111833453_14
2016-07-31 01:12:45,843 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b7989393-f278-477c-8e83-ff5da9079e8a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:12:49,914 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} for /user/spark/eventlog/application_1467990031555_0106_2.inprogress
2016-07-31 01:12:50,100 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* fsync: /user/spark/eventlog/application_1467990031555_0106_2.inprogress for DFSClient_NONMAPREDUCE_378341726_14
2016-07-31 01:12:50,737 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.4:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:50,738 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 10.132.0.3:50010 is added to blk_1073742339_1515{UCState=UNDER_CONSTRUCTION, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW], ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]} size 231
2016-07-31 01:12:50,742 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/application_1467990031555_0106_2.inprogress is closed by DFSClient_NONMAPREDUCE_378341726_14
2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742335_1511 10.132.0.3:50010 10.132.0.4:50010 
2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742337_1513 10.132.0.3:50010 10.132.0.4:50010 
2016-07-31 01:12:50,892 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742336_1512 10.132.0.3:50010 10.132.0.4:50010 
2016-07-31 01:12:51,804 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.3:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511]
2016-07-31 01:12:54,804 INFO BlockStateChange: BLOCK* BlockManager: ask 10.132.0.4:50010 to delete [blk_1073742336_1512, blk_1073742337_1513, blk_1073742335_1511]
2016-07-31 01:12:55,868 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.46380a1f-b5fd-4924-96aa-f59dcae0cbec is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:05,882 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 244 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 234 SyncTimes(ms): 221 
2016-07-31 01:13:05,885 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.7273ee28-eb1c-4fe2-98d2-c5a20ebe4ffa is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:15,892 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0f640743-d06c-4583-ac95-9d520dc8f301 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:25,902 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.bc63864c-0267-47b5-bcc1-96ba81d6c9a5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:35,910 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93557793-2ba2-47e8-b54c-234c861b6e6c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:45,918 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0fdf083c-3c53-4051-af16-d579f700962e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:13:55,927 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.834632f1-d9c6-4e14-9354-72f8c18f66d0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:05,933 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 262 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 252 SyncTimes(ms): 236 
2016-07-31 01:14:05,936 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.d06ef3b4-873f-464d-9cd0-e360da48e194 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:15,944 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.32ccba74-5f6c-45fc-b5db-26efb1b840e2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:25,952 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.fef919cd-9952-4af8-a49a-e6dd2aa032f1 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:35,961 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.77ffdf36-8e42-43d8-9c1f-df6f3d11700d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:45,968 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c31cfcbb-b47c-4169-ab0f-7ae87d4f815d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:14:55,976 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6429570d-fb0a-4117-bb12-127a67e0a0b7 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:05,981 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 280 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 270 SyncTimes(ms): 253 
2016-07-31 01:15:05,984 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8030b18d-05f2-4520-b5c4-2fe42338b92b is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:15,991 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f608a0f4-e730-43cd-a19d-da57caac346e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:25,999 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9d5a1f80-2f2a-43a7-84f1-b26a8c90a98f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:36,007 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.279e96fc-180c-47a5-a3ba-cfda581eedad is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:46,015 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a85bbf52-61f4-4899-98b1-23615a549774 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:15:56,023 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.80613e8e-7015-4aeb-81df-49884bd0eb5e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:06,028 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 298 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 288 SyncTimes(ms): 267 
2016-07-31 01:16:06,031 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.2be7fc48-bd1c-4042-88e4-239b1c630458 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:16,038 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.40fc68a6-f003-4e35-b4b3-50bd3c4a0c82 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:26,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.97e7d15c-4d28-4089-b4a5-9f0935a72589 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:36,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.84d8e78d-90fd-419f-9000-fa04ab56955e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:46,059 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6691cc3e-6969-4a8f-938f-272d1c96701d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:16:56,066 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.077143b6-281a-468c-8b2c-bcb6cd3bc27a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:06,070 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 316 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 306 SyncTimes(ms): 284 
2016-07-31 01:17:06,073 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.817d1886-aea2-450a-a586-08677dc18d60 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:16,080 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.abd46886-1359-4c5e-8276-ea4f2969411f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:26,087 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.24625260-59be-4a9b-b47b-b8d5b76cb789 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:36,096 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.11630782-e50e-4260-a0da-99845bc3f1db is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:46,103 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.16cdd027-f1b8-4cbf-a30c-2f1712f4abb5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:17:56,111 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.93fb2e86-2fec-4069-b73b-632750fda603 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:06,116 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 334 Total time for transactions(ms): 6 Number of transactions batched in Syncs: 0 Number of syncs: 324 SyncTimes(ms): 300 
2016-07-31 01:18:06,119 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b19fddda-ea90-49ab-b44d-434cce28cb67 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:16,127 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.d81ab189-bde5-4878-b82b-903983466f86 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:26,135 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.e5b51632-f714-4814-b896-59bba137b42d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:36,144 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.39791121-9399-4a22-a50c-90eaddf31ffb is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:46,153 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.861c269b-5466-4855-84fd-587ed3306012 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:18:56,162 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8a9ff721-bd56-4bea-b399-31bfaabe8c7c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:06,168 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 352 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 342 SyncTimes(ms): 313 
2016-07-31 01:19:06,170 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.492bf987-4991-4533-80e2-678efa843cb9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:16,178 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.9294c0c6-43db-4f6d-9d31-f493143b6baf is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:26,187 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.341dd131-c14c-4147-bcbc-849d1d6bba8c is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:36,196 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.56f92e8e-ef93-4279-a57f-472dd5d8f399 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:46,204 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5ddcda82-b501-4043-bb54-a29902d9d234 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:19:56,212 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.31e7517b-2ef3-458c-9979-324d7a96302f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:06,218 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 370 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 360 SyncTimes(ms): 329 
2016-07-31 01:20:06,220 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5251f5df-0957-4008-b664-8d82eaa9789e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:16,229 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.3320b948-2478-4807-9ab3-d23e4945765e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:26,237 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0928c940-e57d-4a34-a7dc-53dade7ff909 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:36,246 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6240fcdf-696e-49c4-a883-3eda5ab89b4d is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:46,254 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.5622850e-b7b0-458a-9ffa-89e134fa3fda is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:20:56,262 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.faa076e8-490c-489f-8183-778325e0b144 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:06,268 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 388 Total time for transactions(ms): 7 Number of transactions batched in Syncs: 0 Number of syncs: 378 SyncTimes(ms): 347 
2016-07-31 01:21:06,270 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.18b2464e-9d14-4bae-95d9-f261edbdee1b is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:16,278 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6c53dd52-3996-4541-b368-e8406f99f68e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:26,287 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8b5ac93c-b268-432d-9236-48c004c33743 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:36,303 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.22a03e6f-4531-466c-af28-e0797d6b803e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:46,311 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.1df0d173-6432-481f-af97-6632660700b0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:21:56,319 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.4095d5a1-ba2d-4966-ad13-99843c51ee91 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:06,325 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 406 Total time for transactions(ms): 8 Number of transactions batched in Syncs: 0 Number of syncs: 396 SyncTimes(ms): 362 
2016-07-31 01:22:06,328 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.f35e73f9-842d-4fc2-96b3-9b70df17e7e3 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:16,337 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.fdfa32ef-5c3c-48a3-9d15-0edc1b9d5072 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:26,345 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0315d9f7-ea5c-4a58-ad68-3f942d97676a is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:36,353 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.eecbddee-6bfb-44b6-97ef-1b5eece8a982 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:46,362 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.2c363a5b-bd43-47c5-9050-b15f1f6ade77 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-31 01:22:56,449 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.6f4e8030-39a7-4fc8-b551-5b1d88e0885e is closed by DFSClient_NONMAPREDUCE_-1615501432_1
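On the log-retrieval problem above: `yarn logs -applicationId <id>` only returns output for applications that finished after aggregation was active, and NodeManagers must be restarted after enabling it. A minimal sketch of the relevant `yarn-site.xml` properties (the property names are the standard YARN ones; the remote directory value is an example, not this cluster's actual setting):

```xml
<!-- Sketch: enable YARN log aggregation. The remote-app-log-dir value
     below is an example path, not necessarily this cluster's setting. -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
</property>
```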

mvn.128.no/maven2 unreachable

Hi, I'm at the mvn package step and it looks like the repository mvn.128.no/maven2 is unreachable. This is the error:

[ERROR] Failed to execute goal on project SparkBWA: Could not resolve dependencies for project com.github.sparkbwa:SparkBWA:jar:0.2: Failed to collect dependencies at org.apache.spark:spark-core_2.11:jar:2.3.3 -> net.java.dev.jets3t:jets3t:jar:0.9.4 -> commons-codec:commons-codec:jar:1.15-SNAPSHOT: Failed to read artifact descriptor for commons-codec:commons-codec:jar:1.15-SNAPSHOT: Could not transfer artifact commons-codec:commons-codec:pom:1.15-SNAPSHOT from/to 128 (https://mvn.128.no/maven2): Network is unreachable (connect failed) -> [Help 1]

This is the output of mvn --version:

Apache Maven 3.6.0 (97c98ec64a1fdfee7767ce5ffb20918da4f719f3; 2018-10-24T20:41:47+02:00)
Maven home: /opt/cesga/maven/apache-maven-3.6.0
Java version: 1.8.0_191, vendor: Oracle Corporation, runtime: /usr/java/jdk1.8.0_191-amd64/jre
Default locale: en_US, platform encoding: ANSI_X3.4-1968
OS name: "linux", version: "3.10.0-862.9.1.el7.x86_64", arch: "amd64", family: "unix"

Any ideas on how to fix it? Thank you.
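The 1.15-SNAPSHOT of commons-codec in that error is suspicious: jets3t normally pulls in a released commons-codec, so a stale mirror or local-repository entry is likely injecting the SNAPSHOT. As a hedged sketch (the exact version to pin is an assumption), forcing a released commons-codec via dependencyManagement in SparkBWA's pom.xml keeps Maven from trying to resolve the SNAPSHOT at all:

```xml
<!-- Hypothetical override for SparkBWA's pom.xml: pin a released
     commons-codec so the 1.15-SNAPSHOT dragged in transitively via
     jets3t is never looked up on the unreachable mirror. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>commons-codec</groupId>
      <artifactId>commons-codec</artifactId>
      <version>1.15</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

It is also worth checking ~/.m2/settings.xml for a mirror entry pointing at mvn.128.no, and deleting any stale ~/.m2/repository/commons-codec/commons-codec/1.15-SNAPSHOT directory before re-running mvn package -U.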

Need advice running SparkBWA with a big HG data set

When running this SparkBWA application with two paired files of about 10K records each, I am facing the issue below, although it ran successfully with 100 records per file. It is not purely a SparkBWA issue, but it would help if you could point out any configuration setting that makes this run. I have enough memory to run 300MB files.
16/06/30 12:00:40 INFO spark.MapOutputTrackerWorker: Got the output locations
16/06/30 12:00:40 INFO storage.ShuffleBlockFetcherIterator: Getting 0 non-empty blocks out of 1 blocks
16/06/30 12:00:40 INFO storage.ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
16/06/30 12:00:40 INFO storage.MemoryStore: Block rdd_9_2 stored as values in memory (estimated size 16.0 B, free 12.2 KB)
16/06/30 12:00:40 INFO BwaInterpreter: JMAbuin:: Writing file: /home/hadoop/hadoop_tempdir/application_1467200285413_0004-RDD2_1
16/06/30 12:00:40 INFO BwaInterpreter: JMAbuin:: Writing file: /home/hadoop/hadoop_tempdir/application_1467200285413_0004-RDD2_2
[Java_BwaJni_bwa_1jni] Arg 0 'bwa'
[Java_BwaJni_bwa_1jni] Algorithm found 1 'mem'
[Java_BwaJni_bwa_1jni] Arg 1 'mem'
[Java_BwaJni_bwa_1jni] Filename parameter -f found 2 '-f'
[Java_BwaJni_bwa_1jni] Arg 2 '-f'
[Java_BwaJni_bwa_1jni] Filename found 3 '/home/hadoop/hadoop_tempdir/SparkBWA_1_130711_FLOWCEL_10Kreads_1.fastq-32-NoSort-application_1467200285413_0004-2.sam'
[Java_BwaJni_bwa_1jni] Arg 3 '/home/hadoop/hadoop_tempdir/SparkBWA_1_130711_FLOWCEL_10Kreads_1.fastq-32-NoSort-application_1467200285413_0004-2.sam'
[Java_BwaJni_bwa_1jni] Arg 4 '/Data/HumanBase/HG'
[Java_BwaJni_bwa_1jni] Arg 5 '/home/hadoop/hadoop_tempdir/application_1467200285413_0004-RDD2_1'
[Java_BwaJni_bwa_1jni] Arg 6 '/home/hadoop/hadoop_tempdir/application_1467200285413_0004-RDD2_2'
[Java_BwaJni_bwa_1jni] option[0]: bwa.
[Java_BwaJni_bwa_1jni] option[1]: mem.
[Java_BwaJni_bwa_1jni] option[2]: /Data/HumanBase/HG.
[Java_BwaJni_bwa_1jni] option[3]: /home/hadoop/hadoop_tempdir/application_1467200285413_0004-RDD2_1.
[Java_BwaJni_bwa_1jni] option[4]: /home/hadoop/hadoop_tempdir/application_1467200285413_0004-RDD2_2.
[M::bwa_idx_load_from_disk] read 0 ALT contigs
[gzclose] buffer error

16/06/30 11:59:51 INFO cluster.YarnClientSchedulerBackend: Disabling executor 1.
16/06/30 11:59:51 INFO scheduler.DAGScheduler: Executor lost: 1 (epoch 3)
16/06/30 11:59:51 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
16/06/30 11:59:51 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, client19.example.com, 54545)
16/06/30 11:59:51 INFO storage.BlockManagerMaster: Removed 1 successfully in removeExecutor
16/06/30 11:59:51 INFO scheduler.ShuffleMapStage: ShuffleMapStage 2 is now unavailable on executor 1 (0/1, false)
16/06/30 11:59:51 INFO scheduler.ShuffleMapStage: ShuffleMapStage 0 is now unavailable on executor 1 (0/1, false)
16/06/30 11:59:51 INFO scheduler.ShuffleMapStage: ShuffleMapStage 1 is now unavailable on executor 1 (0/1, false)
16/06/30 11:59:51 INFO cluster.YarnClientSchedulerBackend: Disabling executor 6.
16/06/30 11:59:51 INFO scheduler.DAGScheduler: Executor lost: 6 (epoch 6)
16/06/30 11:59:51 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 6 from BlockManagerMaster.
16/06/30 11:59:51 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(6, client17.example.com, 36481)
16/06/30 11:59:51 INFO storage.BlockManagerMaster: Removed 6 successfully in removeExecutor
16/06/30 11:59:52 INFO cluster.YarnClientSchedulerBackend: Disabling executor 2.
16/06/30 11:59:52 INFO scheduler.DAGScheduler: Executor lost: 2 (epoch 9)
16/06/30 11:59:52 INFO storage.BlockManagerMasterEndpoint: Trying to remove executor 2 from BlockManagerMaster.
16/06/30 11:59:52 INFO storage.BlockManagerMasterEndpoint: Removing block manager BlockManagerId(2, client16.example.com, 59865)
16/06/30 11:59:52 INFO storage.BlockManagerMaster: Removed 2 successfully in removeExecutor
16/06/30 11:59:52 ERROR cluster.YarnScheduler: Lost executor 6 on client17.example.com: Container marked as failed: container_1467200285413_0004_01_000007 on host: client17.example.com. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1467200285413_0004_01_000007
Exit code: 1

Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Container exited with a non-zero exit code 1
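The `[gzclose] buffer error` from the native BWA code usually indicates a truncated or malformed FASTQ chunk. As a hedged, SparkBWA-independent sanity check, confirming that both mate files consist of complete 4-line records with matching record counts can rule out truncated input before tuning Spark memory settings:

```python
def count_fastq_records(path):
    """Count complete 4-line FASTQ records; raise if the file is truncated."""
    with open(path) as fh:
        lines = sum(1 for _ in fh)
    if lines % 4 != 0:
        raise ValueError(f"{path}: {lines} lines is not a multiple of 4 (truncated record?)")
    return lines // 4

def check_paired(path1, path2):
    """Verify both mates have the same number of complete records."""
    n1, n2 = count_fastq_records(path1), count_fastq_records(path2)
    if n1 != n2:
        raise ValueError(f"mate files differ: {n1} vs {n2} records")
    return n1
```

If the counts disagree (or a file ends mid-record), the failure is in the input data rather than in Spark's executor configuration.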

Can't run SparkBWA on Amazon EMR Yarn cluster

Hi,

Thanks for this repo.

I'm trying to run SparkBWA on an Amazon EMR YARN cluster, but I get many errors.

I used yarn as the master instead of yarn-cluster, and I also passed --deploy-mode cluster.

Then, I got the following error:

[hadoop@ip-172-31-14-100 ~]$ spark-submit --class com.github.sparkbwa.SparkBWA --master yarn --deploy-mode cluster --driver-memory 1500m --executor-memory 10g --executor-cores 1 --verbose --num-executors 16 sparkbwa-1.0.jar -m -r -p --index /Data/HumanBase/hg38 -n 16 -w "-R @RG\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589" ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589
Using properties file: /usr/lib/spark/conf/spark-defaults.conf
Adding default property: spark.sql.warehouse.dir=*********(redacted)
Adding default property: spark.executor.extraJavaOptions=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p'
Adding default property: spark.history.fs.logDirectory=hdfs:///var/log/spark/apps
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.shuffle.service.enabled=true
Adding default property: spark.driver.extraLibraryPath=/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native
Adding default property: spark.yarn.historyServer.address=ip-172-31-14-100.eu-west-2.compute.internal:18080
Adding default property: spark.stage.attempt.ignoreOnDecommissionFetchFailure=true
Adding default property: spark.driver.memory=11171M
Adding default property: spark.executor.instances=16
Adding default property: spark.default.parallelism=256
Adding default property: spark.resourceManager.cleanupExpiredHost=true
Adding default property: spark.yarn.appMasterEnv.SPARK_PUBLIC_DNS=$(hostname -f)
Adding default property: spark.driver.extraJavaOptions=-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p'
Adding default property: spark.master=yarn
Adding default property: spark.blacklist.decommissioning.timeout=1h
Adding default property: spark.executor.extraLibraryPath=/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native
Adding default property: spark.sql.hive.metastore.sharedPrefixes=com.amazonaws.services.dynamodbv2
Adding default property: spark.executor.memory=10356M
Adding default property: spark.driver.extraClassPath=/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar
Adding default property: spark.eventLog.dir=hdfs:///var/log/spark/apps
Adding default property: spark.dynamicAllocation.enabled=true
Adding default property: spark.executor.extraClassPath=/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar
Adding default property: spark.executor.cores=8
Adding default property: spark.history.ui.port=18080
Adding default property: spark.blacklist.decommissioning.enabled=true
Adding default property: spark.decommissioning.timeout.threshold=20
Adding default property: spark.hadoop.yarn.timeline-service.enabled=false
Parsed arguments:
  master                  yarn
  deployMode              cluster
  executorMemory          10g
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /usr/lib/spark/conf/spark-defaults.conf
  driverMemory            1500m
  driverCores             null
  driverExtraClassPath    /usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar
  driverExtraLibraryPath  /usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native
  driverExtraJavaOptions  -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p'
  supervise               false
  queue                   null
  numExecutors            16
  files                   null
  pyFiles                 null
  archives                null
  mainClass               com.github.sparkbwa.SparkBWA
  primaryResource         file:/home/hadoop/sparkbwa-1.0.jar
  name                    com.github.sparkbwa.SparkBWA
  childArgs               [-m -r -p --index /Data/HumanBase/hg38 -n 16 -w -R @RG\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589 ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
 --conf and those from the properties file /usr/lib/spark/conf/spark-defaults.conf:
  (spark.blacklist.decommissioning.timeout,1h)
  (spark.executor.extraLibraryPath,/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native)
  (spark.default.parallelism,256)
  (spark.blacklist.decommissioning.enabled,true)
  (spark.hadoop.yarn.timeline-service.enabled,false)
  (spark.driver.memory,1500m)
  (spark.executor.memory,10356M)
  (spark.executor.instances,16)
  (spark.sql.warehouse.dir,*********(redacted))
  (spark.driver.extraLibraryPath,/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native)
  (spark.yarn.historyServer.address,ip-172-31-14-100.eu-west-2.compute.internal:18080)
  (spark.eventLog.enabled,true)
  (spark.stage.attempt.ignoreOnDecommissionFetchFailure,true)
  (spark.history.ui.port,18080)
  (spark.yarn.appMasterEnv.SPARK_PUBLIC_DNS,$(hostname -f))
  (spark.executor.extraJavaOptions,-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p')
  (spark.resourceManager.cleanupExpiredHost,true)
  (spark.shuffle.service.enabled,true)
  (spark.history.fs.logDirectory,hdfs:///var/log/spark/apps)
  (spark.driver.extraJavaOptions,-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p')
  (spark.executor.extraClassPath,/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar)
  (spark.sql.hive.metastore.sharedPrefixes,com.amazonaws.services.dynamodbv2)
  (spark.eventLog.dir,hdfs:///var/log/spark/apps)
  (spark.master,yarn)
  (spark.dynamicAllocation.enabled,true)
  (spark.executor.cores,8)
  (spark.decommissioning.timeout.threshold,20)
  (spark.driver.extraClassPath,/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar)

    
Main class:
org.apache.spark.deploy.yarn.Client
Arguments:
--jar
file:/home/hadoop/sparkbwa-1.0.jar
--class
com.github.sparkbwa.SparkBWA
--arg
-m
--arg
-r
--arg
-p
--arg
--index
--arg
/Data/HumanBase/hg38
--arg
-n
--arg
16
--arg
-w
--arg
-R @RG\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589
--arg
ERR000589_1.filt.fastq
--arg
ERR000589_2.filt.fastq
--arg
Output_ERR000589
System properties:
(spark.blacklist.decommissioning.timeout,1h)
(spark.executor.extraLibraryPath,/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native)
(spark.default.parallelism,256)
(spark.blacklist.decommissioning.enabled,true)
(spark.hadoop.yarn.timeline-service.enabled,false)
(spark.driver.memory,1500m)
(spark.executor.memory,10g)
(spark.executor.instances,16)
(spark.driver.extraLibraryPath,/usr/lib/hadoop/lib/native:/usr/lib/hadoop-lzo/lib/native)
(spark.sql.warehouse.dir,*********(redacted))
(spark.yarn.historyServer.address,ip-172-31-14-100.eu-west-2.compute.internal:18080)
(spark.eventLog.enabled,true)
(spark.stage.attempt.ignoreOnDecommissionFetchFailure,true)
(spark.history.ui.port,18080)
(spark.yarn.appMasterEnv.SPARK_PUBLIC_DNS,$(hostname -f))
(SPARK_SUBMIT,true)
(spark.executor.extraJavaOptions,-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p')
(spark.app.name,com.github.sparkbwa.SparkBWA)
(spark.resourceManager.cleanupExpiredHost,true)
(spark.history.fs.logDirectory,hdfs:///var/log/spark/apps)
(spark.shuffle.service.enabled,true)
(spark.driver.extraJavaOptions,-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70 -XX:+CMSClassUnloadingEnabled -XX:OnOutOfMemoryError='kill -9 %p')
(spark.submit.deployMode,cluster)
(spark.executor.extraClassPath,/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar)
(spark.eventLog.dir,hdfs:///var/log/spark/apps)
(spark.sql.hive.metastore.sharedPrefixes,com.amazonaws.services.dynamodbv2)
(spark.master,yarn)
(spark.dynamicAllocation.enabled,true)
(spark.decommissioning.timeout.threshold,20)
(spark.executor.cores,1)
(spark.driver.extraClassPath,/usr/lib/hadoop-lzo/lib/*:/usr/lib/hadoop/hadoop-aws.jar:/usr/share/aws/aws-java-sdk/*:/usr/share/aws/emr/emrfs/conf:/usr/share/aws/emr/emrfs/lib/*:/usr/share/aws/emr/emrfs/auxlib/*:/usr/share/aws/emr/security/conf:/usr/share/aws/emr/security/lib/*:/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client.jar:/usr/share/java/Hive-JSON-Serde/hive-openx-serde.jar:/usr/share/aws/sagemaker-spark-sdk/lib/sagemaker-spark-sdk.jar)
Classpath elements:
file:/home/hadoop/sparkbwa-1.0.jar


18/01/20 15:53:12 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/01/20 15:53:13 INFO RMProxy: Connecting to ResourceManager at ip-172-31-14-100.eu-west-2.compute.internal/172.31.14.100:8032
18/01/20 15:53:13 INFO Client: Requesting a new application from cluster with 16 NodeManagers
18/01/20 15:53:13 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (12288 MB per container)
18/01/20 15:53:13 INFO Client: Will allocate AM container, with 1884 MB memory including 384 MB overhead
18/01/20 15:53:13 INFO Client: Setting up container launch context for our AM
18/01/20 15:53:13 INFO Client: Setting up the launch environment for our AM container
18/01/20 15:53:13 INFO Client: Preparing resources for our AM container
18/01/20 15:53:14 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
18/01/20 15:53:16 INFO Client: Uploading resource file:/mnt/tmp/spark-8adea679-22d7-4945-9708-d61ef96b2c2a/__spark_libs__3181673287761365885.zip -> hdfs://ip-172-31-14-100.eu-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1516463115359_0001/__spark_libs__3181673287761365885.zip
18/01/20 15:53:17 INFO Client: Uploading resource file:/home/hadoop/sparkbwa-1.0.jar -> hdfs://ip-172-31-14-100.eu-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1516463115359_0001/sparkbwa-1.0.jar
18/01/20 15:53:17 INFO Client: Uploading resource file:/mnt/tmp/spark-8adea679-22d7-4945-9708-d61ef96b2c2a/__spark_conf__4991143839440201874.zip -> hdfs://ip-172-31-14-100.eu-west-2.compute.internal:8020/user/hadoop/.sparkStaging/application_1516463115359_0001/__spark_conf__.zip
18/01/20 15:53:17 INFO SecurityManager: Changing view acls to: hadoop
18/01/20 15:53:17 INFO SecurityManager: Changing modify acls to: hadoop
18/01/20 15:53:17 INFO SecurityManager: Changing view acls groups to: 
18/01/20 15:53:17 INFO SecurityManager: Changing modify acls groups to: 
18/01/20 15:53:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(hadoop); groups with view permissions: Set(); users  with modify permissions: Set(hadoop); groups with modify permissions: Set()
18/01/20 15:53:17 INFO Client: Submitting application application_1516463115359_0001 to ResourceManager
18/01/20 15:53:18 INFO YarnClientImpl: Submitted application application_1516463115359_0001
18/01/20 15:53:19 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:19 INFO Client: 
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1516463597765
	 final status: UNDEFINED
	 tracking URL: http://ip-172-31-14-100.eu-west-2.compute.internal:20888/proxy/application_1516463115359_0001/
	 user: hadoop
18/01/20 15:53:20 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:21 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:22 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:23 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:24 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:25 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:26 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:27 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:28 INFO Client: Application report for application_1516463115359_0001 (state: ACCEPTED)
18/01/20 15:53:29 INFO Client: Application report for application_1516463115359_0001 (state: FAILED)
18/01/20 15:53:29 INFO Client: 
	 client token: N/A
	 diagnostics: Application application_1516463115359_0001 failed 2 times due to AM Container for appattempt_1516463115359_0001_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://ip-172-31-14-100.eu-west-2.compute.internal:8088/cluster/app/application_1516463115359_0001Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1516463115359_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
	at org.apache.hadoop.util.Shell.run(Shell.java:479)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1516463597765
	 final status: FAILED
	 tracking URL: http://ip-172-31-14-100.eu-west-2.compute.internal:8088/cluster/app/application_1516463115359_0001
	 user: hadoop
Exception in thread "main" org.apache.spark.SparkException: Application application_1516463115359_0001 finished with failed status
	at org.apache.spark.deploy.yarn.Client.run(Client.scala:1122)
	at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1168)
	at org.apache.spark.deploy.yarn.Client.main(Client.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
18/01/20 15:53:29 INFO ShutdownHookManager: Shutdown hook called
18/01/20 15:53:29 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-8adea679-22d7-4945-9708-d61ef96b2c2a
[hadoop@ip-172-31-14-100 ~]$ 
Broadcast message from root@ip-172-31-14-100
	(unknown) at 15:54 ...

The system is going down for power off NOW!
Connection to ec2-35-177-163-135.eu-west-2.compute.amazonaws.com closed by remote host.
Connection to ec2-35-177-163-135.eu-west-2.compute.amazonaws.com closed.

Any help will be appreciated.

Thank you 🙏

ERROR: java.lang.ClassNotFoundException: com.github.sparkbwa.FASTQRecordGrouper

Hi, I see this rather often. I am pasting example logs for the failed executors and the respective tasks. The input data is not that big: a few GB for each file in the paired read. It does not matter whether the input is paired or not. I have successfully done this test run many times before. Thanks. Best regards.

The only executor left running is stuck on this:

[M::mem_process_seqs] Processed 196080 reads in 43.536 CPU sec, 42.899 real sec
[M::process] read 196080 sequences (10000080 bp)...
[M::mem_pestat] # candidate unique pairs for (FF, FR, RF, RR): (5, 67822, 5, 2)
[M::mem_pestat] skip orientation FF as there are not enough pairs
[M::mem_pestat] analyzing insert size distribution for orientation FR...
[M::mem_pestat] (25, 50, 75) percentile: (214, 229, 245)
[M::mem_pestat] low and high boundaries for computing mean and std.dev: (152, 307)
[M::mem_pestat] mean and std.dev: (229.69, 20.09)
[M::mem_pestat] low and high boundaries for proper pairs: (121, 338)
[M::mem_pestat] skip orientation RF as there are not enough pairs
[M::mem_pestat] skip orientation RR as there are not enough pairs
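A `java.lang.ClassNotFoundException` on an executor usually means the submitted jar never reached the executor classpath. As a first, hedged check (plain Python, not part of SparkBWA), verify that the class is actually packaged inside the jar being submitted:

```python
import zipfile

def jar_contains_class(jar_path, class_name):
    """Return True if the fully qualified class is packaged in the jar."""
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()
```

If `jar_contains_class("SparkBWA-0.2.jar", "com.github.sparkbwa.FASTQRecordGrouper")` is true, the next suspect is distribution: in yarn-cluster mode the application jar should be shipped automatically, but passing it explicitly via --jars is a common workaround.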

Execution command in terminal

spark-submit --conf spark.ui.enabled=true --class com.github.sparkbwa.SparkBWA --master yarn --deploy-mode cluster --driver-memory 40G --executor-memory 20G --executor-cores 4 --num-executors 100 --verbose $EBROOTSPARKBWA/SparkBWA-0.2.jar -m -r -p --index ${Index_File} -n 1 data/ERR000589_1.filt.fastq data/ERR000589_2.filt.fastq data/Output_ERR000589_sam

Spark jobs from web UI

Spark Jobs

Total Uptime: 33 min
Scheduling Mode: FIFO
Active Jobs: 1
Completed Jobs: 2

Event Timeline
Active Jobs (1)
Job Id Description Submitted Duration Stages: Succeeded/Total Tasks (for all stages): Succeeded/Total
2 collect at BwaInterpreter.java:305 2017/08/02 10:31:54 32 min 3/4
42/43 (9 failed)
Completed Jobs (2)
Job Id Description Submitted Duration Stages: Succeeded/Total Tasks (for all stages): Succeeded/Total
1 zipWithIndex at BwaInterpreter.java:152 2017/08/02 10:31:52 2 s 1/1
13/13
0 zipWithIndex at BwaInterpreter.java:152 2017/08/02 10:31:48 4 s 1/1
13/13

Stages from web UI - basically stuck on the last one and running forever

Stages for All Jobs

Active Stages: 1
Completed Stages: 5

Active Stages (1)
Stage Id Description Submitted Duration Tasks: Succeeded/Total Input Output Shuffle Read Shuffle Write
5
collect at BwaInterpreter.java:305

org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:47)
com.github.sparkbwa.BwaInterpreter.MapPairedBwa(BwaInterpreter.java:305)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:334)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:33:27 	33 min	

0/1
2.2 GB
Completed Stages (5)
Stage Id Description Submitted Duration Tasks: Succeeded/Total Input Output Shuffle Read Shuffle Write
4
repartition at BwaInterpreter.java:281

org.apache.spark.api.java.JavaPairRDD.repartition(JavaPairRDD.scala:120)
com.github.sparkbwa.BwaInterpreter.handlePairedReadsSorting(BwaInterpreter.java:281)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:333)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:32:37 	50 s	

14/14 (5 failed)
2.7 GB 2.2 GB
3
mapToPair at BwaInterpreter.java:152

org.apache.spark.api.java.AbstractJavaRDDLike.mapToPair(JavaRDDLike.scala:47)
com.github.sparkbwa.BwaInterpreter.loadFastq(BwaInterpreter.java:152)
com.github.sparkbwa.BwaInterpreter.handlePairedReadsSorting(BwaInterpreter.java:239)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:333)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:31:54 	43 s	

14/14 (3 failed)
1722.0 MB 1392.7 MB
2
mapToPair at BwaInterpreter.java:152

org.apache.spark.api.java.AbstractJavaRDDLike.mapToPair(JavaRDDLike.scala:47)
com.github.sparkbwa.BwaInterpreter.loadFastq(BwaInterpreter.java:152)
com.github.sparkbwa.BwaInterpreter.handlePairedReadsSorting(BwaInterpreter.java:238)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:333)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:31:54 	14 s	

14/14 (1 failed)
1722.0 MB 1350.3 MB
1
zipWithIndex at BwaInterpreter.java:152

org.apache.spark.api.java.AbstractJavaRDDLike.zipWithIndex(JavaRDDLike.scala:47)
com.github.sparkbwa.BwaInterpreter.loadFastq(BwaInterpreter.java:152)
com.github.sparkbwa.BwaInterpreter.handlePairedReadsSorting(BwaInterpreter.java:239)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:333)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:31:52 	2 s	

13/13
1664.8 MB
0
zipWithIndex at BwaInterpreter.java:152

org.apache.spark.api.java.AbstractJavaRDDLike.zipWithIndex(JavaRDDLike.scala:47)
com.github.sparkbwa.BwaInterpreter.loadFastq(BwaInterpreter.java:152)
com.github.sparkbwa.BwaInterpreter.handlePairedReadsSorting(BwaInterpreter.java:238)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:333)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:31:48 	4 s	

13/13
1664.8 MB

Tasks from web UI

Active Stages (1)
Stage Id Description Submitted Duration Tasks: Succeeded/Total Input Output Shuffle Read Shuffle Write
5
(kill)
collect at BwaInterpreter.java:305 +details

org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:47)
com.github.sparkbwa.BwaInterpreter.MapPairedBwa(BwaInterpreter.java:305)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:334)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:33:27 	5.5 min	

0/1
2.2 GB
Completed Stages (3)
Stage Id Description Submitted Duration Tasks: Succeeded/Total Input Output Shuffle Read Shuffle Write
4
repartition at BwaInterpreter.java:281 +details

org.apache.spark.api.java.JavaPairRDD.repartition(JavaPairRDD.scala:120)
com.github.sparkbwa.BwaInterpreter.handlePairedReadsSorting(BwaInterpreter.java:281)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:333)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:32:37 	50 s	

14/14 (5 failed)
2.7 GB 2.2 GB
3
mapToPair at BwaInterpreter.java:152 +details

org.apache.spark.api.java.AbstractJavaRDDLike.mapToPair(JavaRDDLike.scala:47)
com.github.sparkbwa.BwaInterpreter.loadFastq(BwaInterpreter.java:152)
com.github.sparkbwa.BwaInterpreter.handlePairedReadsSorting(BwaInterpreter.java:239)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:333)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:31:54 	43 s	

14/14 (3 failed)
1722.0 MB 1392.7 MB
2
mapToPair at BwaInterpreter.java:152 +details

org.apache.spark.api.java.AbstractJavaRDDLike.mapToPair(JavaRDDLike.scala:47)
com.github.sparkbwa.BwaInterpreter.loadFastq(BwaInterpreter.java:152)
com.github.sparkbwa.BwaInterpreter.handlePairedReadsSorting(BwaInterpreter.java:238)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:333)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:525)

2017/08/02 10:31:54 	14 s	

14/14 (1 failed)
1722.0 MB 1350.3 MB

Failed task log

java.lang.ClassNotFoundException: com.github.sparkbwa.FASTQRecordGrouper
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:278)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1612)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:72)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:98)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:64)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

**The container and executor log**

17/08/02 10:32:37 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 75
17/08/02 10:32:37 INFO executor.Executor: Running task 0.2 in stage 4.0 (TID 75)
17/08/02 10:32:37 ERROR executor.Executor: Exception in task 0.2 in stage 4.0 (TID 75)
java.lang.ClassNotFoundException: com.github.sparkbwa.FASTQRecordGrouper
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:278)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1612)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:72)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:98)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:64)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
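A common cause of a `ClassNotFoundException` like the one above is that the application jar never reaches the executors, so shuffle tasks cannot deserialize classes such as `com.github.sparkbwa.FASTQRecordGrouper`. A minimal sketch of one possible fix (the jar path is an assumption; adjust to your build output):

```shell
# Sketch, not a verified fix: ship the application jar to the executors so
# shuffle tasks can deserialize com.github.sparkbwa.FASTQRecordGrouper.
# The jar path below is an assumption; adjust to your build output.
JAR=target/SparkBWA-0.2.jar
spark-submit \
  --class com.github.sparkbwa.SparkBWA \
  --master yarn \
  --jars "$JAR" \
  "$JAR" <your-args>
```

If `spark.executor.extraClassPath` in `spark-defaults.conf` points at a path that exists only on the driver machine, removing that entry and letting `--jars` distribute the jar avoids stale-path problems on worker nodes.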

Build fails due to certificate problems in Maven package

This is the error screen output:

[INFO] ------------------------------------------------------------------------
[INFO] Building SparkBWA 0.2
[INFO] ------------------------------------------------------------------------
Downloading: https://mvn.128.no/maven2/cz/adamh/utils/native-utils/1.0-SNAPSHOT/maven-metadata.xml
[WARNING] Could not transfer metadata cz.adamh.utils:native-utils:1.0-SNAPSHOT/maven-metadata.xml from/to 128 (https://mvn.128.no/maven2): hostname in certificate didn't match: <mvn.128.no> != <128.no> OR <128.no>
[WARNING] Failure to transfer cz.adamh.utils:native-utils:1.0-SNAPSHOT/maven-metadata.xml from https://mvn.128.no/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of 128 has elapsed or updates are forced. Original error: Could not transfer metadata cz.adamh.utils:native-utils:1.0-SNAPSHOT/maven-metadata.xml from/to 128 (https://mvn.128.no/maven2): hostname in certificate didn't match: <mvn.128.no> != <128.no> OR <128.no>
Downloading: https://mvn.128.no/maven2/cz/adamh/utils/native-utils/1.0-SNAPSHOT/native-utils-1.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.596s
[INFO] Finished at: Thu Jan 19 08:13:45 CET 2017
[INFO] Final Memory: 20M/161M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project SparkBWA: Could not resolve dependencies for project com.github.sparkbwa:SparkBWA:jar:0.2: Failed to collect dependencies for [org.apache.spark:spark-core_2.11:jar:2.0.1 (compile), cz.adamh.utils:native-utils:jar:1.0-SNAPSHOT (compile)]: Failed to read artifact descriptor for cz.adamh.utils:native-utils:jar:1.0-SNAPSHOT: Could not transfer artifact cz.adamh.utils:native-utils:pom:1.0-SNAPSHOT from/to 128 (https://mvn.128.no/maven2): hostname in certificate didn't match: <mvn.128.no> != <128.no> OR <128.no> -> [Help 1]
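One possible workaround, assuming the `cz.adamh.utils:native-utils` sources are still available (the GitHub location below is an assumption; verify it before relying on it), is to install the missing artifact into the local Maven repository so the unreachable `mvn.128.no` repository is never contacted:

```shell
# Workaround sketch: build the missing dependency locally so Maven resolves
# cz.adamh.utils:native-utils:1.0-SNAPSHOT from ~/.m2 instead of mvn.128.no.
git clone https://github.com/adamheinrich/native-utils.git   # assumed location
cd native-utils
mvn install
cd ../SparkBWA
mvn package
```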

Error while running master as local

When I run the command:
spark-submit --class com.github.sparkbwa.SparkBWA --master local
--driver-memory 1500m --executor-memory 1g --executor-cores 2 --verbose
--num-executors 1 /hdd/diksha/SparkBWA-0.2.jar -m -r -s --index /hdd/diksha/hg38 -n 1
/user/dreamlab/diksha_sample/1.fastq /user/dreamlab/diksha_sample/
I get the following error:
java.io.FileNotFoundException: File does not exist: /user/dreamlab/diksha_sample/1.fastq

The file was present at that path before running the above command. After the command runs, the file is deleted from the path and I get the error: 'file not found'.

How can I deal with this issue?
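Two things seem worth checking here. First, with `--master local` an unqualified path such as `/user/dreamlab/diksha_sample/1.fastq` is resolved against whatever default filesystem the Hadoop configuration defines, which may not be the filesystem that actually holds the file. Second, the output argument given is the directory that also contains the input file, so if the tool clears its output location the input would disappear, which would match the reported deletion. A hedged sketch (the namenode host/port and the output directory are placeholders):

```shell
# Sketch: use explicit filesystem URIs and a dedicated output directory
# so the input file cannot be clobbered. Host/port are placeholders.
spark-submit --class com.github.sparkbwa.SparkBWA --master local \
  /hdd/diksha/SparkBWA-0.2.jar -m -r -s --index /hdd/diksha/hg38 -n 1 \
  hdfs://namenode:8020/user/dreamlab/diksha_sample/1.fastq \
  hdfs://namenode:8020/user/dreamlab/diksha_sample/output
```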

Problem running SparkBWA

I'm a beginner learning big data tools in genomics and I don't know what the problem is.
Here is a copy of the command and the error:

root@soma-HP-Pavilion-TS-15-Notebook-PC:~/SparkBWA# $SPARK_HOME/bin/spark-submit --class SparkBWA --master yarn-client --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives build/bwa.zip --verbose --num-executors 32 build/SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589
Using properties file: /usr/local/spark/conf/spark-defaults.conf
Adding default property: spark.executor.extraClassPath=/home/soma/SparkBWA/build/SparkBWA.jar
Adding default property: spark.driver.extraClassPath=/home/soma/SparkBWA/build/SparkBWA.jar
Adding default property: spark.yarn.executor.memoryOverhead=8704
Adding default property: spark.executor.extraJavaOptions=-Djava.library.path=/home/soma/SparkBWA/build/bwa.zip
Parsed arguments:
master yarn-client
deployMode null
executorMemory 1500m
executorCores 1
totalExecutorCores null
propertiesFile /usr/local/spark/conf/spark-defaults.conf
driverMemory 1500m
driverCores null
driverExtraClassPath /home/soma/SparkBWA/build/SparkBWA.jar
driverExtraLibraryPath null
driverExtraJavaOptions null
supervise false
queue null
numExecutors 32
files null
pyFiles null
archives file:/home/soma/SparkBWA/build/bwa.zip
mainClass SparkBWA
primaryResource file:/home/soma/SparkBWA/build/SparkBWA.jar
name SparkBWA
childArgs [-algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589]
jars null
packages null
packagesExclusions null
repositories null
verbose true

Spark properties used, including those specified through
--conf and those from the properties file /usr/local/spark/conf/spark-defaults.conf:
spark.driver.memory -> 1500m
spark.executor.extraJavaOptions -> -Djava.library.path=/home/soma/SparkBWA/build/bwa.zip
spark.yarn.executor.memoryOverhead -> 8704
spark.executor.extraClassPath -> /home/soma/SparkBWA/build/SparkBWA.jar
spark.driver.extraClassPath -> /home/soma/SparkBWA/build/SparkBWA.jar

Main class:
SparkBWA
Arguments:
-algorithm
mem
-reads
paired
-index
/Data/HumanBase/hg38
-partitions
32
ERR000589_1.filt.fastq
ERR000589_2.filt.fastq
Output_ERR000589
System properties:
spark.driver.memory -> 1500m
spark.executor.memory -> 1500m
spark.executor.instances -> 32
SPARK_SUBMIT -> true
spark.executor.extraJavaOptions -> -Djava.library.path=/home/soma/SparkBWA/build/bwa.zip
spark.app.name -> SparkBWA
spark.yarn.executor.memoryOverhead -> 8704
spark.jars -> file:/home/soma/SparkBWA/build/SparkBWA.jar
spark.submit.deployMode -> client
spark.yarn.dist.archives -> file:/home/soma/SparkBWA/build/bwa.zip
spark.executor.extraClassPath -> /home/soma/SparkBWA/build/SparkBWA.jar
spark.master -> yarn-client
spark.executor.cores -> 1
spark.driver.extraClassPath -> /home/soma/SparkBWA/build/SparkBWA.jar
Classpath elements:
file:/home/soma/SparkBWA/build/SparkBWA.jar

16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: -algorithm
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: mem
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: -reads
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: paired
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: -index
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: /Data/HumanBase/hg38
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: -partitions
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: 32
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: ERR000589_1.filt.fastq
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: ERR000589_2.filt.fastq
16/06/29 18:35:17 INFO BwaOptions: JMAbuin:: Received argument: Output_ERR000589
16/06/29 18:35:17 INFO spark.SparkContext: Running Spark version 1.6.1
16/06/29 18:35:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/29 18:35:18 WARN util.Utils: Your hostname, soma-HP-Pavilion-TS-15-Notebook-PC resolves to a loopback address: 127.0.1.1; using 192.168.1.2 instead (on interface wlan0)
16/06/29 18:35:18 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/06/29 18:35:18 INFO spark.SecurityManager: Changing view acls to: root
16/06/29 18:35:18 INFO spark.SecurityManager: Changing modify acls to: root
16/06/29 18:35:18 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/06/29 18:35:19 INFO util.Utils: Successfully started service 'sparkDriver' on port 56558.
16/06/29 18:35:19 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/06/29 18:35:20 INFO Remoting: Starting remoting
16/06/29 18:35:20 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.1.2:55461]
16/06/29 18:35:20 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 55461.
16/06/29 18:35:20 INFO spark.SparkEnv: Registering MapOutputTracker
16/06/29 18:35:20 INFO spark.SparkEnv: Registering BlockManagerMaster
16/06/29 18:35:20 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-13c7dff7-07a8-46ec-9185-022807b52f54
16/06/29 18:35:20 INFO storage.MemoryStore: MemoryStore started with capacity 853.1 MB
16/06/29 18:35:20 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/06/29 18:35:20 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/06/29 18:35:21 WARN component.AbstractLifeCycle: FAILED SelectChannelConnector@0.0.0.0:4040: java.net.BindException: Address already in use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.spark-project.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at org.spark-project.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at org.spark-project.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.spark-project.jetty.server.Server.doStart(Server.java:293)
at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:252)
at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)
at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979)
at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:262)
at org.apache.spark.ui.WebUI.bind(WebUI.scala:136)
at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)
at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:481)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
at BwaInterpreter.initInterpreter(BwaInterpreter.java:123)
at BwaInterpreter.<init>(BwaInterpreter.java:94)
at SparkBWA.main(SparkBWA.java:25)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/06/29 18:35:21 WARN component.AbstractLifeCycle: FAILED org.spark-project.jetty.server.Server@23aae55: java.net.BindException: Address already in use
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.spark-project.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at org.spark-project.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at org.spark-project.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.spark-project.jetty.server.Server.doStart(Server.java:293)
at org.spark-project.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.apache.spark.ui.JettyUtils$.org$apache$spark$ui$JettyUtils$$connect$1(JettyUtils.scala:252)
at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)
at org.apache.spark.ui.JettyUtils$$anonfun$5.apply(JettyUtils.scala:262)
at org.apache.spark.util.Utils$$anonfun$startServiceOnPort$1.apply$mcVI$sp(Utils.scala:1988)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at org.apache.spark.util.Utils$.startServiceOnPort(Utils.scala:1979)
at org.apache.spark.ui.JettyUtils$.startJettyServer(JettyUtils.scala:262)
at org.apache.spark.ui.WebUI.bind(WebUI.scala:136)
at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)
at org.apache.spark.SparkContext$$anonfun$13.apply(SparkContext.scala:481)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.SparkContext.(SparkContext.scala:481)
at org.apache.spark.api.java.JavaSparkContext.(JavaSparkContext.scala:59)
at BwaInterpreter.initInterpreter(BwaInterpreter.java:123)
at BwaInterpreter.(BwaInterpreter.java:94)
at SparkBWA.main(SparkBWA.java:25)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/06/29 18:35:21 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/06/29 18:35:21 WARN util.Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/06/29 18:35:21 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/06/29 18:35:21 INFO server.AbstractConnector: Started [email protected]:4041
16/06/29 18:35:21 INFO util.Utils: Successfully started service 'SparkUI' on port 4041.
16/06/29 18:35:21 INFO ui.SparkUI: Started SparkUI at http://192.168.1.2:4041
16/06/29 18:35:21 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-1b8bda46-9801-4a09-9803-639fc43d964d/httpd-28fd6e5c-d9eb-4f67-80db-0b8ef0b9fd8d
16/06/29 18:35:21 INFO spark.HttpServer: Starting HTTP Server
16/06/29 18:35:21 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/06/29 18:35:21 INFO server.AbstractConnector: Started [email protected]:50017
16/06/29 18:35:21 INFO util.Utils: Successfully started service 'HTTP file server' on port 50017.
16/06/29 18:35:21 INFO spark.SparkContext: Added JAR file:/home/soma/SparkBWA/build/SparkBWA.jar at http://192.168.1.2:50017/jars/SparkBWA.jar with timestamp 1467218121375
16/06/29 18:35:21 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/06/29 18:35:22 INFO yarn.Client: Requesting a new application from cluster with 1 NodeManagers
16/06/29 18:35:22 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
16/06/29 18:35:23 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1500+8704 MB) is above the max threshold (8192 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:283)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:139)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.(SparkContext.scala:530)
at org.apache.spark.api.java.JavaSparkContext.(JavaSparkContext.scala:59)
at BwaInterpreter.initInterpreter(BwaInterpreter.java:123)
at BwaInterpreter.(BwaInterpreter.java:94)
at SparkBWA.main(SparkBWA.java:25)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/06/29 18:35:23 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/06/29 18:35:23 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.1.2:4041
16/06/29 18:35:23 INFO cluster.YarnClientSchedulerBackend: Stopped
16/06/29 18:35:23 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/06/29 18:35:23 INFO storage.MemoryStore: MemoryStore cleared
16/06/29 18:35:23 INFO storage.BlockManager: BlockManager stopped
16/06/29 18:35:23 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/06/29 18:35:23 WARN metrics.MetricsSystem: Stopping a MetricsSystem that is not running
16/06/29 18:35:23 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/06/29 18:35:23 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/06/29 18:35:23 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/06/29 18:35:23 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.IllegalArgumentException: Required executor memory (1500+8704 MB) is above the max threshold (8192 MB) of this cluster! Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or 'yarn.nodemanager.resource.memory-mb'.
at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:283)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:139)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.(SparkContext.scala:530)
at org.apache.spark.api.java.JavaSparkContext.(JavaSparkContext.scala:59)
at BwaInterpreter.initInterpreter(BwaInterpreter.java:123)
at BwaInterpreter.(BwaInterpreter.java:94)
at SparkBWA.main(SparkBWA.java:25)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/06/29 18:35:23 INFO util.ShutdownHookManager: Shutdown hook called
16/06/29 18:35:23 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-1b8bda46-9801-4a09-9803-639fc43d964d/httpd-28fd6e5c-d9eb-4f67-80db-0b8ef0b9fd8d
16/06/29 18:35:23 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-1b8bda46-9801-4a09-9803-639fc43d964d
16/06/29 18:35:23 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
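The "Required executor memory (1500+8704 MB) is above the max threshold (8192 MB)" failure above points at the YARN container limits. A hedged sketch of the two yarn-site.xml properties the error message names (the 10240 value is purely illustrative and must fit within the node's physical memory; alternatively, lower --executor-memory or the executor memory overhead so the sum stays under the existing limit):

```xml
<!-- yarn-site.xml on the NodeManager hosts; values are illustrative only -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>10240</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>10240</value>
</property>
```

YARN must be restarted for these to take effect.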

Problem running MEM with SparkBWA

This project is amazing; I think it will help a lot of people.
But I got stuck when trying to run the program.
I need to use MEM in my work, so I tried the mem algorithm. Everything went well until I got the following message:
16/06/16 17:27:13 ERROR BwaInterpreter: java.io.FileNotFoundException: File file:/usr/local/hadoop/tmp/SparkBWA_ERR000589_1.filt.fastq-1-NoSort-local-1466068623812-0.sam does not exist
Before that, I think the program had already successfully finished all the BWA computation, but it seems it did not generate any output.
I know that with MEM I should use an output redirection to get the result, which is different from ALN. Could something be wrong with that? And is there anything I can do to fix it?

-bwaArgs option does not seem to work

Hi

I have successfully run SparkBWA using the command below.

spark-submit --class SparkBWA --master yarn --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives bwa.zip --verbose --num-executors 6 SparkBWA.jar -algorithm mem -reads paired -index ~/nfs_server/halvade_refs/ucsc.hg19.fasta -partitions 32 spark/NA12750/ERR000589_1.fastq spark/NA12750/ERR000589_2.fastq spark/NA12750_adam/output

But I wanted to add a read group, so I tried passing -bwaArgs "-R "@rg\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589""

However, the output didn't contain the read groups.

I've tried with '-R "@rg\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589"'
"-R '@rg\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589'"
"-R @rg\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589"
but none of them worked.

Does sparkBWA support all the arguments in bwa?

Thanks!
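Whether or not it is the root cause here, a nested double-quoted argument like the one above never reaches the program intact: the shell performs quote removal before spark-submit sees it, and the unquoted backslashes lose their escapes. A minimal sketch (not SparkBWA-specific) of what the shell actually produces:

```shell
# The nested inner double quotes terminate the string early, so the middle
# part is unquoted and each backslash is consumed (\t becomes a plain t):
mangled="-R "@RG\tID:foo""
printf '%s\n' "$mangled"   # prints: -R @RGtID:foo

# Single quotes keep every character, including the \t escapes, literal:
intact='-R @RG\tID:foo'
printf '%s\n' "$intact"    # prints: -R @RG\tID:foo
```

So of the variants tried above, only forms that keep the read-group string inside a single level of quoting can deliver the literal \t sequences to the program.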

Missing an output location for shuffle 0

Hi all.
I'm still struggling with SparkBWA.
For some background, I was trying to run SparkBWA on my standalone Spark cluster (1 master + 1 worker on a single node, with a flavor of ram200.disk10.eph1500.core40). However, it is now stuck at one stage:

collect at BwaInterpreter.java:305
org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
com.github.sparkbwa.BwaInterpreter.MapPairedBwa(BwaInterpreter.java:305)
com.github.sparkbwa.BwaInterpreter.runBwa(BwaInterpreter.java:334)
com.github.sparkbwa.SparkBWA.main(SparkBWA.java:37)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

At this stage it takes all the memory in the node, no matter how much memory I pre-set in the command or in spark-env.sh, and errors out with "Missing an output location for shuffle 0" after ~10-20 minutes. The command I'm using is:

spark-submit --deploy-mode client --driver-memory 50g --master spark://bioinfo-unicorns-slurm-claire-worker-40:7077 --total-executor-cores 40 --executor-memory 10g --class com.github.sparkbwa.SparkBWA --verbose SparkBWA-0.2.jar -n 40 -m -r -p --index /mnt/reference/GRCh38.d1.vd1.fa -w "-R @RG\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589" /user/ERR000589_1.filt.fastq /user/ERR000589_2.filt.fastq /user/Output_ERR000589

For driver-memory, I have to set it over 2 GB; otherwise it fails with a Java out-of-heap error, because I'm running in standalone mode, where the Spark driver runs in the Spark client. (I'm not very confident about my understanding, since I'm new to Spark.) The reason I increased it to 50 GB is that the application fails at the .collect stage, and from a Google search I found this:

Operations like .collect, .take and takeSample deliver data to the driver and hence, the driver needs enough memory to allocate such data.

https://stackoverflow.com/questions/27181737/how-to-deal-with-executor-memory-and-driver-memory-in-spark

For executor-memory, no matter how much I set, it always takes all the memory in the node at the .collect stage, as I mentioned.

I have tried different memory settings for both driver and executor, but no combination has worked so far (driver memory small and executor big, reversed, both small, both big; nothing works).

I also tried adding spark.shuffle.memoryFraction 0 in spark-defaults.conf, but that did not help either.

Please find attached a screenshot from the Spark UI:
[screenshot: Spark UI, 2017-11-30 1:48:32 PM]

I also attached a screenshot taken before .collect failed, in which you can see it used all the memory:
[screenshot: Spark UI, 2017-11-30 1:56:00 PM]

I know this is really a Spark question, and I'm sorry I haven't been able to figure it out myself. However, I'm wondering whether you have experience with or any insight into my situation. Thank you so much in advance.

Edit:

I also attached the log files from the Spark master and worker. All the screenshots and logs are from the same single run, with the command:

/usr/bin/time -v spark-submit \
--deploy-mode client --driver-memory 2g --master spark://bioinfo-unicorns-slurm-claire-worker-40:7077 \
--total-executor-cores 32 --executor-memory 10g \
--class com.github.sparkbwa.SparkBWA \
--verbose SparkBWA-0.2.jar -n 32 -m -r -p --index /mnt/reference/GRCh38.d1.vd1.fa -w "-R @RG\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589" /user/ERR000589_1.filt.fastq /user/ERR000589_2.filt.fastq /user/Output_ERR000589

master_log.txt
worker_log.txt
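Since the failure happens at a .collect that pulls the aligned data back to the driver, a hedged configuration sketch of the two driver-side limits that govern collected results (spark-defaults.conf; the values are purely illustrative and must fit the driver host's RAM):

```
# spark-defaults.conf (illustrative values)
spark.driver.memory          50g
spark.driver.maxResultSize   40g
```

spark.driver.maxResultSize caps the total serialized size of results a .collect may return; if the collected SAM records exceed it, the job aborts regardless of executor memory.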

jni_md.h file is not found

Hi,

Thanks for your work, I found it very useful 👌🏼

I'm a Mac user. When I ran:

mvn package

I got the following errors:

In file included from ./com_github_sparkbwa_BwaJni.h:2:
/Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home/include/jni.h:45:10: fatal error: 'jni_md.h' file not found
#include "jni_md.h"
         ^~~~~~~~~~
1 error generated.
make: *** [sparkbwa] Error 1
[ERROR] Command execution failed.
org.apache.commons.exec.ExecuteException: Process exited with an error: 2 (Exit value: 2)
	at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:404)
	at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:166)
	at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:764)
	at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:711)
	at org.codehaus.mojo.exec.ExecMojo.execute(ExecMojo.java:289)
	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
	at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
	at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
	at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993)
	at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345)
	at org.apache.maven.cli.MavenCli.main(MavenCli.java:191)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
	at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
	at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.649 s
[INFO] Finished at: 2018-01-10T21:49:46+03:00
[INFO] Final Memory: 15M/159M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.5.0:exec (makeBwa) on project SparkBWA: Command execution failed.: Process exited with an error: 2 (Exit value: 2) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

Could you please tell me why the 'jni_md.h' file is not found, and how I can solve this issue? I have tried a lot of things.

Thanks.
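For reference, on macOS jni_md.h lives under $JAVA_HOME/include/darwin rather than include/linux, so the JNI include path has to name that subdirectory. A hedged sketch of the Makefile.common change (the JDK path is illustrative; use your own installation):

```makefile
# Makefile.common -- macOS: add the darwin subdirectory where jni_md.h lives
JAVA_HOME = /Library/Java/JavaVirtualMachines/jdk1.8.0_144.jdk/Contents/Home
JAVA_HOME_INCLUDES = -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/darwin
```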

Problem when using SparkBWA with the reducer: output ends up with a bad header

Hi,
When using the reducer feature, we end up with a bad header:

@SQ	SN:chr1	LN:249250621
@SQ	SN:chr2	LN:243199373
@SQ	SN:chr3	LN:198022430
@SQ	SN:chr4	LN:191154276
@SQ	SN:chr5	LN:180915260
@SQ	SN:chr6	LN:171115067
@SQ	SN:chr7	LN:159138663
@SQ	SN:chrX	LN:155270560
@SQ	SN:chr8	LN:146364022@SQ	SN:chr1	LN:249250621
@SQ	SN:chr2	LN:243199373
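The header sample above shows a second @SQ block being concatenated directly onto the end of the first, which suggests the reducer is appending the per-partition SAM files with their headers intact. A hedged workaround sketch (file names are hypothetical): keep the header only from the first part and strip '@'-prefixed header lines from the remaining parts before concatenating.

```shell
# merge_sam OUT FIRST [MORE...]: concatenate SAM parts, keeping only the
# first part's @HD/@SQ/... header block (assumes plain, uncompressed SAM).
merge_sam() {
  out=$1; shift
  first=$1; shift
  cat "$first" > "$out"
  for part in "$@"; do
    grep -v '^@' "$part" >> "$out"   # drop duplicate header lines
  done
}
```

Usage would look like `merge_sam merged.sam part-0.sam part-1.sam part-2.sam` on the intermediate files SparkBWA leaves in the output directory.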

Collective sam file not present in the output directory.

SparkBWA runs until the last step, but fails when running the -r option, meaning it does not collect all the intermediate files and join them into one SAM file.
Here are the logs from the end of the last run, after which no SAM file is created. Is there some change because of merge pull request #37 from xubo26/master?

17/02/21 16:14:48 INFO BwaInterpreter: JMAbuin:: SparkBWA :: Returned file ::/ testcases/outputsam//SparkBWA_LP2000265-DNA_A01_1.fastq-32-SortSpark-app-20170221155756-0000-29.sam
17/02/21 16:14:48 INFO BwaInterpreter: JMAbuin:: SparkBWA :: Returned file /testcases/outputsam//SparkBWA_LP2000265-DNA_A01_1.fastq-32-SortSpark-app-20170221155756-0000-30.sam
17/02/21 16:14:48 INFO BwaInterpreter: JMAbuin:: SparkBWA :: Returned file Sidra_lookup_wrappers/testcases/outputsam//SparkBWA_LP2000265-DNA_A01_1.fastq-32-SortSpark-app-20170221155756-0000-31.sam

Error during execution

Hi,
I am using the command line below to run this application:
$SPARK_HOME/bin/spark-submit --class SparkBWA --master yarn-client --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives bwa.zip --verbose --num-executors 32 SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/ECOLI -partitions 32 ecoli_1.fq ecoli_2.fq Output_ECOLI

We have a 7-node cluster with 6 cores and 16 GB of RAM per node. The input file size is just 5 MB.

Exception stack trace:
16/06/16 17:18:06 ERROR cluster.YarnScheduler: Lost executor 2 on client18.example.com: remote Akka client disassociated
16/06/16 17:18:06 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://[email protected]:58541] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
.
.
.
.
org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 0

--master local[n] results in an error

If I specify --master local then things work as expected.

If, however, I specify --master local[*] or --master local[n], where n represents the number of cores to use, then I get a failure.

Here's the command line:

spark-submit --class com.github.sparkbwa.SparkBWA --master local[32] ~/SparkBWA/target/SparkBWA-0.2.jar -n 32 -r -t sparkBwaTemp -i ~/gs/db/CriGri_1.0_gbff.fasta GStest_R1.fq GStest_R2.fq OutputSparkBWA.sam

I'm not sure exactly what goes wrong, but I start seeing messages in the log such as "Error saving stdout".

[Java_com_github_sparkbwa_BwaJni_bwa_1jni] option[0]: bwa.
[Java_com_github_sparkbwa_BwaJni_bwa_1jni] option[1]: mem.
[Java_com_github_sparkbwa_BwaJni_bwa_1jni] option[2]: /home/my_user_id/gs/db/CriGri_1.0_gbff.fasta.
[Java_com_github_sparkbwa_BwaJni_bwa_1jni] option[3]: sparkBwaTemp/local-1518020933380-RDD7_1.
[Java_com_github_sparkbwa_BwaJni_bwa_1jni] option[4]: sparkBwaTemp/local-1518020933380-RDD7_2.
[Java_com_github_sparkbwa_BwaJni_bwa_1jni] option[2]: /home/my_user_id/gs/db/CriGri_1.0_gbff.fasta.
[Java_com_github_sparkbwa_BwaJni_bwa_1jni] option[3]: sparkBwaTemp/local-1518020933380-RDD23_1.
[Java_com_github_sparkbwa_BwaJni_bwa_1jni] option[4]: sparkBwaTemp/local-1518020933380-RDD23_2.
[Java_com_github_sparkbwa_BwaJni_bwa_1jni] Error saving stdout.

There are other error messages at the end of the run:

[fclose] No such file or directory
[fclose] No such file or directory
======= Backtrace: =========
[fclose] No such file or directory
/lib64/libc.so.6(+0x7c619)[0x7f1f18783619]
/lib64/libc.so.6(fclose+0x155)[0x7f1f18771c15]
/tmp/libbwa2102141332556821063.so(err_fclose+0x9)[0x7f1cee9dc2b9]
/tmp/libbwa2102141332556821063.so(main+0x1a4)[0x7f1cee9953e4]
/tmp/libbwa2102141332556821063.so(Java_com_github_sparkbwa_BwaJni_bwa_1jni+0x2cc)[0x7f1cee99845c]
[0x7f1f01017774]
======= Memory map: ========
(nothing after this)

Has anyone else seen this?
Does anyone have an explanation?

Index file cannot be located

Hi,
Except for one small test case, all other attempts to run SparkBWA on a human genome fail with the error:

[E::bwa_idx_load_from_disk] fail to locate the index files

I looked it up, and it seems that in some cases BWA indeed does not accept the index files. These are the files I have (the location is visible to Spark, etc.):

human37.dict
human37.fasta
human37.fasta.amb
human37.fasta.ann
human37.fasta.bwt
human37.fasta.fai
human37.fasta.pac
human37.fasta.sa

To create them I used the standard tools: link

I have tried all possible combinations of locations and names, running from the data location, etc.

Any ideas?

Many thanks.
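Given the files listed above, bwa expects the index *prefix*, so the path passed to -index would be the .fasta path itself (e.g. .../human37.fasta), with the .amb/.ann/.bwt/.pac/.sa files sitting next to it on every worker node. A hedged sanity-check sketch (the prefix path is hypothetical):

```shell
# check_index PREFIX: verify all five BWA index files exist for the prefix
# that would be passed to SparkBWA's -index flag.
check_index() {
  prefix=$1
  for ext in amb ann bwt pac sa; do
    [ -f "$prefix.$ext" ] || { echo "missing $prefix.$ext"; return 1; }
  done
  echo "index complete for $prefix"
}
```

Running `check_index /path/to/human37.fasta` on each node would confirm whether [E::bwa_idx_load_from_disk] is a path problem rather than an index-format problem.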

Build fails on linking libbwa (Mac OS X)

Hi,

I tried to build the project on OS X El Capitan, version 10.11.6

Do you have any suggestions what the cause may be or how I can find/debug the issue ?

In the console:

gcc -shared -o build/libbwa.so build/*.o -lrt
ld: library not found for -lrt
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [libbwa.so] Error 1

The build fails at this step in the Makefile:

libbwa.so: sparkbwa bwa
$(CC) $(LIBBWA_FLAGS) $(BUILD_DIR)/libbwa.so $(BUILD_DIR)/*.o $(LIBBWA_LIBS)

In Makefile.common I have changed JAVA_HOME:
JAVA_HOME = /Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home
JAVA_HOME_INCLUDES = -I$(JAVA_HOME)/include -I$(JAVA_HOME)/include/darwin

but the bwa settings are not changed:

BWA_DIR = ./bwa
BWA = bwa-0.7.15
SPARKBWA_FLAGS = -c -g -Wall -Wno-unused-function -O2 -fPIC -DHAVE_PTHREAD -DUSE_MALLOC_WRAPPERS $(JAVA_HOME_INCLUDES)
LIBBWA_FLAGS = -shared -o
LIBBWA_LIBS = -lrt

Thanks,
Martijn
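The linker error above is consistent with macOS having no separate librt (the clock_* and timer symbols live in libSystem), so the -lrt flag itself is the likely culprit. A hedged Makefile sketch that makes the flag conditional on the platform (variable names taken from the Makefile.common excerpt above):

```makefile
# librt does not exist on macOS; only link it on other platforms
ifeq ($(shell uname -s),Darwin)
    LIBBWA_LIBS =
else
    LIBBWA_LIBS = -lrt
endif
```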

SparkBWA generates empty sam files

root@cancerdetector-m:/home/foehu_dna/SparkBWA/build# spark-submit --class SparkBWA --master yarn-client --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives bwa.zip --verbose SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589
Using properties file: /usr/lib/spark/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Djava.library.path=./bwa.zip
Adding default property: spark.history.fs.logDirectory=hdfs://cancerdetector-m/user/spark/eventlog
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.driver.maxResultSize=1920m
Adding default property: spark.shuffle.service.enabled=true
Adding default property: spark.yarn.historyServer.address=cancerdetector-m:18080
Adding default property: spark.sql.parquet.cacheMetadata=false
Adding default property: spark.driver.memory=3840m
Adding default property: spark.dynamicAllocation.maxExecutors=10000
Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
Adding default property: spark.yarn.am.memoryOverhead=384
Adding default property: spark.yarn.am.memory=2688m
Adding default property: spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.master=yarn-client
Adding default property: spark.executor.memory=2688m
Adding default property: spark.eventLog.dir=hdfs://cancerdetector-m/user/spark/eventlog
Adding default property: spark.dynamicAllocation.enabled=true
Adding default property: spark.executor.cores=1
Adding default property: spark.yarn.executor.memoryOverhead=384
Adding default property: spark.dynamicAllocation.minExecutors=1
Adding default property: spark.dynamicAllocation.initialExecutors=10000
Adding default property: spark.akka.frameSize=512
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          1500m
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /usr/lib/spark/conf/spark-defaults.conf
  driverMemory            1500m
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  supervise               false
  queue                   null
  numExecutors            null
  files                   null
  pyFiles                 null
  archives                file:/home/foehu_dna/SparkBWA/build/bwa.zip
  mainClass               SparkBWA
  primaryResource         file:/home/foehu_dna/SparkBWA/build/SparkBWA.jar
  name                    SparkBWA
  childArgs               [-algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
 --conf and those from the properties file /usr/lib/spark/conf/spark-defaults.conf:
  spark.yarn.am.memoryOverhead -> 384
  spark.driver.memory -> 1500m
  spark.executor.memory -> 2688m
  spark.yarn.historyServer.address -> cancerdetector-m:18080
  spark.eventLog.enabled -> true
  spark.scheduler.minRegisteredResourcesRatio -> 0.0
  spark.dynamicAllocation.maxExecutors -> 10000
  spark.akka.frameSize -> 512
  spark.executor.extraJavaOptions -> -Djava.library.path=./bwa.zip
  spark.sql.parquet.cacheMetadata -> false
  spark.shuffle.service.enabled -> true
  spark.dynamicAllocation.initialExecutors -> 10000
  spark.dynamicAllocation.minExecutors -> 1
  spark.history.fs.logDirectory -> hdfs://cancerdetector-m/user/spark/eventlog
  spark.yarn.executor.memoryOverhead -> 384
  spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.eventLog.dir -> hdfs://cancerdetector-m/user/spark/eventlog
  spark.yarn.am.memory -> 2688m
  spark.driver.maxResultSize -> 1920m
  spark.master -> yarn-client
  spark.dynamicAllocation.enabled -> true
  spark.executor.cores -> 1


Main class:
SparkBWA
Arguments:
-algorithm
mem
-reads
paired
-index
/Data/HumanBase/hg38
-partitions
32
ERR000589_1.filt.fastq
ERR000589_2.filt.fastq
Output_ERR000589
System properties:
spark.yarn.am.memoryOverhead -> 384
spark.driver.memory -> 1500m
spark.executor.memory -> 1500m
spark.yarn.historyServer.address -> cancerdetector-m:18080
spark.eventLog.enabled -> true
SPARK_SUBMIT -> true
spark.scheduler.minRegisteredResourcesRatio -> 0.0
spark.dynamicAllocation.maxExecutors -> 10000
spark.akka.frameSize -> 512
spark.sql.parquet.cacheMetadata -> false
spark.executor.extraJavaOptions -> -Djava.library.path=./bwa.zip
spark.app.name -> SparkBWA
spark.shuffle.service.enabled -> true
spark.dynamicAllocation.initialExecutors -> 10000
spark.dynamicAllocation.minExecutors -> 1
spark.history.fs.logDirectory -> hdfs://cancerdetector-m/user/spark/eventlog
spark.yarn.executor.memoryOverhead -> 384
spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.jars -> file:/home/foehu_dna/SparkBWA/build/SparkBWA.jar
spark.yarn.dist.archives -> file:/home/foehu_dna/SparkBWA/build/bwa.zip
spark.submit.deployMode -> client
spark.eventLog.dir -> hdfs://cancerdetector-m/user/spark/eventlog
spark.driver.maxResultSize -> 1920m
spark.yarn.am.memory -> 2688m
spark.master -> yarn-client
spark.dynamicAllocation.enabled -> true
spark.executor.cores -> 1
Classpath elements:
file:/home/foehu_dna/SparkBWA/build/SparkBWA.jar


16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: -algorithm
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: mem
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: -reads
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: paired
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: -index
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: /Data/HumanBase/hg38
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: -partitions
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: 32
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: ERR000589_1.filt.fastq
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: ERR000589_2.filt.fastq
16/08/05 00:58:41 INFO BwaOptions: JMAbuin:: Received argument: Output_ERR000589
16/08/05 00:58:42 INFO akka.event.slf4j.Slf4jLogger: Slf4jLogger started
16/08/05 00:58:42 INFO Remoting: Starting remoting
16/08/05 00:58:42 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:54448]
16/08/05 00:58:42 INFO org.spark-project.jetty.server.Server: jetty-8.y.z-SNAPSHOT
16/08/05 00:58:42 INFO org.spark-project.jetty.server.AbstractConnector: Started [email protected]:4040
16/08/05 00:58:42 INFO org.spark-project.jetty.server.Server: jetty-8.y.z-SNAPSHOT
16/08/05 00:58:42 INFO org.spark-project.jetty.server.AbstractConnector: Started [email protected]:32772
16/08/05 00:58:43 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cancerdetector-m/10.132.0.4:8032
16/08/05 00:58:44 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1470330263545_0007
16/08/05 00:58:48 INFO BwaInterpreter: JMAbuin:: Starting BWA
16/08/05 00:58:48 INFO BwaInterpreter: JMAbuin::Not sorting in HDFS. Timing: 28482317228669
16/08/05 00:58:48 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat: Total input paths to process : 1
16/08/05 00:58:48 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat: Total input paths to process : 1
16/08/05 00:58:48 INFO BwaInterpreter: JMAbuin:: No sort with partitioning
16/08/05 00:58:48 INFO BwaInterpreter: JMAbuin:: Repartition with no sort
16/08/05 00:58:48 INFO BwaInterpreter: JMAbuin:: End of sorting. Timing: 28482908686155
16/08/05 00:58:48 INFO BwaInterpreter: JMAbuin:: Total time: 0.009857624766666667 minutes
16/08/05 00:58:48 INFO BwaAlignmentBase: JMAbuin:: application_1470330263545_0007 - SparkBWA_ERR000589_1.filt.fastq-32-NoSort
16/08/05 01:01:31 INFO BwaInterpreter: BwaRDD :: Total of returned lines from RDDs :: 32
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-0.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-1.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-2.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-3.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-4.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-5.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-6.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-7.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-8.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-9.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-10.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-11.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-12.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-13.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-14.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-15.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-16.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-17.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-18.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-19.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-20.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-21.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-22.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-23.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-24.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-25.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-26.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-27.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-28.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-29.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-30.sam
16/08/05 01:01:31 INFO BwaInterpreter: JMAbuin:: SparkBWA:: Returned file ::Output_ERR000589/SparkBWA_ERR000589_1.filt.fastq-32-NoSort-application_1470330263545_0007-31.sam

ERROR scheduler.LiveListenerBus: Listener EventLoggingListener threw an exception java.lang.reflect.InvocationTargetException

16/07/25 15:45:17 INFO yarn.Client: Application report for application_1469432674658_0001 (state: ACCEPTED)
16/07/25 15:45:18 INFO yarn.Client: Application report for application_1469432674658_0001 (state: ACCEPTED)
16/07/25 15:45:19 INFO yarn.Client: Application report for application_1469432674658_0001 (state: ACCEPTED)
16/07/25 15:45:20 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
16/07/25 15:45:20 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master, PROXY_URI_BASES -> http://master:8088/proxy/application_1469432674658_0001), /proxy/application_1469432674658_0001
16/07/25 15:45:20 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/07/25 15:45:20 INFO yarn.Client: Application report for application_1469432674658_0001 (state: RUNNING)
16/07/25 15:45:20 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.100.3
ApplicationMaster RPC port: 0
queue: default
start time: 1469432715822
final status: UNDEFINED
tracking URL: http://master:8088/proxy/application_1469432674658_0001/
user: root
16/07/25 15:45:20 INFO cluster.YarnClientSchedulerBackend: Application application_1469432674658_0001 has started running.
16/07/25 15:45:20 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52523.
16/07/25 15:45:20 INFO netty.NettyBlockTransferService: Server created on 52523
16/07/25 15:45:20 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/07/25 15:45:20 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.100.6:52523 with 853.1 MB RAM, BlockManagerId(driver, 192.168.100.6, 52523)
16/07/25 15:45:20 INFO storage.BlockManagerMaster: Registered BlockManager
16/07/25 15:45:21 INFO scheduler.EventLoggingListener: Logging events to hdfs://master:9000/sparklog/application_1469432674658_0001
16/07/25 15:45:25 INFO cluster.YarnClientSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (slave3:53638) with ID 1
16/07/25 15:45:25 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
16/07/25 15:45:25 INFO storage.BlockManagerMasterEndpoint: Registering block manager slave3:34301 with 853.1 MB RAM, BlockManagerId(1, slave3, 34301)
16/07/25 15:45:25 INFO BwaInterpreter: JMAbuin:: Starting BWA
16/07/25 15:45:25 INFO BwaInterpreter: JMAbuin::Not sorting in HDFS. Timing: 142090691324
16/07/25 15:45:25 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 213.2 KB, free 213.2 KB)
16/07/25 15:45:25 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 19.8 KB, free 233.0 KB)
16/07/25 15:45:25 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.100.6:52523 (size: 19.8 KB, free: 853.1 MB)
16/07/25 15:45:25 INFO spark.SparkContext: Created broadcast 0 from newAPIHadoopFile at BwaInterpreter.java:246
16/07/25 15:45:25 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 213.2 KB, free 446.2 KB)
16/07/25 15:45:25 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 19.8 KB, free 466.1 KB)
16/07/25 15:45:25 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.100.6:52523 (size: 19.8 KB, free: 853.1 MB)
16/07/25 15:45:25 INFO spark.SparkContext: Created broadcast 1 from newAPIHadoopFile at BwaInterpreter.java:247
16/07/25 15:45:25 INFO input.FileInputFormat: Total input paths to process : 1
16/07/25 15:45:25 INFO input.FileInputFormat: Total input paths to process : 1
16/07/25 15:45:25 INFO rdd.NewHadoopRDD: Removing RDD 0 from persistence list
16/07/25 15:45:25 INFO storage.BlockManager: Removing RDD 0
16/07/25 15:45:25 INFO rdd.NewHadoopRDD: Removing RDD 1 from persistence list
16/07/25 15:45:25 INFO storage.BlockManager: Removing RDD 1
16/07/25 15:45:25 ERROR scheduler.LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener.onUnpersistRDD(EventLoggingListener.scala:186)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:50)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1180)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:1985)
at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1946)
at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
... 21 more
16/07/25 15:45:25 INFO BwaInterpreter: JMAbuin:: No sort with partitioning
16/07/25 15:45:25 ERROR scheduler.LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener.onUnpersistRDD(EventLoggingListener.scala:186)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:50)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1180)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
Caused by: java.io.IOException: Filesystem closed

[Question - RDD creation step, BWA-MEM step] With an alignment input size of 280 GB

Hi,

Thank you for your great work.

I am currently using your tool and have successfully executed the example ERR000589, which is 3.4 GB in size.

After trying ERR000589, I am now trying a paired-end alignment with an input size of 280 GB.
(The input is given as .gz files.)

I found that the RDD creation step takes about 5 hours.
I checked the log and found the following information:

  1. Log message in an executor: Thread 117 spilling in-memory map of 5.2 GB to disk (97 times so far)
  2. The tmp/hadoop/ folder grew to over 70 GB in the middle of the operation,
    which makes me suspect that the gzip-decompressed intermediate data is being written to disk.

In your experience, is it correct to say that the RDD creation step takes so long because the large input forces the intermediate data to be written to disk?
(In the paper, the D3 dataset, which is 48 GB, takes less than 10 minutes to create the RDD.)

Also, I found that BWA-MEM, which is stage 3 of SparkBWA, fails for the large 280 GB input. But when I increased the executor memory to 100 GB, it works without failing.
I suspect the failure is caused by the large input size; is that correct? (I used 32 partitions with 10~50 GB of executor memory.)
Will increasing the partition number lower the memory required for each executor?

Thank you!
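As a rough back-of-the-envelope check (my own sketch, not from the SparkBWA docs, and assuming the input splits evenly across partitions; gzipped files are not splittable, so they must first be decompressed and repartitioned):

```java
public class PartitionSizing {
    public static void main(String[] args) {
        // Approximate per-task data volume for a 280 GB input
        // at different partition counts (even-split assumption).
        double inputGb = 280.0;
        int[] partitionCounts = {32, 128, 512};
        for (int p : partitionCounts) {
            System.out.printf("%4d partitions -> ~%.2f GB per partition%n",
                    p, inputGb / p);
        }
    }
}
```

With 32 partitions each task handles roughly 8.75 GB of reads, plus the BWA index resident in memory; raising the partition count shrinks the per-task share, which is why more partitions can lower the memory needed per executor.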

Output is an empty .sam file!!

Hi,
I am using this command:

/opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/lib/spark/bin/spark-submit --class com.github.sparkbwa.SparkBWA --master yarn-cluster --driver-memory 10g --executor-memory 10g --executor-cores 1 --verbose --num-executors 32 /home/rokshan.jahan/adamproject/SparkBWA-master/target/SparkBWA-0.2.jar -m -r -s --index /home/rokshan.jahan/adamproject/reference/ref/Homo_sapiens_assembly.fasta -w "-R @rg\tID:SRR1517974\tLB:SRR1517974\tPL:illumina\tPU:illumina\tSM:SRR1517974" hdfs://ip-10-48-3-5.ips.local:8020/user/rokshan.jahan/data/fastqdata/SRR1517974.fastq hdfs://ip-10-48-3-5.ips.local:8020/user/rokshan.jahan/data/SRR1517974.sam

Index files I am using:
Homo_sapiens_assembly.fasta.sa
Homo_sapiens_assembly.fasta.pac
Homo_sapiens_assembly.fasta.fai
Homo_sapiens_assembly.fasta.bwt
Homo_sapiens_assembly.fasta.ann
Homo_sapiens_assembly.fasta.amb

I am not getting any error, but the output SAM file -- SRR1517974.sam -- is empty.

It looks like it's not reading the index files. How should I give the path so that it can read the index files?

Can anyone please help me with this? Any suggestion would be really helpful!

Thanks

Spark job fails saying methods cannot be called on a stopped SparkContext

I am receiving an error when attempting to execute a Spark job. Spark is running, so I'm not sure where the issue is coming from. Any clue?

Exception in thread "main" java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

org.apache.spark.SparkContext.(SparkContext.scala:82)
com.netreveal.sparkprofiler.detection.DetectionEngine$.main(DetectionEngine.scala:51)
com.netreveal.sparkprofiler.detection.DetectionEngine.main(DetectionEngine.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:750)
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

The currently active SparkContext was created at:

(No active SparkContext.)

    at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:106)
    at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1602)
    at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2178)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:579)
    at com.netreveal.sparkprofiler.detection.DetectionEngine$.main(DetectionEngine.scala:51)
    at com.netreveal.sparkprofiler.detection.DetectionEngine.main(DetectionEngine.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:750)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Mon Jan 27 14:25:26 UTC 2020

about processing SparkBWA warning

Hi~ I ran the example for SparkBWA. There are some warnings in the output, and I didn't find any log4j.xml in SparkBWA. I suspect these warnings may be why I am not getting results. Do you have any ideas?

For example:
spark-submit --class com.github.sparkbwa.SparkBWA --master local[2] --driver-memory 1500m --executor-memory 10g --executor-cores 1 --verbose --num-executors 32 SparkBWA-0.2.jar -m -r -p --index hg38/hg38.fasta -p 32 -w "-R @rg\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589" ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589

log4j:WARN No appenders could be found for logger (com.github.sparkbwa.BwaOptions).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
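(Those log4j warnings only mean that no log4j configuration file was found on the classpath; by themselves they should not prevent output. A minimal `log4j.properties` that silences them -- my own sketch for log4j 1.2, which the FAQ link in the warning refers to -- could look like:)

```properties
# Minimal log4j 1.2 configuration: log INFO and above to the console
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

Placing this file on the application classpath (or pointing at it with `-Dlog4j.configuration=file:/path/to/log4j.properties`) makes the warnings go away.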

Many Thanks, Hannah Lin

Wouldn't it be good to add a feature in your program to execute the BWA program directly?

I tried to compile and use SparkBWA, but had a lot of problems with the JNI part. Since I wanted to test it with BWA-MEM, I changed the run function so that, instead of using JNI, it executes the bwa mem program directly. The BWA executable is sent to the worker nodes with the --files option of spark-submit. I used ProcessBuilder with redirectOutput to run the BWA program and save its output to a file.

I have two questions. First, is executing bwa mem through ProcessBuilder slower than calling it through JNI? Second, what are the other advantages of using JNI instead of directly running the BWA executable? My modifications are shown below.

I would suggest that you add an option to your program to run the BWA executable directly.

public int run(int alnStep)
{
	// Get the list of arguments passed by the user
	String[] parametersArray = parseParameters(alnStep);

	// Derive a chunk id from the RDD name (the last parameter)
	String[] y = parametersArray[parametersArray.length - 1].split("RDD");
	String y1 = y[y.length - 1];
	String id = y1.split("_")[0];

	// Original call to JNI with the selected parameters
	/*int returnCode = BwaJni.Bwa_Jni(parametersArray);

	if (returnCode != 0) {
		LOG.error("[" + this.getClass().getName() +
			"] :: BWA exited with error code: " + String.valueOf(returnCode));
		log("run/" + id, "[" + this.getClass().getName() +
			"] :: BWA exited with error code: " + String.valueOf(returnCode));
		return returnCode;
	}*/
	try
	{
		// Run the bwa executable directly with 4 threads;
		// its stdout (the SAM output) is redirected to a file
		String[] runParams = {"./bwa", "mem", "-t", "4",
			parametersArray[6], parametersArray[7], parametersArray[8]};

		Process proc = new ProcessBuilder(runParams)
			.redirectOutput(new File(parametersArray[5]))
			.start();
		proc.waitFor();
		log("run/" + id, "bwa successfully finished!");
		return 0;
	}
	catch (Exception e)
	{
		log("run/" + id, "Error in processing this chunk! " + e.getMessage());
		return 1;
	}
}
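As a standalone illustration of the ProcessBuilder/redirectOutput pattern used above (my own minimal sketch; `echo` stands in for the `./bwa` executable, and `runAndCapture` is a hypothetical helper, not part of SparkBWA):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RedirectDemo {

    // Runs an external command, redirecting its stdout to a temp file,
    // then returns the captured output -- the same pattern as the
    // ProcessBuilder/redirectOutput call in the modified run() above.
    public static String runAndCapture(String... cmd) throws Exception {
        Path out = Files.createTempFile("bwa-demo", ".out");
        Process proc = new ProcessBuilder(cmd)
                .redirectOutput(out.toFile())
                .start();
        int code = proc.waitFor();
        if (code != 0) {
            throw new IOException("command exited with code " + code);
        }
        return new String(Files.readAllBytes(out));
    }

    public static void main(String[] args) throws Exception {
        // "echo" stands in for the ./bwa executable here.
        System.out.print(runAndCapture("echo", "aligned"));
    }
}
```

The fork/exec per chunk adds some overhead compared with an in-process JNI call, but since the alignment time is dominated by bwa itself, the difference is usually small relative to the cost of debugging JNI linkage problems.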

Error when using Azure blob storage paths for FASTQ files and SAM output file

I was trying to run SparkBWA on Azure's HDInsight with blob storage as the HDFS.

My two fastQ files and the jar file are on blob.

I specify spark-submit as follows:

spark-submit --class com.github.sparkbwa.SparkBWA --master yarn-cluster --verbose wasb://[email protected]/folder/SparkBWA-0.2.jar -a mem -p -w "-R @RG\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589" -i wasb://[email protected]/folder/hg38.fa wasb://[email protected]/folder/LP6005083-DNA_B03-read1.fastq wasb://[email protected]/folder/LP6005083-DNA_B03-read2.fastq wasb://[email protected]/folder/Output_LP6005083-DNA_B03
My cluster has 3 worker nodes, each with 16 cores and 112 GB of memory.

When I submit this job, the program fails, declaring that no input and output have been specified. Obviously, I have specified them; it is just that they are proper Azure blob paths.

Below are the error messages along with some context. Do I need to specify the blob storage paths differently? Does this work with public cloud clusters using AWS S3 or Azure Blob Storage?

17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: -a
17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: mem
17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: -p
17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: -w
17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: -R @RG\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589
17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: -i
17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: wasb://[email protected]/folder/hg38.fa
17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: wasb://[email protected]/folder/LP6005083-DNA_B03-read1.fastq
17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: wasb://[email protected]/folder/LP6005083-DNA_B03-read2.fastq
17/04/10 15:43:48 INFO BwaOptions: [com.github.sparkbwa.BwaOptions] :: Received argument: wasb://[email protected]/folder/Output_LP6005083-DNA_B03
17/04/10 15:43:48 ERROR BwaOptions: [com.github.sparkbwa.BwaOptions] No input and output has been found. Aborting.

Error SparkBWA

Hello,
Today I tried to run this application with a fresh new Spark-Hadoop cluster, but still no luck:

Some log snippets:

YARN executor launch context:
env:
CLASSPATH -> {{PWD}}{{PWD}}/spark.jar$HADOOP_CONF_DIR$HADOOP_COMMON_HOME/share/hadoop/common/$HADOOP_COMMON_HOME/share/hadoop/common/lib/$HADOOP_HDFS_HOME/share/hadoop/hdfs/$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/$HADOOP_YARN_HOME/share/hadoop/yarn/$HADOOP_YARN_HOME/share/hadoop/yarn/lib/$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/
SPARK_YARN_CACHE_ARCHIVES -> hdfs://master:9000/user/hadoop/.sparkStaging/application_1466685575616_0002/bwa.zip#bwa.zip
SPARK_LOG_URL_STDERR -> http://client15.example.com:8042/node/containerlogs/container_1466685575616_0002_01_000002/hadoop/stderr?start=-4096
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 187698038
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1466685575616_0002
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE
SPARK_YARN_CACHE_ARCHIVES_FILE_SIZES -> 1001391
SPARK_USER -> hadoop
SPARK_YARN_CACHE_ARCHIVES_TIME_STAMPS -> 1466686067853
SPARK_YARN_MODE -> true
SPARK_HOME -> /opt/spark/spark-1.6.1-bin-hadoop2.6
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1466686067186
SPARK_LOG_URL_STDOUT -> http://client15.example.com:8042/node/containerlogs/container_1466685575616_0002_01_000002/hadoop/stdout?start=-4096
SPARK_YARN_CACHE_ARCHIVES_VISIBILITIES -> PRIVATE
SPARK_YARN_CACHE_FILES -> hdfs://master:9000/user/hadoop/.sparkStaging/application_1466685575616_0002/spark-assembly-1.6.1-hadoop2.6.0.jar#spark.jar

Command formed in the application logs:

{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1500m -Xmx1500m '-Djava.library.path=/home/hadoop/LS_Tools/SparkBWA/build/bwa.zip' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=48908' -Dspark.yarn.app.container.log.dir=<LOG_DIR> -XX:MaxPermSize=256m org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url spark://[email protected]:48908 --executor-id 1 --hostname client16.example.com --cores 4 --app-id application_1466685575616_0002 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr

Stack Trace Error:

16/06/23 18:18:36 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-2,5,main]
java.lang.NoClassDefFoundError: Could not initialize class BwaJni
at Bwa.run(Bwa.java:443)
at BwaRDD$BwaAlignment.call(BwaRDD.java:288)
at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:105)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:105)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/06/23 18:18:36 ERROR util.SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-3,5,main]
java.lang.NoClassDefFoundError: Could not initialize class BwaJni
at Bwa.run(Bwa.java:443)
at BwaRDD$BwaAlignment.call(BwaRDD.java:288)
at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:105)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:105)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$22.apply(RDD.scala:745)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/06/23 18:18:36 INFO storage.DiskBlockManager: Shutdown hook called
16/06/23 18:18:36 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 36
16/06/23 18:18:36 INFO executor.Executor: Running task 4.0 in stage 3.2 (TID 36)
16/06/23 18:18:36 INFO util.ShutdownHookManager: Shutdown hook called
16/06/23 18:18:36 INFO util.ShutdownHookManager: Deleting directory /home/hadoop/hadoop_tempdir/nm-local-dir/usercache/hadoop/appcache/application_1466685575616_0002/spark-c222af54-6cc2-46f8-b777-8e17e73bef3b
16/06/23 18:18:36 INFO executor.CoarseGrainedExecutorBackend: Got assigned task 37

LogType:stdout
Log Upload Time:23-Jun-2016 18:18:57
LogLength:0
Log Contents:

Command used to run:

$SPARK_HOME/bin/spark-submit --class SparkBWA --master yarn-client --driver-memory 1500m --executor-memory 1500m --executor-cores 4 --archives bwa.zip --verbose --num-executors 1 SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/ECOLI -partitions 10 ecoli_2.fq ecoli_3.fq ecoli_23_8

Error when running with single-end reads

shell:

spark-submit --class SparkBWA \
--master yarn-client \
--archives bwa.zip \
--verbose \
SparkBWA.jar \
-algorithm mem -reads single \
-index /home/hadoop/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta \
-sorting hdfs \
-partitions 3 \
/xubo/alignment/sparkBWA/GRCH38chr1L3556522N10000SE.fastq \
/xubo/alignment/output/sparkBWA/datatest11se

error:

hadoop@Master:~/xubo/project/alignment/sparkBWA$ ./seGRCH38L1.sh 
Using properties file: /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.eventLog.dir=file:///home/hadoop/Downloads/hangc/sparklog
Adding default property: spark.eventLog.compress=true
Adding default property: spark.yarn.executor.memoryOverhead=3704
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          4G
  executorCores           null
  totalExecutorCores      null
  propertiesFile          /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf
  driverMemory            2G
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            null
  files                   null
  pyFiles                 null
  archives                file:/home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
  mainClass               SparkBWA
  primaryResource         file:/home/hadoop/xubo/project/alignment/sparkBWA/SparkBWA.jar
  name                    SparkBWA
  childArgs               [-algorithm mem -reads single -index /home/hadoop/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta -sorting hdfs -partitions 3 /xubo/alignment/sparkBWA/GRCH38chr1L3556522N10000SE.fastq /xubo/alignment/output/sparkBWA/datatest11se]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
 --conf and those from the properties file /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf:
  spark.eventLog.enabled -> true
  spark.eventLog.compress -> true
  spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
  spark.yarn.executor.memoryOverhead -> 3704
  spark.eventLog.dir -> file:///home/hadoop/Downloads/hangc/sparklog


Main class:
SparkBWA
Arguments:
-algorithm
mem
-reads
single
-index
/home/hadoop/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta
-sorting
hdfs
-partitions
3
/xubo/alignment/sparkBWA/GRCH38chr1L3556522N10000SE.fastq
/xubo/alignment/output/sparkBWA/datatest11se
System properties:
spark.driver.memory -> 2G
spark.executor.memory -> 4G
spark.eventLog.enabled -> true
spark.eventLog.compress -> true
SPARK_SUBMIT -> true
spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
spark.app.name -> SparkBWA
spark.yarn.executor.memoryOverhead -> 3704
spark.jars -> file:/home/hadoop/xubo/project/alignment/sparkBWA/SparkBWA.jar
spark.submit.deployMode -> client
spark.yarn.dist.archives -> file:/home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
spark.eventLog.dir -> file:///home/hadoop/Downloads/hangc/sparklog
spark.master -> yarn-client
Classpath elements:
file:/home/hadoop/xubo/project/alignment/sparkBWA/SparkBWA.jar


Exception in thread "main" java.lang.IllegalArgumentException: Can not create a Path from an empty string
    at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
    at org.apache.hadoop.fs.Path.<init>(Path.java:135)
    at BwaInterpreter.SortInHDFS2(BwaInterpreter.java:499)
    at BwaInterpreter.initInterpreter(BwaInterpreter.java:405)
    at BwaInterpreter.<init>(BwaInterpreter.java:94)
    at SparkBWA.main(SparkBWA.java:25)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Sorting with Spark fails at:
datasetTmp2 = ctx.newAPIHadoopFile(options.getInputPath2(), FastqInputFormat.class, Long.class, String.class,this.conf).persist(StorageLevel.MEMORY_ONLY());

Sorting with HDFS fails at:

pairedDataRDD = this.SortInHDFS2(options.getInputPath(), options.getInputPath2());//.persist(StorageLevel.MEMORY_ONLY());

The relevant code in BwaOptions:

    else if(otherArguments.length == 2){
                inputPath = otherArguments[0];
                outputPath = otherArguments[1];
            }
            else if (otherArguments.length == 3){
                inputPath = otherArguments[0];
                inputPath2 = otherArguments[1];
                outputPath = otherArguments[2];
            }

The argument parsing clearly supports single-end reads, so why is single-end not supported in public void initInterpreter()?
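To make the failure mode concrete, here is a minimal, hypothetical sketch (not SparkBWA's actual code) that mirrors the two branches quoted above together with the empty-string check that org.apache.hadoop.fs.Path performs. With only two positional arguments, inputPath2 is never assigned, so a later call like SortInHDFS2(inputPath, inputPath2) constructs a Path from an empty string and throws the IllegalArgumentException seen in the stack trace.

```java
public class ArgSketch {
    String inputPath = "", inputPath2 = "", outputPath = "";

    // Mirrors the two branches quoted from BwaOptions above.
    void parse(String[] otherArguments) {
        if (otherArguments.length == 2) {          // single-end: input, output
            inputPath = otherArguments[0];
            outputPath = otherArguments[1];
        } else if (otherArguments.length == 3) {   // paired-end: input1, input2, output
            inputPath = otherArguments[0];
            inputPath2 = otherArguments[1];
            outputPath = otherArguments[2];
        }
    }

    // Same kind of guard org.apache.hadoop.fs.Path.checkPathArg performs,
    // reproduced here so the sketch is self-contained.
    static void checkPathArg(String path) {
        if (path == null || path.length() == 0) {
            throw new IllegalArgumentException(
                "Can not create a Path from an empty string");
        }
    }
}
```

A fix in initInterpreter() would be to skip the second input path entirely when -reads single is given, instead of passing the still-empty inputPath2 on to the sorting step.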

mvn package compilation failing with certificate error

SparkBWA]$ ../apache-maven-3.3.9/bin/mvn package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building SparkBWA 0.2
[INFO] ------------------------------------------------------------------------
Downloading: https://mvn.128.no/maven2/cz/adamh/utils/native-utils/1.0-SNAPSHOT/maven-metadata.xml
[WARNING] Could not transfer metadata cz.adamh.utils:native-utils:1.0-SNAPSHOT/maven-metadata.xml from/to 128 (https://mvn.128.no/maven2): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
[WARNING] Failure to transfer cz.adamh.utils:native-utils:1.0-SNAPSHOT/maven-metadata.xml from https://mvn.128.no/maven2 was cached in the local repository, resolution will not be reattempted until the update interval of 128 has elapsed or updates are forced. Original error: Could not transfer metadata cz.adamh.utils:native-utils:1.0-SNAPSHOT/maven-metadata.xml from/to 128 (https://mvn.128.no/maven2): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Downloading: https://mvn.128.no/maven2/cz/adamh/utils/native-utils/1.0-SNAPSHOT/native-utils-1.0-SNAPSHOT.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 9.478 s
[INFO] Finished at: 2017-02-23T19:42:04+03:00
[INFO] Final Memory: 16M/568M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project SparkBWA: Could not resolve dependencies for project com.github.sparkbwa:SparkBWA:jar:0.2: Failed to collect dependencies at cz.adamh.utils:native-utils:jar:1.0-SNAPSHOT: Failed to read artifact descriptor for cz.adamh.utils:native-utils:jar:1.0-SNAPSHOT: Could not transfer artifact cz.adamh.utils:native-utils:pom:1.0-SNAPSHOT from/to 128 (https://mvn.128.no/maven2): sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException

Spark 2.0?

Are there any plans to support Spark 2.0? My limited testing suggests it doesn't just work as-is.

ERROR YarnScheduler: Lost executor 5 on Mcnode6: remote Rpc client disassociated

When --executor-cores is 2, there is a problem:
ERROR YarnScheduler: Lost executor 5 on Mcnode6: remote Rpc client disassociated

This happens with both SparkBWA-0.1 and SparkBWA-0.2.

There is no problem when we set --executor-cores to 1, but then CPU utilization is below 50%, since every node has two cores. (As a workaround, we can use BWA's multithreading to improve performance.)
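As a back-of-envelope illustration of the workaround just mentioned (one-core executors plus BWA's own threading), the model below estimates how many node cores stay busy. All names and the formula are assumptions for illustration, not measurements from SparkBWA:

```java
public class NodeParallelism {
    // Rough model: each executor core runs one alignment task at a time,
    // and each task's BWA process spawns bwaThreads worker threads.
    // A node cannot use more cores than it physically has.
    static int busyCores(int executorsPerNode, int executorCores,
                         int bwaThreads, int nodeCores) {
        return Math.min(nodeCores, executorsPerNode * executorCores * bwaThreads);
    }
}
```

Under this model, a single 1-core executor with BWA running 2 threads keeps both cores of a 2-core node busy, which is the effect described above.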


The error is the same as in #6.

Cannot start SparkBWA

shell:

spark-submit --class SparkBWA \
--master local \
--driver-memory 1500m \
--executor-memory 1500m \
--executor-cores 1 \
--archives bwa.zip \
--verbose \
--num-executors 32 \
SparkBWA.jar \
-algorithm mem -reads paired \
-index /home/hadoop/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta \
-partitions 3 \
/xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired1.fastq /xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired2.fastq \
/xubo/alignment/output/sparkBWA/datatest1

error:

16/06/19 20:23:03 ERROR Executor: Exception in task 1.0 in stage 3.0 (TID 4)
java.lang.UnsatisfiedLinkError: no bwa in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)
    at BwaJni.<clinit>(BwaJni.java:44)
    at Bwa.run(Bwa.java:443)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:283)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:1024)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/06/19 20:23:03 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 3)
java.lang.NoClassDefFoundError: Could not initialize class BwaJni
    at Bwa.run(Bwa.java:443)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:283)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:1024)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
16/06/19 20:23:03 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-1,5,main]
java.lang.UnsatisfiedLinkError: no bwa in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)

How can this error be solved?
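A hedged observation, not a confirmed fix: both stack traces point at System.loadLibrary failing, and the submit configuration sets java.library.path to a path ending in build/bwa.zip. The JVM searches each java.library.path entry as a directory, so an entry that is a plain zip file can never yield libbwa.so; on YARN, --archives bwa.zip unpacks the archive into the container's working directory under the name bwa.zip, which is why a relative ./bwa.zip directory can work on executors while an absolute path to the zip file itself does not. The sketch below (hypothetical helper, not part of SparkBWA) just reports which java.library.path entries are usable directories:

```java
import java.io.File;

public class LibraryPathCheck {
    // System.loadLibrary searches each java.library.path entry as a
    // directory; an entry that is a plain .zip file can never match.
    static boolean usableAsLibraryDir(String entry) {
        return new File(entry).isDirectory();
    }

    public static void main(String[] args) {
        String libPath = System.getProperty("java.library.path", "");
        for (String entry : libPath.split(File.pathSeparator)) {
            if (!usableAsLibraryDir(entry)) {
                System.out.println("Not a directory (libbwa cannot load from here): " + entry);
            }
        }
    }
}
```

Running this inside an executor (or locally, for --master local) would show whether the configured entry actually resolves to a directory containing the extracted native library.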

Which versions?

Which versions of Spark, Hadoop, and Java do you recommend?

BUILD FAILURE: Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.5.0:exec

I am trying to install SparkBWA in a Docker container, but I get the following error when I run mvn package. What could be the problem?

[ERROR] Command execution failed.
java.io.IOException: Cannot run program "make" (in directory "/SparkBWA"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at java.lang.Runtime.exec(Runtime.java:620)
at org.apache.commons.exec.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:61)
at org.apache.commons.exec.DefaultExecutor.launch(DefaultExecutor.java:279)
at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:336)
at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:166)
at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:764)
at org.codehaus.mojo.exec.ExecMojo.executeCommandLine(ExecMojo.java:711)
at org.codehaus.mojo.exec.ExecMojo.execute(ExecMojo.java:289)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:247)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
... 29 more
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:20.994s
[INFO] Finished at: Wed May 24 16:46:10 UTC 2017
[INFO] Final Memory: 16M/139M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.5.0:exec (makeBwaClean) on project SparkBWA: Command execution failed. Cannot run program "make" (in directory "/SparkBWA"): error=2, No such file or directory -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
The command '/bin/sh -c mvn clean package' returned a non-zero code: 1
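For context on the error above: error=2 is ENOENT, i.e. the make binary itself is missing from the container, so the likely fix is to install the build tools in the image before running mvn package (for example build-essential on a Debian-based base image, which is an assumption about your Dockerfile). The hypothetical snippet below reproduces the same ProcessBuilder failure mode that Maven's exec plugin hits:

```java
import java.io.IOException;

public class MissingBinaryDemo {
    // Returns true when the command cannot even be started because the
    // executable is absent from PATH (errno 2, ENOENT) -- the same
    // failure mode as Maven's exec plugin trying to run "make".
    static boolean failsWithEnoent(String command) {
        try {
            new ProcessBuilder(command).start();
            return false;
        } catch (IOException e) {
            String msg = String.valueOf(e.getMessage());
            return msg.contains("error=2") || msg.contains("No such file");
        }
    }

    public static void main(String[] args) {
        System.out.println(failsWithEnoent("definitely-not-a-real-binary-xyz"));
    }
}
```

If failsWithEnoent("make") is true inside the container, installing the build toolchain (make, gcc, zlib headers, as BWA's own build requires) should let the Maven build proceed.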

SparkBWA generates empty SAM files

17/04/21 08:11:45 INFO ContainerManagementProtocolProxy: Opening proxy : slave2.hdp:45454
17/04/21 08:11:48 INFO YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (slave1.hdp:41948) with ID 1
17/04/21 08:11:48 INFO BlockManagerMasterEndpoint: Registering block manager slave1.hdp:38864 with 7.0 GB RAM, BlockManagerId(1, slave1.hdp, 38864)
17/04/21 08:11:48 INFO YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (slave.hdp:50548) with ID 2
17/04/21 08:11:48 INFO BlockManagerMasterEndpoint: Registering block manager slave.hdp:43602 with 7.0 GB RAM, BlockManagerId(2, slave.hdp, 43602)
17/04/21 08:11:48 INFO YarnClusterSchedulerBackend: Registered executor NettyRpcEndpointRef(null) (slave2.hdp:49614) with ID 3
17/04/21 08:11:48 INFO BlockManagerMasterEndpoint: Registering block manager slave2.hdp:46273 with 7.0 GB RAM, BlockManagerId(3, slave2.hdp, 46273)
17/04/21 08:12:14 INFO YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
17/04/21 08:12:14 INFO YarnClusterScheduler: YarnClusterScheduler.postStartHook done
17/04/21 08:12:14 INFO BwaInterpreter: [com.github.sparkbwa.BwaInterpreter] :: Starting BWA
17/04/21 08:12:14 INFO BwaInterpreter: [com.github.sparkbwa.BwaInterpreter] ::Not sorting in HDFS. Timing: 47447818973648
17/04/21 08:12:14 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 341.8 KB, free 341.8 KB)
17/04/21 08:12:14 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 28.3 KB, free 370.2 KB)
17/04/21 08:12:14 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.2.86:32844 (size: 28.3 KB, free: 1140.3 MB)
17/04/21 08:12:14 INFO SparkContext: Created broadcast 0 from textFile at BwaInterpreter.java:149
17/04/21 08:12:14 INFO FileInputFormat: Total input paths to process : 1
17/04/21 08:12:14 INFO SparkContext: Starting job: zipWithIndex at BwaInterpreter.java:152
17/04/21 08:12:14 INFO DAGScheduler: Got job 0 (zipWithIndex at BwaInterpreter.java:152) with 13 output partitions
17/04/21 08:12:14 INFO DAGScheduler: Final stage: ResultStage 0 (zipWithIndex at BwaInterpreter.java:152)
17/04/21 08:12:14 INFO DAGScheduler: Parents of final stage: List()
17/04/21 08:12:14 INFO DAGScheduler: Missing parents: List()
17/04/21 08:12:14 INFO DAGScheduler: Submitting ResultStage 0 (hdfs://master.hdp:8020/SparkBWA/ERR000589_1.filt.fastq MapPartitionsRDD[1] at textFile at BwaInterpreter.java:149), which has no missing parents
17/04/21 08:12:14 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener.onJobStart(EventLoggingListener.scala:173)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:34)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:818)
at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2037)
at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1983)
at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
... 21 more
17/04/21 08:12:14 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.0 KB, free 373.2 KB)
17/04/21 08:12:14 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1857.0 B, free 375.0 KB)
17/04/21 08:12:14 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.2.86:32844 (size: 1857.0 B, free: 1140.3 MB)
17/04/21 08:12:14 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1008
17/04/21 08:12:14 INFO DAGScheduler: Submitting 13 missing tasks from ResultStage 0 (hdfs://master.hdp:8020/SparkBWA/ERR000589_1.filt.fastq MapPartitionsRDD[1] at textFile at BwaInterpreter.java:149)
17/04/21 08:12:14 INFO YarnClusterScheduler: Adding task set 0.0 with 13 tasks
17/04/21 08:12:14 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, slave.hdp, partition 0,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:14 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, slave2.hdp, partition 1,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:14 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, slave1.hdp, partition 2,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:14 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on slave1.hdp:38864 (size: 1857.0 B, free: 7.0 GB)
17/04/21 08:12:14 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on slave1.hdp:38864 (size: 28.3 KB, free: 7.0 GB)
17/04/21 08:12:14 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on slave.hdp:43602 (size: 1857.0 B, free: 7.0 GB)
17/04/21 08:12:14 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on slave2.hdp:46273 (size: 1857.0 B, free: 7.0 GB)
17/04/21 08:12:15 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on slave.hdp:43602 (size: 28.3 KB, free: 7.0 GB)
17/04/21 08:12:15 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on slave2.hdp:46273 (size: 28.3 KB, free: 7.0 GB)
17/04/21 08:12:17 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, slave.hdp, partition 3,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:17 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 2556 ms on slave.hdp (1/13)
17/04/21 08:12:17 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, slave1.hdp, partition 4,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:17 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 2662 ms on slave1.hdp (2/13)
17/04/21 08:12:17 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, slave2.hdp, partition 5,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:17 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 2904 ms on slave2.hdp (3/13)
17/04/21 08:12:18 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, slave.hdp, partition 6,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:18 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 1708 ms on slave.hdp (4/13)
17/04/21 08:12:18 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, slave2.hdp, partition 7,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:18 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 1406 ms on slave2.hdp (5/13)
17/04/21 08:12:19 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 2160 ms on slave1.hdp (6/13)
17/04/21 08:12:19 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, slave1.hdp, partition 8,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:20 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, slave.hdp, partition 9,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:20 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 1375 ms on slave.hdp (7/13)
17/04/21 08:12:20 INFO TaskSetManager: Starting task 10.0 in stage 0.0 (TID 10, slave2.hdp, partition 10,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:20 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 1522 ms on slave2.hdp (8/13)
17/04/21 08:12:21 INFO TaskSetManager: Starting task 11.0 in stage 0.0 (TID 11, slave1.hdp, partition 11,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:21 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 1652 ms on slave1.hdp (9/13)
17/04/21 08:12:21 INFO TaskSetManager: Starting task 12.0 in stage 0.0 (TID 12, slave.hdp, partition 12,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:21 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 1603 ms on slave.hdp (10/13)
17/04/21 08:12:21 INFO TaskSetManager: Finished task 11.0 in stage 0.0 (TID 11) in 877 ms on slave1.hdp (11/13)
17/04/21 08:12:22 INFO TaskSetManager: Finished task 10.0 in stage 0.0 (TID 10) in 1659 ms on slave2.hdp (12/13)
17/04/21 08:12:23 INFO TaskSetManager: Finished task 12.0 in stage 0.0 (TID 12) in 1581 ms on slave.hdp (13/13)
17/04/21 08:12:23 INFO DAGScheduler: ResultStage 0 (zipWithIndex at BwaInterpreter.java:152) finished in 8.808 s
17/04/21 08:12:23 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/04/21 08:12:23 INFO DAGScheduler: Job 0 finished: zipWithIndex at BwaInterpreter.java:152, took 8.884092 s
17/04/21 08:12:23 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener.onStageCompleted(EventLoggingListener.scala:170)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:32)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:818)
at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2037)
at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1983)
at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
... 21 more
17/04/21 08:12:23 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at org.apache.spark.scheduler.EventLoggingListener.onJobEnd(EventLoggingListener.scala:175)
... (remaining frames identical to the trace above)
Caused by: java.io.IOException: Filesystem closed
... 21 more
17/04/21 08:12:23 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 341.9 KB, free 716.9 KB)
17/04/21 08:12:23 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 28.3 KB, free 745.2 KB)
17/04/21 08:12:23 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on 192.168.2.86:32844 (size: 28.3 KB, free: 1140.3 MB)
17/04/21 08:12:23 INFO SparkContext: Created broadcast 2 from textFile at BwaInterpreter.java:149
17/04/21 08:12:23 INFO FileInputFormat: Total input paths to process : 1
17/04/21 08:12:23 INFO SparkContext: Starting job: zipWithIndex at BwaInterpreter.java:152
17/04/21 08:12:23 INFO DAGScheduler: Got job 1 (zipWithIndex at BwaInterpreter.java:152) with 13 output partitions
17/04/21 08:12:23 INFO DAGScheduler: Final stage: ResultStage 1 (zipWithIndex at BwaInterpreter.java:152)
17/04/21 08:12:23 INFO DAGScheduler: Parents of final stage: List()
17/04/21 08:12:23 INFO DAGScheduler: Missing parents: List()
17/04/21 08:12:23 INFO DAGScheduler: Submitting ResultStage 1 (hdfs://master.hdp:8020/SparkBWA/ERR000589_2.filt.fastq MapPartitionsRDD[8] at textFile at BwaInterpreter.java:149), which has no missing parents
17/04/21 08:12:23 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at org.apache.spark.scheduler.EventLoggingListener.onJobStart(EventLoggingListener.scala:173)
... (remaining frames identical to the trace above)
Caused by: java.io.IOException: Filesystem closed
... 21 more
17/04/21 08:12:23 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 3.0 KB, free 748.2 KB)
17/04/21 08:12:23 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 1863.0 B, free 750.1 KB)
17/04/21 08:12:23 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 192.168.2.86:32844 (size: 1863.0 B, free: 1140.3 MB)
17/04/21 08:12:23 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:1008
17/04/21 08:12:23 INFO DAGScheduler: Submitting 13 missing tasks from ResultStage 1 (hdfs://master.hdp:8020/SparkBWA/ERR000589_2.filt.fastq MapPartitionsRDD[8] at textFile at BwaInterpreter.java:149)
17/04/21 08:12:23 INFO YarnClusterScheduler: Adding task set 1.0 with 13 tasks
17/04/21 08:12:23 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 13, slave1.hdp, partition 0,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:23 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 14, slave2.hdp, partition 1,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:23 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 15, slave.hdp, partition 2,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:23 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on slave1.hdp:38864 (size: 1863.0 B, free: 7.0 GB)
17/04/21 08:12:23 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on slave.hdp:43602 (size: 1863.0 B, free: 7.0 GB)
17/04/21 08:12:23 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on slave2.hdp:46273 (size: 1863.0 B, free: 7.0 GB)
17/04/21 08:12:23 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on slave1.hdp:38864 (size: 28.3 KB, free: 7.0 GB)
17/04/21 08:12:23 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on slave2.hdp:46273 (size: 28.3 KB, free: 7.0 GB)
17/04/21 08:12:23 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on slave.hdp:43602 (size: 28.3 KB, free: 7.0 GB)
17/04/21 08:12:24 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 16, slave.hdp, partition 3,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:24 INFO TaskSetManager: Finished task 2.0 in stage 1.0 (TID 15) in 1111 ms on slave.hdp (1/13)
17/04/21 08:12:24 INFO TaskSetManager: Starting task 4.0 in stage 1.0 (TID 17, slave1.hdp, partition 4,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:24 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 13) in 1233 ms on slave1.hdp (2/13)
17/04/21 08:12:24 INFO TaskSetManager: Starting task 5.0 in stage 1.0 (TID 18, slave2.hdp, partition 5,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:24 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 14) in 1415 ms on slave2.hdp (3/13)
17/04/21 08:12:25 INFO TaskSetManager: Starting task 6.0 in stage 1.0 (TID 19, slave1.hdp, partition 6,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:25 INFO TaskSetManager: Finished task 4.0 in stage 1.0 (TID 17) in 861 ms on slave1.hdp (4/13)
17/04/21 08:12:25 INFO TaskSetManager: Starting task 7.0 in stage 1.0 (TID 20, slave.hdp, partition 7,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:25 INFO TaskSetManager: Finished task 3.0 in stage 1.0 (TID 16) in 1128 ms on slave.hdp (5/13)
17/04/21 08:12:25 INFO TaskSetManager: Starting task 8.0 in stage 1.0 (TID 21, slave2.hdp, partition 8,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:25 INFO TaskSetManager: Finished task 5.0 in stage 1.0 (TID 18) in 1075 ms on slave2.hdp (6/13)
17/04/21 08:12:26 INFO TaskSetManager: Starting task 9.0 in stage 1.0 (TID 22, slave1.hdp, partition 9,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:26 INFO TaskSetManager: Finished task 6.0 in stage 1.0 (TID 19) in 852 ms on slave1.hdp (7/13)
17/04/21 08:12:26 INFO TaskSetManager: Starting task 10.0 in stage 1.0 (TID 23, slave.hdp, partition 10,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:26 INFO TaskSetManager: Finished task 7.0 in stage 1.0 (TID 20) in 969 ms on slave.hdp (8/13)
17/04/21 08:12:27 INFO TaskSetManager: Starting task 11.0 in stage 1.0 (TID 24, slave2.hdp, partition 11,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:27 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 21) in 1091 ms on slave2.hdp (9/13)
17/04/21 08:12:27 INFO TaskSetManager: Starting task 12.0 in stage 1.0 (TID 25, slave1.hdp, partition 12,NODE_LOCAL, 2156 bytes)
17/04/21 08:12:27 INFO TaskSetManager: Finished task 9.0 in stage 1.0 (TID 22) in 891 ms on slave1.hdp (10/13)
17/04/21 08:12:27 INFO TaskSetManager: Finished task 10.0 in stage 1.0 (TID 23) in 976 ms on slave.hdp (11/13)
17/04/21 08:12:28 INFO TaskSetManager: Finished task 12.0 in stage 1.0 (TID 25) in 861 ms on slave1.hdp (12/13)
17/04/21 08:12:28 INFO TaskSetManager: Finished task 11.0 in stage 1.0 (TID 24) in 1159 ms on slave2.hdp (13/13)
17/04/21 08:12:28 INFO DAGScheduler: ResultStage 1 (zipWithIndex at BwaInterpreter.java:152) finished in 4.737 s
17/04/21 08:12:28 INFO YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/04/21 08:12:28 INFO DAGScheduler: Job 1 finished: zipWithIndex at BwaInterpreter.java:152, took 4.746554 s
17/04/21 08:12:28 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at org.apache.spark.scheduler.EventLoggingListener.onStageCompleted(EventLoggingListener.scala:170)
... (remaining frames identical to the trace above)
Caused by: java.io.IOException: Filesystem closed
... 21 more
17/04/21 08:12:28 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at org.apache.spark.scheduler.EventLoggingListener.onJobEnd(EventLoggingListener.scala:175)
... (remaining frames identical to the trace above)
Caused by: java.io.IOException: Filesystem closed
... 21 more
17/04/21 08:12:28 INFO MapPartitionsRDD: Removing RDD 6 from persistence list
17/04/21 08:12:28 INFO BlockManager: Removing RDD 6
17/04/21 08:12:28 INFO MapPartitionsRDD: Removing RDD 13 from persistence list
17/04/21 08:12:28 INFO BlockManager: Removing RDD 13
17/04/21 08:12:28 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at org.apache.spark.scheduler.EventLoggingListener.onUnpersistRDD(EventLoggingListener.scala:186)
... (remaining frames identical to the trace above)
Caused by: java.io.IOException: Filesystem closed
... 21 more
17/04/21 08:12:28 INFO BwaInterpreter: [com.github.sparkbwa.BwaInterpreter] :: No sort with partitioning
17/04/21 08:12:28 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at org.apache.spark.scheduler.EventLoggingListener.onUnpersistRDD(EventLoggingListener.scala:186)
... (remaining frames identical to the trace above)
Caused by: java.io.IOException: Filesystem closed
... 21 more
17/04/21 08:12:28 INFO BwaInterpreter: [com.github.sparkbwa.BwaInterpreter] :: Repartition with no sort
17/04/21 08:12:28 INFO BwaInterpreter: [com.github.sparkbwa.BwaInterpreter] :: End of sorting. Timing: 47461927545800
17/04/21 08:12:28 INFO BwaInterpreter: [com.github.sparkbwa.BwaInterpreter] :: Total time: 0.23514286920000002 minutes
17/04/21 08:12:28 INFO BwaAlignmentBase: [com.github.sparkbwa.BwaPairedAlignment] :: application_1492697141087_0027 - SparkBWA_ERR000589_1.filt.fastq-32-NoSort
17/04/21 08:12:28 INFO SparkContext: Starting job: collect at BwaInterpreter.java:305
17/04/21 08:12:28 INFO DAGScheduler: Registering RDD 3 (mapToPair at BwaInterpreter.java:152)
17/04/21 08:12:28 INFO DAGScheduler: Registering RDD 10 (mapToPair at BwaInterpreter.java:152)
17/04/21 08:12:28 INFO DAGScheduler: Registering RDD 17 (repartition at BwaInterpreter.java:281)
17/04/21 08:12:28 INFO DAGScheduler: Got job 2 (collect at BwaInterpreter.java:305) with 32 output partitions
17/04/21 08:12:28 INFO DAGScheduler: Final stage: ResultStage 5 (collect at BwaInterpreter.java:305)
17/04/21 08:12:28 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 4)
17/04/21 08:12:28 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 4)
17/04/21 08:12:28 INFO DAGScheduler: Submitting ShuffleMapStage 2 (MapPartitionsRDD[3] at mapToPair at BwaInterpreter.java:152), which has no missing parents
17/04/21 08:12:28 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at org.apache.spark.scheduler.EventLoggingListener.onJobStart(EventLoggingListener.scala:173)
... (remaining frames identical to the trace above)
Caused by: java.io.IOException: Filesystem closed
... 20 more
17/04/21 08:12:28 INFO MemoryStore: Block broadcast_4 stored as values in memory (estimated size 5.3 KB, free 755.3 KB)
17/04/21 08:12:28 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 2.8 KB, free 758.2 KB)
17/04/21 08:12:28 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on 192.168.2.86:32844 (size: 2.8 KB, free: 1140.3 MB)
17/04/21 08:12:28 INFO SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1008
17/04/21 08:12:28 INFO DAGScheduler: Submitting 14 missing tasks from ShuffleMapStage 2 (MapPartitionsRDD[3] at mapToPair at BwaInterpreter.java:152)
17/04/21 08:12:28 INFO YarnClusterScheduler: Adding task set 2.0 with 14 tasks
17/04/21 08:12:28 INFO DAGScheduler: Submitting ShuffleMapStage 3 (MapPartitionsRDD[10] at mapToPair at BwaInterpreter.java:152), which has no missing parents
17/04/21 08:12:28 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 26, slave1.hdp, partition 0,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:28 INFO TaskSetManager: Starting task 1.0 in stage 2.0 (TID 27, slave2.hdp, partition 1,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:28 INFO TaskSetManager: Starting task 2.0 in stage 2.0 (TID 28, slave.hdp, partition 2,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:28 INFO MemoryStore: Block broadcast_5 stored as values in memory (estimated size 5.3 KB, free 763.4 KB)
17/04/21 08:12:28 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on slave1.hdp:38864 (size: 2.8 KB, free: 7.0 GB)
17/04/21 08:12:28 INFO MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 2.8 KB, free 766.3 KB)
17/04/21 08:12:28 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on 192.168.2.86:32844 (size: 2.8 KB, free: 1140.3 MB)
17/04/21 08:12:28 INFO SparkContext: Created broadcast 5 from broadcast at DAGScheduler.scala:1008
17/04/21 08:12:28 INFO DAGScheduler: Submitting 14 missing tasks from ShuffleMapStage 3 (MapPartitionsRDD[10] at mapToPair at BwaInterpreter.java:152)
17/04/21 08:12:28 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on slave.hdp:43602 (size: 2.8 KB, free: 7.0 GB)
17/04/21 08:12:28 INFO YarnClusterScheduler: Adding task set 3.0 with 14 tasks
17/04/21 08:12:28 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on slave2.hdp:46273 (size: 2.8 KB, free: 7.0 GB)
17/04/21 08:12:34 INFO TaskSetManager: Starting task 3.0 in stage 2.0 (TID 29, slave2.hdp, partition 3,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:34 INFO TaskSetManager: Finished task 1.0 in stage 2.0 (TID 27) in 6478 ms on slave2.hdp (1/14)
17/04/21 08:12:36 INFO TaskSetManager: Starting task 4.0 in stage 2.0 (TID 30, slave.hdp, partition 4,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:36 INFO TaskSetManager: Finished task 2.0 in stage 2.0 (TID 28) in 8047 ms on slave.hdp (2/14)
17/04/21 08:12:36 INFO TaskSetManager: Starting task 5.0 in stage 2.0 (TID 31, slave1.hdp, partition 5,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:36 INFO TaskSetManager: Finished task 0.0 in stage 2.0 (TID 26) in 8334 ms on slave1.hdp (3/14)
17/04/21 08:12:42 INFO TaskSetManager: Starting task 6.0 in stage 2.0 (TID 32, slave2.hdp, partition 6,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:42 INFO TaskSetManager: Finished task 3.0 in stage 2.0 (TID 29) in 7902 ms on slave2.hdp (4/14)
17/04/21 08:12:44 INFO TaskSetManager: Starting task 7.0 in stage 2.0 (TID 33, slave.hdp, partition 7,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:45 INFO TaskSetManager: Finished task 4.0 in stage 2.0 (TID 30) in 8723 ms on slave.hdp (5/14)
17/04/21 08:12:45 INFO TaskSetManager: Starting task 8.0 in stage 2.0 (TID 34, slave1.hdp, partition 8,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:45 INFO TaskSetManager: Finished task 5.0 in stage 2.0 (TID 31) in 8952 ms on slave1.hdp (6/14)
17/04/21 08:12:50 INFO TaskSetManager: Starting task 9.0 in stage 2.0 (TID 35, slave2.hdp, partition 9,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:50 INFO TaskSetManager: Finished task 6.0 in stage 2.0 (TID 32) in 8277 ms on slave2.hdp (7/14)
17/04/21 08:12:51 INFO TaskSetManager: Starting task 10.0 in stage 2.0 (TID 36, slave1.hdp, partition 10,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:51 INFO TaskSetManager: Finished task 8.0 in stage 2.0 (TID 34) in 6232 ms on slave1.hdp (8/14)
17/04/21 08:12:53 INFO TaskSetManager: Starting task 11.0 in stage 2.0 (TID 37, slave.hdp, partition 11,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:53 INFO TaskSetManager: Finished task 7.0 in stage 2.0 (TID 33) in 8497 ms on slave.hdp (9/14)
17/04/21 08:12:59 INFO TaskSetManager: Starting task 12.0 in stage 2.0 (TID 38, slave2.hdp, partition 12,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:59 INFO TaskSetManager: Finished task 9.0 in stage 2.0 (TID 35) in 8242 ms on slave2.hdp (10/14)
17/04/21 08:12:59 INFO TaskSetManager: Starting task 13.0 in stage 2.0 (TID 39, slave1.hdp, partition 13,NODE_LOCAL, 2255 bytes)
17/04/21 08:12:59 INFO TaskSetManager: Finished task 10.0 in stage 2.0 (TID 36) in 7685 ms on slave1.hdp (11/14)
17/04/21 08:13:01 INFO TaskSetManager: Starting task 0.0 in stage 3.0 (TID 40, slave.hdp, partition 0,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:01 INFO TaskSetManager: Finished task 11.0 in stage 2.0 (TID 37) in 8151 ms on slave.hdp (12/14)
17/04/21 08:13:01 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on slave.hdp:43602 (size: 2.8 KB, free: 7.0 GB)
17/04/21 08:13:03 INFO TaskSetManager: Starting task 1.0 in stage 3.0 (TID 41, slave1.hdp, partition 1,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:03 INFO TaskSetManager: Finished task 13.0 in stage 2.0 (TID 39) in 3730 ms on slave1.hdp (13/14)
17/04/21 08:13:03 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on slave1.hdp:38864 (size: 2.8 KB, free: 7.0 GB)
17/04/21 08:13:08 INFO TaskSetManager: Starting task 2.0 in stage 3.0 (TID 42, slave2.hdp, partition 2,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:08 INFO TaskSetManager: Finished task 12.0 in stage 2.0 (TID 38) in 9660 ms on slave2.hdp (14/14)
17/04/21 08:13:08 INFO YarnClusterScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool
17/04/21 08:13:08 INFO DAGScheduler: ShuffleMapStage 2 (mapToPair at BwaInterpreter.java:152) finished in 40.556 s
17/04/21 08:13:08 INFO DAGScheduler: looking for newly runnable stages
17/04/21 08:13:08 INFO DAGScheduler: running: Set(ShuffleMapStage 3)
17/04/21 08:13:08 INFO DAGScheduler: waiting: Set(ResultStage 5, ShuffleMapStage 4)
17/04/21 08:13:08 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at org.apache.spark.scheduler.EventLoggingListener.onStageCompleted(EventLoggingListener.scala:170)
... (remaining frames identical to the trace above)
Caused by: java.io.IOException: Filesystem closed
... 20 more
17/04/21 08:13:08 INFO DAGScheduler: failed: Set()
17/04/21 08:13:08 INFO BlockManagerInfo: Added broadcast_5_piece0 in memory on slave2.hdp:46273 (size: 2.8 KB, free: 7.0 GB)
17/04/21 08:13:09 INFO TaskSetManager: Starting task 3.0 in stage 3.0 (TID 43, slave.hdp, partition 3,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:09 INFO TaskSetManager: Finished task 0.0 in stage 3.0 (TID 40) in 8088 ms on slave.hdp (1/14)
17/04/21 08:13:10 INFO TaskSetManager: Starting task 4.0 in stage 3.0 (TID 44, slave1.hdp, partition 4,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:10 INFO TaskSetManager: Finished task 1.0 in stage 3.0 (TID 41) in 7719 ms on slave1.hdp (2/14)
17/04/21 08:13:16 INFO TaskSetManager: Starting task 5.0 in stage 3.0 (TID 45, slave.hdp, partition 5,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:16 INFO TaskSetManager: Finished task 3.0 in stage 3.0 (TID 43) in 6821 ms on slave.hdp (3/14)
17/04/21 08:13:16 INFO TaskSetManager: Starting task 6.0 in stage 3.0 (TID 46, slave2.hdp, partition 6,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:16 INFO TaskSetManager: Finished task 2.0 in stage 3.0 (TID 42) in 7732 ms on slave2.hdp (4/14)
17/04/21 08:13:17 INFO TaskSetManager: Starting task 7.0 in stage 3.0 (TID 47, slave1.hdp, partition 7,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:17 INFO TaskSetManager: Finished task 4.0 in stage 3.0 (TID 44) in 6100 ms on slave1.hdp (5/14)
17/04/21 08:13:24 INFO TaskSetManager: Starting task 8.0 in stage 3.0 (TID 48, slave.hdp, partition 8,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:24 INFO TaskSetManager: Finished task 5.0 in stage 3.0 (TID 45) in 7780 ms on slave.hdp (6/14)
17/04/21 08:13:24 INFO TaskSetManager: Starting task 9.0 in stage 3.0 (TID 49, slave2.hdp, partition 9,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:24 INFO TaskSetManager: Finished task 6.0 in stage 3.0 (TID 46) in 7916 ms on slave2.hdp (7/14)
17/04/21 08:13:25 INFO TaskSetManager: Starting task 10.0 in stage 3.0 (TID 50, slave1.hdp, partition 10,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:25 INFO TaskSetManager: Finished task 7.0 in stage 3.0 (TID 47) in 8519 ms on slave1.hdp (8/14)
17/04/21 08:13:31 INFO TaskSetManager: Starting task 11.0 in stage 3.0 (TID 51, slave.hdp, partition 11,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:31 INFO TaskSetManager: Finished task 8.0 in stage 3.0 (TID 48) in 7668 ms on slave.hdp (9/14)
17/04/21 08:13:32 INFO TaskSetManager: Starting task 12.0 in stage 3.0 (TID 52, slave2.hdp, partition 12,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:32 INFO TaskSetManager: Finished task 9.0 in stage 3.0 (TID 49) in 7587 ms on slave2.hdp (10/14)
17/04/21 08:13:33 INFO TaskSetManager: Starting task 13.0 in stage 3.0 (TID 53, slave1.hdp, partition 13,NODE_LOCAL, 2255 bytes)
17/04/21 08:13:33 INFO TaskSetManager: Finished task 10.0 in stage 3.0 (TID 50) in 8168 ms on slave1.hdp (11/14)
17/04/21 08:13:37 INFO TaskSetManager: Finished task 13.0 in stage 3.0 (TID 53) in 4209 ms on slave1.hdp (12/14)
17/04/21 08:13:39 INFO TaskSetManager: Finished task 11.0 in stage 3.0 (TID 51) in 7536 ms on slave.hdp (13/14)
17/04/21 08:13:40 INFO TaskSetManager: Finished task 12.0 in stage 3.0 (TID 52) in 7997 ms on slave2.hdp (14/14)
17/04/21 08:13:40 INFO DAGScheduler: ShuffleMapStage 3 (mapToPair at BwaInterpreter.java:152) finished in 71.731 s
17/04/21 08:13:40 INFO DAGScheduler: looking for newly runnable stages
17/04/21 08:13:40 INFO YarnClusterScheduler: Removed TaskSet 3.0, whose tasks have all completed, from pool
17/04/21 08:13:40 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener.onStageCompleted(EventLoggingListener.scala:170)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:32)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:818)
at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2037)
at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1983)
at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
... 20 more
17/04/21 08:13:40 INFO DAGScheduler: running: Set()
17/04/21 08:13:40 INFO DAGScheduler: waiting: Set(ResultStage 5, ShuffleMapStage 4)
17/04/21 08:13:40 INFO DAGScheduler: failed: Set()
17/04/21 08:13:40 INFO DAGScheduler: Submitting ShuffleMapStage 4 (MapPartitionsRDD[17] at repartition at BwaInterpreter.java:281), which has no missing parents
17/04/21 08:13:40 INFO MemoryStore: Block broadcast_6 stored as values in memory (estimated size 8.3 KB, free 774.6 KB)
17/04/21 08:13:40 INFO MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 3.9 KB, free 778.5 KB)
17/04/21 08:13:40 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on 192.168.2.86:32844 (size: 3.9 KB, free: 1140.3 MB)
17/04/21 08:13:40 INFO SparkContext: Created broadcast 6 from broadcast at DAGScheduler.scala:1008
17/04/21 08:13:40 INFO DAGScheduler: Submitting 14 missing tasks from ShuffleMapStage 4 (MapPartitionsRDD[17] at repartition at BwaInterpreter.java:281)
17/04/21 08:13:40 INFO YarnClusterScheduler: Adding task set 4.0 with 14 tasks
17/04/21 08:13:40 INFO TaskSetManager: Starting task 0.0 in stage 4.0 (TID 54, slave1.hdp, partition 0,NODE_LOCAL, 2132 bytes)
17/04/21 08:13:40 INFO TaskSetManager: Starting task 1.0 in stage 4.0 (TID 55, slave2.hdp, partition 1,NODE_LOCAL, 2132 bytes)
17/04/21 08:13:40 INFO TaskSetManager: Starting task 2.0 in stage 4.0 (TID 56, slave.hdp, partition 2,NODE_LOCAL, 2132 bytes)
17/04/21 08:13:40 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on slave1.hdp:38864 (size: 3.9 KB, free: 7.0 GB)
17/04/21 08:13:40 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on slave2.hdp:46273 (size: 3.9 KB, free: 7.0 GB)
17/04/21 08:13:40 INFO BlockManagerInfo: Added broadcast_6_piece0 in memory on slave.hdp:43602 (size: 3.9 KB, free: 7.0 GB)
17/04/21 08:13:40 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 2 to slave2.hdp:49614
17/04/21 08:13:40 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 2 is 207 bytes
17/04/21 08:13:40 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 2 to slave.hdp:50548
17/04/21 08:13:40 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 2 to slave1.hdp:41948
17/04/21 08:13:47 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 1 to slave.hdp:50548
17/04/21 08:13:47 INFO MapOutputTrackerMaster: Size of output statuses for shuffle 1 is 188 bytes
17/04/21 08:13:48 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 1 to slave1.hdp:41948
17/04/21 08:13:48 INFO MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 1 to slave2.hdp:49614
17/04/21 08:14:04 INFO TaskSetManager: Starting task 3.0 in stage 4.0 (TID 57, slave.hdp, partition 3,NODE_LOCAL, 2132 bytes)
17/04/21 08:14:04 INFO TaskSetManager: Finished task 2.0 in stage 4.0 (TID 56) in 24645 ms on slave.hdp (1/14)
17/04/21 08:14:06 INFO TaskSetManager: Starting task 4.0 in stage 4.0 (TID 58, slave2.hdp, partition 4,NODE_LOCAL, 2132 bytes)
17/04/21 08:14:06 INFO TaskSetManager: Finished task 1.0 in stage 4.0 (TID 55) in 26426 ms on slave2.hdp (2/14)
17/04/21 08:14:07 INFO TaskSetManager: Starting task 5.0 in stage 4.0 (TID 59, slave1.hdp, partition 5,NODE_LOCAL, 2132 bytes)
17/04/21 08:14:07 INFO TaskSetManager: Finished task 0.0 in stage 4.0 (TID 54) in 27388 ms on slave1.hdp (3/14)
17/04/21 08:14:32 INFO TaskSetManager: Starting task 6.0 in stage 4.0 (TID 60, slave.hdp, partition 6,NODE_LOCAL, 2132 bytes)
17/04/21 08:14:32 INFO TaskSetManager: Finished task 3.0 in stage 4.0 (TID 57) in 27494 ms on slave.hdp (4/14)
17/04/21 08:14:38 INFO TaskSetManager: Starting task 7.0 in stage 4.0 (TID 61, slave2.hdp, partition 7,NODE_LOCAL, 2132 bytes)
17/04/21 08:14:38 INFO TaskSetManager: Finished task 4.0 in stage 4.0 (TID 58) in 31786 ms on slave2.hdp (5/14)
17/04/21 08:14:39 INFO TaskSetManager: Starting task 8.0 in stage 4.0 (TID 62, slave1.hdp, partition 8,NODE_LOCAL, 2132 bytes)
17/04/21 08:14:39 INFO TaskSetManager: Finished task 5.0 in stage 4.0 (TID 59) in 31780 ms on slave1.hdp (6/14)
17/04/21 08:14:57 INFO TaskSetManager: Starting task 9.0 in stage 4.0 (TID 63, slave.hdp, partition 9,NODE_LOCAL, 2132 bytes)
17/04/21 08:14:57 INFO TaskSetManager: Finished task 6.0 in stage 4.0 (TID 60) in 25588 ms on slave.hdp (7/14)
17/04/21 08:15:07 INFO TaskSetManager: Starting task 10.0 in stage 4.0 (TID 64, slave2.hdp, partition 10,NODE_LOCAL, 2132 bytes)
17/04/21 08:15:08 INFO TaskSetManager: Finished task 7.0 in stage 4.0 (TID 61) in 29562 ms on slave2.hdp (8/14)
17/04/21 08:15:28 INFO TaskSetManager: Starting task 11.0 in stage 4.0 (TID 65, slave1.hdp, partition 11,NODE_LOCAL, 2132 bytes)
17/04/21 08:15:28 INFO TaskSetManager: Finished task 8.0 in stage 4.0 (TID 62) in 49647 ms on slave1.hdp (9/14)
17/04/21 08:15:35 INFO TaskSetManager: Starting task 12.0 in stage 4.0 (TID 66, slave.hdp, partition 12,NODE_LOCAL, 2132 bytes)
17/04/21 08:15:35 INFO TaskSetManager: Finished task 9.0 in stage 4.0 (TID 63) in 38007 ms on slave.hdp (10/14)
17/04/21 08:15:40 INFO TaskSetManager: Starting task 13.0 in stage 4.0 (TID 67, slave2.hdp, partition 13,NODE_LOCAL, 2132 bytes)
17/04/21 08:15:40 INFO TaskSetManager: Finished task 10.0 in stage 4.0 (TID 64) in 32524 ms on slave2.hdp (11/14)
17/04/21 08:17:17 INFO TaskSetManager: Finished task 13.0 in stage 4.0 (TID 67) in 97403 ms on slave2.hdp (12/14)
17/04/21 08:17:20 INFO TaskSetManager: Finished task 12.0 in stage 4.0 (TID 66) in 104233 ms on slave.hdp (13/14)
17/04/21 08:18:24 INFO TaskSetManager: Finished task 11.0 in stage 4.0 (TID 65) in 175387 ms on slave1.hdp (14/14)
17/04/21 08:18:24 INFO DAGScheduler: ShuffleMapStage 4 (repartition at BwaInterpreter.java:281) finished in 284.201 s
17/04/21 08:18:24 INFO YarnClusterScheduler: Removed TaskSet 4.0, whose tasks have all completed, from pool
17/04/21 08:18:24 INFO DAGScheduler: looking for newly runnable stages
17/04/21 08:18:24 INFO DAGScheduler: running: Set()
17/04/21 08:18:24 INFO DAGScheduler: waiting: Set(ResultStage 5)
17/04/21 08:18:24 INFO DAGScheduler: failed: Set()
17/04/21 08:18:24 INFO DAGScheduler: Submitting ResultStage 5 (MapPartitionsRDD[22] at mapPartitionsWithIndex at BwaInterpreter.java:304), which has no missing parents
17/04/21 08:18:24 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:150)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:150)
at org.apache.spark.scheduler.EventLoggingListener.onStageCompleted(EventLoggingListener.scala:170)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:32)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:55)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(AsynchronousListenerBus.scala:80)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(AsynchronousListenerBus.scala:65)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:64)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1181)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:818)
at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2037)
at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1983)
at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
... 20 more

This is the command I am running:
/usr/bin/spark-submit --class com.github.sparkbwa.SparkBWA --master yarn-cluster --driver-memory 2g --executor-memory 10g --executor-cores 1 --verbose --num-executors 32 /home/SparkBWA/target/SparkBWA-0.2.jar -m -r -p --index hdfs://master.hdp:8020/Data/HumanBase/hg19.fa -n 32 -w "-R @rg\tID:foo\tLB:bar\tPL:illumina\tPU:illumina\tSM:ERR000589" hdfs://master.hdp:8020/SparkBWA/ERR000589_1.filt.fastq hdfs://master.hdp:8020/SparkBWA/ERR000589_2.filt.fastq hdfs://master.hdp:8020/sample/output
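The repeated `java.io.IOException: Filesystem closed` in the event-logging listener typically appears when some code path closes the JVM-wide cached Hadoop `FileSystem` instance that Spark's event log writer is also using. A hedged workaround sketch (keeping the submit command above otherwise unchanged) is to disable the FileSystem cache via Spark's `spark.hadoop.*` pass-through, so the event-log writer gets its own HDFS client:

```shell
# Sketch only: same submit command as above, with one extra --conf flag.
# fs.hdfs.impl.disable.cache=true makes each FileSystem.get() return a fresh
# instance, so a close() elsewhere cannot invalidate the event-log stream.
/usr/bin/spark-submit --class com.github.sparkbwa.SparkBWA \
  --master yarn-cluster \
  --conf spark.hadoop.fs.hdfs.impl.disable.cache=true \
  ...   # remaining options and arguments as in the original command
```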
@salimbakker


yarn-cluster mode actually taking a single node and core

Hi,
I am using the command below to run the application in yarn-cluster mode:
$SPARK_HOME/bin/spark-submit --class SparkBWA --master yarn-cluster --driver-memory 8G --executor-memory 6G --executor-cores 6 --verbose --num-executors 6 SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/HG -partitions 16 /user/hadoop/ERR000030_HG1.fastq /user/hadoop/ERR000030_HG2.fastq /user/hadoop/op_hg_cluster_mem
Given the parameters passed, the scheduler should utilize the different nodes and cores available in the cluster.
But, as shown in the attached cluster information from the UI
[cluster_details screenshot]
it is taking a single node and 1 core only. Since my data size this time is approx. 5 GB (reads and index), the terminal keeps displaying logs like:

16/06/28 12:30:07 INFO yarn.Client: Application report for application_1467096309474_0001 (state: RUNNING)
16/06/28 12:30:08 INFO yarn.Client: Application report for application_1467096309474_0001 (state: RUNNING)
16/06/28 12:30:09 INFO yarn.Client: Application report for application_1467096309474_0001 (state: RUNNING)
16/06/28 12:30:10 INFO yarn.Client: Application report for application_1467096309474_0001 (state: RUNNING),

It seems to be stuck in some kind of never-ending loop.
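For what it's worth, the repeating `Application report ... (state: RUNNING)` lines are just the YARN client polling the application state once per second, not a hang in themselves. Whether YARN actually granted the requested executors can be checked from the YARN CLI, using the application id shown in the log above:

```shell
# Show how many containers/vcores YARN allocated to this application.
yarn application -status application_1467096309474_0001

# List cluster nodes with their used and available resources, to see
# whether more than one node is actually hosting containers.
yarn node -list -all
```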

Running with master mode, error: File does not exist

shell:


spark-submit --class SparkBWA \
--master  spark://Master:7077 \
--conf "spark.executor.extraJavaOptions=-Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build" \
--driver-java-options "-Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build" \
SparkBWA.jar \
-r \
-algorithm mem -reads paired \
-index /home/hadoop/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta \
-partitions 3 \
/xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired1.fastq /xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired2.fastq \
sparkBWA/datatestLocalGRCH38chr1L3556522N10L50paired12Master3Mcnode6

error:

ERROR BwaInterpreter: java.io.FileNotFoundException: File does not exist:

code:

LOG.info("JMAbuin:: SparkBWA :: Returned file ::"+returnedValues.get(i));
BufferedReader br=new BufferedReader(new InputStreamReader(fs.open(new Path(returnedValues.get(i)))));

There is a result file, FullOutput.sam, but it is empty (null).
Is this line the cause of the error?: FileSystem fs = FileSystem.get(conf);
Running in local mode and in yarn-client mode works correctly, but standalone master mode fails.
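One hedged guess about the line quoted above: `FileSystem.get(conf)` returns a JVM-wide cached instance, so a `close()` elsewhere in the application can invalidate it, and the part file may simply not exist yet where the driver looks for it. A defensive-read sketch using Hadoop's client API (class and method names here are illustrative, not SparkBWA's actual code) could look like:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SamReaderSketch {

    /** Reads a SAM part file if present, using a private FileSystem handle. */
    public static void readIfPresent(Configuration conf, String file) throws Exception {
        // newInstance() bypasses the shared FileSystem cache, so a close()
        // issued elsewhere in the application cannot invalidate this handle.
        try (FileSystem fs = FileSystem.newInstance(conf)) {
            Path p = new Path(file);
            // Guard against the FileNotFoundException seen in the log:
            // only open the part file if the writer actually produced it.
            if (!fs.exists(p)) {
                System.err.println("Output part not found (yet): " + p);
                return;
            }
            try (BufferedReader br =
                     new BufferedReader(new InputStreamReader(fs.open(p)))) {
                String line;
                while ((line = br.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }
}
```

With an empty `Configuration` this resolves to the local filesystem; on a cluster it picks up `fs.defaultFS` from the Hadoop config, so the same code reads from HDFS.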

log:


q-3-NoSort-app-20160624220707-0114-0.sam
java.io.FileNotFoundException: File does not exist: /user/hadoop/sparkBWA/datatestLocalGRCH38chr1L3556522N10L50paired12Master3Mcnode6/SparkBWA_GRCH38chr1L3556522N10L50paired1.fastq-3-NoSort-app-20160624220707-0114-0.sam
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1222)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1210)
    at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:1200)
    at org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:271)
    at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:238)
    at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:231)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1498)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:302)
    at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:298)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:298)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
    at BwaRDD.MapBwa(BwaRDD.java:125)
    at BwaInterpreter.RunBwa(BwaInterpreter.java:437)
    at SparkBWA.main(SparkBWA.java:30)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /user/hadoop/sparkBWA/datatestLocalGRCH38chr1L3556522N10L50paired12Master3Mcnode6/SparkBWA_GRCH38chr1L3556522N10L50paired1.fastq-3-NoSort-app-20160624220707-0114-0.sam
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy12.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:254)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy13.getBlockLocations(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:1220)
    ... 23 more
16/06/24 22:07:11 ERROR BwaInterpreter: java.io.FileNotFoundException: File does not exist: /user/hadoop/sparkBWA/datatestLocalGRCH38chr1L3556522N10L50paired12Master3Mcnode6/SparkBWA_GRCH38chr1L3556522N10L50paired1.fastq-3-NoSort-app-20160624220707-0114-0.sam
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
    at org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

16/06/24 22:07:11 INFO spark.SparkContext: Invoking stop() from shutdown hook

ERROR LiveListenerBus: Listener EventLoggingListener threw an exception

Something went wrong. Can anyone help me? Thanks!

My command: nohup spark-submit --class SparkBWA --master yarn-client --driver-memory 6000m --executor-memory 6000m --executor-cores 1 --archives /share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/bwa.zip --verbose --num-executors 1 /share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/SparkBWA.jar -algorithm mem -reads paired -index /share/data/wangzhen/project/2016/spark_alignment/SparkBWA/test/database/16s_refseq.fna.formatted -partitions 1 /wangzhen/sparkBWA/10_1.fq /wangzhen/sparkBWA/10_2.fq /wangzhen/sparkBWA/result

The error:
Using properties file: /usr/lib/spark151/conf/spark-defaults.conf
Adding default property: spark.port.maxRetries=40
Adding default property: spark.akka.timeout=300
Adding default property: spark.serializer=org.apache.spark.serializer.KryoSerializer
Adding default property: spark.executor.extraJavaOptions=-Djava.library.path=/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/bwa.zip
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.driver.maxResultSize=6g
Adding default property: spark.worker.cleanup.appDataTtl=24*3600
Adding default property: spark.network.timeout=300
Adding default property: spark.storage.memoryFraction=0.8
Adding default property: spark.driver.memory=8g
Adding default property: spark.default.parallelism=48
Adding default property: spark.shuffle.spill=true
Adding default property: spark.master=spark://master:7077
Adding default property: spark.shuffle.file.buffer=96k
Adding default property: spark.local.dir=/data/sparkTmp
Adding default property: spark.eventLog.dir=hdfs://master:9000/sparkHistoryLog
Adding default property: spark.worker.timeout=120
Adding default property: spark.eventLog.compress=true
Adding default property: spark.task.cpus=1
Adding default property: spark.shuffle.consolidateFiles=true
Adding default property: spark.task.maxFailures=8
Parsed arguments:
master yarn-client
deployMode null
executorMemory 6000m
executorCores 1
totalExecutorCores null
propertiesFile /usr/lib/spark151/conf/spark-defaults.conf
driverMemory 6000m
driverCores null
driverExtraClassPath null
driverExtraLibraryPath null
driverExtraJavaOptions null
supervise false
queue null
numExecutors 1
files null
pyFiles null
archives file:/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/bwa.zip
mainClass SparkBWA
primaryResource file:/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/SparkBWA.jar
name SparkBWA
childArgs [-algorithm mem -reads paired -index /share/data/wangzhen/project/2016/spark_alignment/SparkBWA/test/database/16s_refseq.fna.formatted -partitions 1 /wangzhen/sparkBWA/10_1.fq /wangzhen/sparkBWA/10_2.fq /wangzhen/sparkBWA/result]
jars null
packages null
packagesExclusions null
repositories null
verbose true

Spark properties used, including those specified through
--conf and those from the properties file /usr/lib/spark151/conf/spark-defaults.conf:
spark.local.dir -> /data/sparkTmp
spark.default.parallelism -> 48
spark.driver.memory -> 6000m
spark.network.timeout -> 300
spark.worker.cleanup.appDataTtl -> 24*3600
spark.eventLog.compress -> true
spark.worker.timeout -> 120
spark.eventLog.enabled -> true
spark.akka.timeout -> 300
spark.shuffle.consolidateFiles -> true
spark.serializer -> org.apache.spark.serializer.KryoSerializer
spark.task.cpus -> 1
spark.executor.extraJavaOptions -> -Djava.library.path=/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/bwa.zip
spark.shuffle.spill -> true
spark.task.maxFailures -> 8
spark.eventLog.dir -> hdfs://master:9000/sparkHistoryLog
spark.master -> spark://master:7077
spark.driver.maxResultSize -> 6g
spark.port.maxRetries -> 40
spark.storage.memoryFraction -> 0.8
spark.shuffle.file.buffer -> 96k

Main class:
SparkBWA
Arguments:
-algorithm
mem
-reads
paired
-index
/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/test/database/16s_refseq.fna.formatted
-partitions
1
/wangzhen/sparkBWA/10_1.fq
/wangzhen/sparkBWA/10_2.fq
/wangzhen/sparkBWA/result
System properties:
spark.local.dir -> /data/sparkTmp
spark.default.parallelism -> 48
spark.driver.memory -> 6000m
spark.network.timeout -> 300
spark.executor.memory -> 6000m
spark.executor.instances -> 1
spark.worker.cleanup.appDataTtl -> 24*3600
spark.eventLog.compress -> true
spark.worker.timeout -> 120
spark.eventLog.enabled -> true
SPARK_SUBMIT -> true
spark.akka.timeout -> 300
spark.shuffle.consolidateFiles -> true
spark.serializer -> org.apache.spark.serializer.KryoSerializer
spark.task.cpus -> 1
spark.executor.extraJavaOptions -> -Djava.library.path=/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/bwa.zip
spark.app.name -> SparkBWA
spark.shuffle.spill -> true
spark.jars -> file:/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/SparkBWA.jar
spark.task.maxFailures -> 8
spark.yarn.dist.archives -> file:/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/bwa.zip
spark.submit.deployMode -> client
spark.eventLog.dir -> hdfs://master:9000/sparkHistoryLog
spark.driver.maxResultSize -> 6g
spark.master -> yarn-client
spark.port.maxRetries -> 40
spark.executor.cores -> 1
spark.shuffle.file.buffer -> 96k
spark.storage.memoryFraction -> 0.8
Classpath elements:
file:/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/SparkBWA.jar

16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: -algorithm
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: mem
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: -reads
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: paired
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: -index
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: /share/data/wangzhen/project/2016/spark_alignment/SparkBWA/test/database/16s_refseq.fna.formatted
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: -partitions
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: 1
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: /wangzhen/sparkBWA/10_1.fq
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: /wangzhen/sparkBWA/10_2.fq
16/07/13 10:54:39 INFO BwaOptions: JMAbuin:: Received argument: /wangzhen/sparkBWA/result
16/07/13 10:54:39 INFO SparkContext: Running Spark version 1.5.1
16/07/13 10:54:39 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/13 10:54:39 WARN SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
16/07/13 10:54:39 INFO SecurityManager: Changing view acls to: root
16/07/13 10:54:39 INFO SecurityManager: Changing modify acls to: root
16/07/13 10:54:39 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/13 10:54:40 INFO Slf4jLogger: Slf4jLogger started
16/07/13 10:54:40 INFO Remoting: Starting remoting
16/07/13 10:54:40 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:33949]
16/07/13 10:54:40 INFO Utils: Successfully started service 'sparkDriver' on port 33949.
16/07/13 10:54:40 INFO SparkEnv: Registering MapOutputTracker
16/07/13 10:54:40 INFO SparkEnv: Registering BlockManagerMaster
16/07/13 10:54:41 INFO DiskBlockManager: Created local directory at /data/sparkTmp/blockmgr-18818f3c-9076-43fa-affa-4018282f974b
16/07/13 10:54:41 INFO MemoryStore: MemoryStore started with capacity 4.0 GB
16/07/13 10:54:41 INFO HttpFileServer: HTTP File server directory is /data/sparkTmp/spark-0eb513a2-e2e1-4c8d-b1c2-044d7211cfc0/httpd-61146258-69d4-4c7f-9dda-337ec907943b
16/07/13 10:54:41 INFO HttpServer: Starting HTTP Server
16/07/13 10:54:41 INFO Utils: Successfully started service 'HTTP file server' on port 40187.
16/07/13 10:54:41 INFO SparkEnv: Registering OutputCommitCoordinator
16/07/13 10:54:41 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/07/13 10:54:41 INFO SparkUI: Started SparkUI at http://192.168.100.17:4040
16/07/13 10:54:41 INFO SparkContext: Added JAR file:/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/SparkBWA.jar at http://192.168.100.17:40187/jars/SparkBWA.jar with timestamp 1468378481570
16/07/13 10:54:41 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/07/13 10:54:41 INFO RMProxy: Connecting to ResourceManager at master/192.168.100.17:8032
16/07/13 10:54:41 INFO Client: Requesting a new application from cluster with 4 NodeManagers
16/07/13 10:54:42 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
16/07/13 10:54:42 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/07/13 10:54:42 INFO Client: Setting up container launch context for our AM
16/07/13 10:54:42 INFO Client: Setting up the launch environment for our AM container
16/07/13 10:54:42 INFO Client: Preparing resources for our AM container
16/07/13 10:54:42 INFO Client: Uploading resource file:/usr/lib/spark151/lib/spark-assembly-1.5.1-hadoop2.6.0.jar -> hdfs://master:9000/user/root/.sparkStaging/application_1468374952451_0011/spark-assembly-1.5.1-hadoop2.6.0.jar
16/07/13 10:54:50 INFO Client: Uploading resource file:/share/data/wangzhen/project/2016/spark_alignment/SparkBWA/build/bwa.zip -> hdfs://master:9000/user/root/.sparkStaging/application_1468374952451_0011/bwa.zip
16/07/13 10:54:50 INFO Client: Uploading resource file:/data/sparkTmp/spark-0eb513a2-e2e1-4c8d-b1c2-044d7211cfc0/__spark_conf__3121139071193347484.zip -> hdfs://master:9000/user/root/.sparkStaging/application_1468374952451_0011/__spark_conf__3121139071193347484.zip
16/07/13 10:54:50 INFO SecurityManager: Changing view acls to: root
16/07/13 10:54:50 INFO SecurityManager: Changing modify acls to: root
16/07/13 10:54:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/07/13 10:54:50 INFO Client: Submitting application 11 to ResourceManager
16/07/13 10:54:50 INFO YarnClientImpl: Submitted application application_1468374952451_0011
16/07/13 10:54:51 INFO Client: Application report for application_1468374952451_0011 (state: ACCEPTED)
16/07/13 10:54:51 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1468378490370
final status: UNDEFINED
tracking URL: http://master:8089/proxy/application_1468374952451_0011/
user: root
16/07/13 10:54:52 INFO Client: Application report for application_1468374952451_0011 (state: ACCEPTED)
16/07/13 10:54:53 INFO Client: Application report for application_1468374952451_0011 (state: ACCEPTED)
16/07/13 10:54:54 INFO Client: Application report for application_1468374952451_0011 (state: ACCEPTED)
16/07/13 10:54:55 INFO Client: Application report for application_1468374952451_0011 (state: ACCEPTED)
16/07/13 10:54:56 INFO Client: Application report for application_1468374952451_0011 (state: ACCEPTED)
16/07/13 10:54:57 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:40327/user/YarnAM#-347527355])
16/07/13 10:54:57 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master, PROXY_URI_BASES -> http://master:8089/proxy/application_1468374952451_0011), /proxy/application_1468374952451_0011
16/07/13 10:54:57 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/07/13 10:54:57 INFO Client: Application report for application_1468374952451_0011 (state: RUNNING)
16/07/13 10:54:57 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: 192.168.100.22
ApplicationMaster RPC port: 0
queue: default
start time: 1468378490370
final status: UNDEFINED
tracking URL: http://master:8089/proxy/application_1468374952451_0011/
user: root
16/07/13 10:54:57 INFO YarnClientSchedulerBackend: Application application_1468374952451_0011 has started running.
16/07/13 10:54:57 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 42843.
16/07/13 10:54:57 INFO NettyBlockTransferService: Server created on 42843
16/07/13 10:54:57 INFO BlockManagerMaster: Trying to register BlockManager
16/07/13 10:54:57 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.100.17:42843 with 4.0 GB RAM, BlockManagerId(driver, 192.168.100.17, 42843)
16/07/13 10:54:57 INFO BlockManagerMaster: Registered BlockManager
16/07/13 10:54:57 INFO EventLoggingListener: Logging events to hdfs://master:9000/sparkHistoryLog/application_1468374952451_0011.snappy
16/07/13 10:55:07 INFO YarnClientSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@slave2:37212/user/Executor#574201963]) with ID 1
16/07/13 10:55:07 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
16/07/13 10:55:07 INFO BwaInterpreter: JMAbuin:: Starting BWA
16/07/13 10:55:07 INFO BwaInterpreter: JMAbuin::Not sorting in HDFS. Timing: 4690973152386
16/07/13 10:55:07 INFO BlockManagerMasterEndpoint: Registering block manager slave2:46841 with 4.0 GB RAM, BlockManagerId(1, slave2, 46841)
16/07/13 10:55:07 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:148)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:148)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:148)
at org.apache.spark.scheduler.EventLoggingListener.onBlockManagerAdded(EventLoggingListener.scala:176)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:46)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:56)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:79)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1136)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:1985)
at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1946)
at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
... 17 more
16/07/13 10:55:08 INFO MemoryStore: ensureFreeSpace(234648) called with curMem=0, maxMem=4341104640
16/07/13 10:55:08 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 229.1 KB, free 4.0 GB)
16/07/13 10:55:08 INFO MemoryStore: ensureFreeSpace(20248) called with curMem=234648, maxMem=4341104640
16/07/13 10:55:08 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 19.8 KB, free 4.0 GB)
16/07/13 10:55:08 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.100.17:42843 (size: 19.8 KB, free: 4.0 GB)
16/07/13 10:55:08 INFO SparkContext: Created broadcast 0 from newAPIHadoopFile at BwaInterpreter.java:246
16/07/13 10:55:08 INFO MemoryStore: ensureFreeSpace(234648) called with curMem=254896, maxMem=4341104640
16/07/13 10:55:08 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 229.1 KB, free 4.0 GB)
16/07/13 10:55:08 INFO MemoryStore: ensureFreeSpace(20248) called with curMem=489544, maxMem=4341104640
16/07/13 10:55:08 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 19.8 KB, free 4.0 GB)
16/07/13 10:55:08 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.100.17:42843 (size: 19.8 KB, free: 4.0 GB)
16/07/13 10:55:08 INFO SparkContext: Created broadcast 1 from newAPIHadoopFile at BwaInterpreter.java:247
16/07/13 10:55:08 INFO FileInputFormat: Total input paths to process : 1
16/07/13 10:55:08 INFO FileInputFormat: Total input paths to process : 1
16/07/13 10:55:08 INFO NewHadoopRDD: Removing RDD 0 from persistence list
16/07/13 10:55:08 INFO BlockManager: Removing RDD 0
16/07/13 10:55:08 INFO NewHadoopRDD: Removing RDD 1 from persistence list
16/07/13 10:55:08 INFO BlockManager: Removing RDD 1
16/07/13 10:55:08 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:148)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$3.apply(EventLoggingListener.scala:148)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:148)
at org.apache.spark.scheduler.EventLoggingListener.onUnpersistRDD(EventLoggingListener.scala:184)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:50)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:56)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:79)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1136)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:795)
at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:1985)
at org.apache.hadoop.hdfs.DFSOutputStream.hflush(DFSOutputStream.java:1946)
at org.apache.hadoop.fs.FSDataOutputStream.hflush(FSDataOutputStream.java:130)
... 17 more
16/07/13 10:55:08 INFO BwaInterpreter: JMAbuin:: No sort with partitioning
16/07/13 10:55:08 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception

Help needed to run SparkBWA without Hadoop

Hi, is there any intention of enabling SparkBWA to run using only Spark's native RDDs and native scheduler, without going through the hassle of HDFS? The latter requires a running Hadoop installation, which can be a huge hiccup if one wants to run under Torque, for example. In contrast, running the Spark local scheduler under PBS/Torque is an easy and smooth process, since it does not require Hadoop to be running on each compute node in an HPC cluster setup.

Basically, I would appreciate a few hints on how to save the output to a file without first transferring the RDDs to HDFS and writing the final SAM file there.

I am not a Java person (read: zero knowledge), but from what I can gather, the four files to be modified are Bwa.java, BwaOptions.java, BwaAlignementBase.java (around line 135), and BwaInterpreter.java.

Greatly appreciated.
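For anyone attempting this, one possible direction (a sketch only, not part of SparkBWA's actual API) is to collect the aligned SAM records on the driver and write them out with plain Java I/O, which avoids any HDFS dependency. The `writeSam` helper below is hypothetical; in practice, the records would come from something like `JavaRDD<String>#collect()` inside `BwaInterpreter.java`, and the collected output must fit within `spark.driver.maxResultSize`:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class LocalSamWriter {

    // Hypothetical helper: write SAM records that were gathered on the
    // driver (e.g. via JavaRDD#collect()) straight to the local
    // filesystem, bypassing HDFS entirely.
    public static void writeSam(List<String> samLines, String outputPath) throws IOException {
        Path out = Paths.get(outputPath);
        if (out.getParent() != null) {
            Files.createDirectories(out.getParent());
        }
        Files.write(out, samLines);
    }

    public static void main(String[] args) throws IOException {
        // Two toy SAM lines standing in for the collected alignment output.
        List<String> lines = Arrays.asList(
            "@HD\tVN:1.5\tSO:unsorted",
            "read1\t0\tref\t1\t60\t10M\t*\t0\t0\tACGTACGTAC\tIIIIIIIIII");
        writeSam(lines, "out/result.sam");
        System.out.println(Files.readAllLines(Paths.get("out/result.sam")).size());
    }
}
```

Alternatively, Spark's `saveAsTextFile` accepts `file://` URIs, but note that in cluster mode each executor writes its partitions to its own local disk, so that route only yields a single coherent result if the path is on a shared filesystem visible to every worker (or if the RDD is first coalesced to one partition).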
