
Comments (29)

jmabuin avatar jmabuin commented on August 20, 2024 1

Keep in mind that a normal user having access to all computing nodes in order to copy a shared library is not a common situation. That is why SparkBWA uses the Hadoop distributed cache to ship the bwa.zip file to all nodes and uncompress it there, shared library included.

If you copy the library to all nodes it will of course work, but it is not a best-practice approach.
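
As a concrete illustration of the distributed-cache approach described above, a minimal launch sketch (the index and read paths below are placeholders, not values from a real run):

spark-submit --class SparkBWA \
  --master yarn-client \
  --archives bwa.zip \
  SparkBWA.jar \
  -algorithm mem -reads paired \
  -index /path/to/reference.fasta \
  /hdfs/path/reads_1.fq /hdfs/path/reads_2.fq /hdfs/path/output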

from sparkbwa.

jmabuin avatar jmabuin commented on August 20, 2024

Have you configured Spark according to https://github.com/citiususc/SparkBWA#configuring-spark?

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

I had configured it on the master before, and ran with master local[4].

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

Today I tried to run with yarn-client, but it failed. I have configured Spark on every node and copied SparkBWA to every node.

error log:


16/06/20 18:39:21 ERROR TaskSetManager: Task 0 in stage 2.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 5, Mcnode4): java.lang.NoClassDefFoundError: Could not initialize class BwaJni
    at Bwa.run(Bwa.java:443)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:283)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:1024)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:908)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:338)
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:47)
    at BwaRDD.MapBwa(BwaRDD.java:108)
    at BwaInterpreter.RunBwa(BwaInterpreter.java:437)
    at SparkBWA.main(SparkBWA.java:30)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class BwaJni
    at Bwa.run(Bwa.java:443)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:283)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:1024)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

from sparkbwa.

jmabuin avatar jmabuin commented on August 20, 2024

According to the configuration guide, you have to set the option spark.executor.extraJavaOptions -Djava.library.path=./bwa.zip
This option tells Spark where to look for native libraries. In this case, we want it to look for libbwa.so, which is inside the file bwa.zip.
The file bwa.zip is delivered to each node by passing --archives bwa.zip when the user launches the program, so my guess is that you are launching the jar from another location and not from the build dir created when compiling SparkBWA. If this is the case, you have two options:

  • Launch the program from the build dir
  • Copy the file bwa.zip to the location where you have SparkBWA.jar
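
As a quick sanity check for the second option, you can confirm that the archive actually contains the shared library and then place it next to the jar (a sketch; the destination directory is a placeholder for wherever your SparkBWA.jar lives):

unzip -l build/bwa.zip | grep libbwa.so          # the archive must contain libbwa.so
cp build/bwa.zip /path/to/dir-with-SparkBWA.jar/  # keep bwa.zip beside SparkBWA.jar before submitting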

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

I have configured spark-defaults.conf with:

spark.executor.extraJavaOptions -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
spark.yarn.executor.memoryOverhead  1704

and run with:
--archives bwa.zip \
and have copied bwa.zip to the run.sh location.

and this is printed by your program:
spark.yarn.dist.archives -> file:/home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
The file is there:

hadoop@Master:~/xubo/project/alignment/sparkBWA$ ll /home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
-rw-rw-r-- 1 hadoop hadoop 1244800  6月 19 21:44 /home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip

I can't find the reason for the failure... Can you help me find the problem?

I will try launching the program from the build dir later.

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

The error log when running with yarn-client:

hadoop@Master:~/xubo/project/alignment/sparkBWA$ ./paired.sh 
Using properties file: /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.eventLog.dir=file:///home/hadoop/Downloads/hangc/sparklog
Adding default property: spark.eventLog.compress=true
Adding default property: spark.yarn.executor.memoryOverhead=1704
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          1500m
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf
  driverMemory            1500m
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            32
  files                   null
  pyFiles                 null
  archives                file:/home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
  mainClass               SparkBWA
  primaryResource         file:/home/hadoop/xubo/project/alignment/sparkBWA/SparkBWA.jar
  name                    SparkBWA
  childArgs               [-algorithm mem -reads paired -index /home/hadoop/xubo/project/alignment/sparkBWA/index/datatest.fasta /xubo/alignment/bwa/datatest.fq /xubo/alignment/bwa/datatest.fq /xubo/alignment/output/sparkBWA/datatest4]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
 --conf and those from the properties file /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf:
  spark.driver.memory -> 1500m
  spark.eventLog.enabled -> true
  spark.eventLog.compress -> true
  spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
  spark.yarn.executor.memoryOverhead -> 1704
  spark.eventLog.dir -> file:///home/hadoop/Downloads/hangc/sparklog


Main class:
SparkBWA
Arguments:
-algorithm
mem
-reads
paired
-index
/home/hadoop/xubo/project/alignment/sparkBWA/index/datatest.fasta
/xubo/alignment/bwa/datatest.fq
/xubo/alignment/bwa/datatest.fq
/xubo/alignment/output/sparkBWA/datatest4
System properties:
spark.driver.memory -> 1500m
spark.executor.memory -> 1500m
spark.executor.instances -> 32
spark.eventLog.enabled -> true
spark.eventLog.compress -> true
SPARK_SUBMIT -> true
spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
spark.app.name -> SparkBWA
spark.yarn.executor.memoryOverhead -> 1704
spark.jars -> file:/home/hadoop/xubo/project/alignment/sparkBWA/SparkBWA.jar
spark.submit.deployMode -> client
spark.yarn.dist.archives -> file:/home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
spark.eventLog.dir -> file:///home/hadoop/Downloads/hangc/sparklog
spark.master -> yarn-client
spark.executor.cores -> 1
Classpath elements:
file:/home/hadoop/xubo/project/alignment/sparkBWA/SparkBWA.jar


16/06/20 18:39:21 ERROR TaskSetManager: Task 0 in stage 2.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 5, Mcnode4): java.lang.NoClassDefFoundError: Could not initialize class BwaJni
    at Bwa.run(Bwa.java:443)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:283)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:1024)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:908)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:338)
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:47)
    at BwaRDD.MapBwa(BwaRDD.java:108)
    at BwaInterpreter.RunBwa(BwaInterpreter.java:437)
    at SparkBWA.main(SparkBWA.java:30)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class BwaJni
    at Bwa.run(Bwa.java:443)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:283)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:1024)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
hadoop@Master:~/xubo/project/alignment/sparkBWA$ ls /home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
/home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
hadoop@Master:~/xubo/project/alignment/sparkBWA$ ll /home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
-rw-rw-r-- 1 hadoop hadoop 1244800  6月 19 21:44 /home/hadoop/xubo/project/alignment/sparkBWA/bwa.zip
hadoop@Master:~/xubo/project/alignment/sparkBWA$ 

from sparkbwa.

jmabuin avatar jmabuin commented on August 20, 2024

Your spark.yarn.executor.memoryOverhead of 1704 is too low. This parameter covers the memory that BWA needs to run, which is generally about 6 GB when using a human index, and higher when using BWA with threads.
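
For reference, that parameter is set in spark-defaults.conf; a sketch with a value in line with the ~6 GB figure above (megabytes, illustrative only, not tuned for this cluster):

spark.yarn.executor.memoryOverhead   6144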

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

My nodes have 6 GB of total memory each.
So I ran with the datatest data, not the human index; the datatest data is only several MB.

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

In the BWA report by Heng Li, running bwa with a human index needs about 6.6 GB at peak.
I ran SparkBWA with the human index and these inputs:

/xubo/alignment/sparkBWA/ERR000589_1.filt.fastq /xubo/alignment/sparkBWA/ERR000589_2.filt.fastq \
/xubo/alignment/output/sparkBWA/datatestERR

The error is the same as with datatest.

Now the error is:
Caused by: java.lang.NoClassDefFoundError: Could not initialize class BwaJni

Have you seen this problem before?
Thank you.

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

I just ran in the build dir, and the error is different:
First time:
java.lang.UnsatisfiedLinkError: no bwa in java.library.path
and the second time:
ERROR YarnScheduler: Lost executor 9 on Mcnode3: remote Rpc client disassociated

Error log:


hadoop@Master:~/xubo/tools/SparkBWA/build$ ./paired.sh 
Using properties file: /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.eventLog.dir=file:///home/hadoop/Downloads/hangc/sparklog
Adding default property: spark.eventLog.compress=true
Adding default property: spark.yarn.executor.memoryOverhead=1704
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          1500m
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf
  driverMemory            1500m
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            32
  files                   null
  pyFiles                 null
  archives                file:/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
  mainClass               SparkBWA
  primaryResource         file:/home/hadoop/xubo/tools/SparkBWA/build/SparkBWA.jar
  name                    SparkBWA
  childArgs               [-algorithm mem -reads paired -index /home/hadoop/xubo/project/alignment/sparkBWA/index/datatest.fasta /xubo/alignment/bwa/datatest.fq /xubo/alignment/bwa/datatest.fq /xubo/alignment/output/sparkBWA/datatest4]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
 --conf and those from the properties file /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf:
  spark.driver.memory -> 1500m
  spark.eventLog.enabled -> true
  spark.eventLog.compress -> true
  spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
  spark.yarn.executor.memoryOverhead -> 1704
  spark.eventLog.dir -> file:///home/hadoop/Downloads/hangc/sparklog


Main class:
SparkBWA
Arguments:
-algorithm
mem
-reads
paired
-index
/home/hadoop/xubo/project/alignment/sparkBWA/index/datatest.fasta
/xubo/alignment/bwa/datatest.fq
/xubo/alignment/bwa/datatest.fq
/xubo/alignment/output/sparkBWA/datatest4
System properties:
spark.driver.memory -> 1500m
spark.executor.memory -> 1500m
spark.executor.instances -> 32
spark.eventLog.enabled -> true
spark.eventLog.compress -> true
SPARK_SUBMIT -> true
spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
spark.app.name -> SparkBWA
spark.yarn.executor.memoryOverhead -> 1704
spark.jars -> file:/home/hadoop/xubo/tools/SparkBWA/build/SparkBWA.jar
spark.submit.deployMode -> client
spark.yarn.dist.archives -> file:/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
spark.eventLog.dir -> file:///home/hadoop/Downloads/hangc/sparklog
spark.master -> yarn-client
spark.executor.cores -> 1
Classpath elements:
file:/home/hadoop/xubo/tools/SparkBWA/build/SparkBWA.jar


16/06/20 19:32:50 ERROR YarnScheduler: Lost executor 7 on Mcnode2: remote Rpc client disassociated
16/06/20 19:32:51 ERROR YarnScheduler: Lost executor 3 on Mcnode1: remote Rpc client disassociated
16/06/20 19:32:51 ERROR TaskSetManager: Task 0 in stage 2.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 5, Mcnode1): java.lang.UnsatisfiedLinkError: no bwa in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)
    at BwaJni.<clinit>(BwaJni.java:44)
    at Bwa.run(Bwa.java:443)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:283)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:1024)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:908)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:338)
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:47)
    at BwaRDD.MapBwa(BwaRDD.java:108)
    at BwaInterpreter.RunBwa(BwaInterpreter.java:437)
    at SparkBWA.main(SparkBWA.java:30)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.UnsatisfiedLinkError: no bwa in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1886)
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)
    at java.lang.System.loadLibrary(System.java:1088)
    at BwaJni.<clinit>(BwaJni.java:44)
    at Bwa.run(Bwa.java:443)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:283)
    at BwaRDD$BwaAlignment.call(BwaRDD.java:173)
    at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction2$1.apply(JavaPairRDD.scala:1024)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:88)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
hadoop@Master:~/xubo/tools/SparkBWA/build$ ls
bamlite.o                                          bwamem_extra.o             bwase.o       bwtindex.o      fastmap.o                      ksw.o          QSufSort.o
bntseq.o                                           bwamem.o                   BwaSeq.class  bwt_lite.o      FastqInputFormat.class         kthread.o      SparkBWA.class
Bwa.class                                          bwamem_pair.o              bwaseqio.o    bwt.o           FastqInputFormatDouble.class   libbwa.so      SparkBWA.jar
BwaInterpreter$BigFastq2RDDDouble.class            bwa.o                      bwashm.o      bwtsw2_aux.o    FastqRecordReader.class        main.o         utils.o
BwaInterpreter$BigFastq2RDDPartitionsDouble.class  BwaOptions.class           bwa.zip       bwtsw2_chain.o  FastqRecordReaderDouble.class  malloc_wrap.o
BwaInterpreter.class                               bwape.o                    bwtaln.o      bwtsw2_core.o   is.o                           maxk.o
BwaJni.class                                       BwaRDD$BwaAlignment.class  bwtgap.o      bwtsw2_main.o   kopen.o                        paired.sh
bwa_jni.o                                          BwaRDD.class               bwt_gen.o     bwtsw2_pair.o   kstring.o                      pemerge.o
hadoop@Master:~/xubo/tools/SparkBWA/build$ vi paired.sh 
hadoop@Master:~/xubo/tools/SparkBWA/build$ ./paired.sh 
Using properties file: /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.eventLog.dir=file:///home/hadoop/Downloads/hangc/sparklog
Adding default property: spark.eventLog.compress=true
Adding default property: spark.yarn.executor.memoryOverhead=1704
Parsed arguments:
  master                  yarn-client
  deployMode              null
  executorMemory          1500m
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf
  driverMemory            1500m
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  null
  supervise               false
  queue                   null
  numExecutors            32
  files                   null
  pyFiles                 null
  archives                file:/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
  mainClass               SparkBWA
  primaryResource         file:/home/hadoop/xubo/tools/SparkBWA/build/SparkBWA.jar
  name                    SparkBWA
  childArgs               [-algorithm mem -reads paired -index /home/hadoop/xubo/project/alignment/sparkBWA/index/datatest.fasta /xubo/alignment/bwa/datatest.fq /xubo/alignment/bwa/datatest.fq /xubo/alignment/output/sparkBWA/datatest4]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
 --conf and those from the properties file /home/hadoop/cloud/spark-1.5.2/conf/spark-defaults.conf:
  spark.driver.memory -> 1500m
  spark.eventLog.enabled -> true
  spark.eventLog.compress -> true
  spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
  spark.yarn.executor.memoryOverhead -> 1704
  spark.eventLog.dir -> file:///home/hadoop/Downloads/hangc/sparklog


Main class:
SparkBWA
Arguments:
-algorithm
mem
-reads
paired
-index
/home/hadoop/xubo/project/alignment/sparkBWA/index/datatest.fasta
/xubo/alignment/bwa/datatest.fq
/xubo/alignment/bwa/datatest.fq
/xubo/alignment/output/sparkBWA/datatest4
System properties:
spark.driver.memory -> 1500m
spark.executor.memory -> 1500m
spark.executor.instances -> 32
spark.eventLog.enabled -> true
spark.eventLog.compress -> true
SPARK_SUBMIT -> true
spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
spark.app.name -> SparkBWA
spark.yarn.executor.memoryOverhead -> 1704
spark.jars -> file:/home/hadoop/xubo/tools/SparkBWA/build/SparkBWA.jar
spark.submit.deployMode -> client
spark.yarn.dist.archives -> file:/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
spark.eventLog.dir -> file:///home/hadoop/Downloads/hangc/sparklog
spark.master -> yarn-client
spark.executor.cores -> 1
Classpath elements:
file:/home/hadoop/xubo/tools/SparkBWA/build/SparkBWA.jar


16/06/20 19:33:58 ERROR YarnScheduler: Lost executor 9 on Mcnode3: remote Rpc client disassociated
16/06/20 19:33:59 ERROR YarnScheduler: Lost executor 10 on Mcnode6: remote Rpc client disassociated
16/06/20 19:34:14 ERROR TaskSetManager: Task 0 in stage 2.0 failed 4 times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 5, Mcnode3): java.io.IOException: Failed to connect to Mcnode6/219.219.220.223:42918
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:193)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
    at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:88)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.access$200(RetryingBlockFetcher.java:43)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: Mcnode6/219.219.220.223:42918
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    ... 1 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1850)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
    at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
    at org.apache.spark.rdd.RDD.collect(RDD.scala:908)
    at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:338)
    at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:47)
    at BwaRDD.MapBwa(BwaRDD.java:108)
    at BwaInterpreter.RunBwa(BwaInterpreter.java:437)
    at SparkBWA.main(SparkBWA.java:30)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:674)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.IOException: Failed to connect to Mcnode6/219.219.220.223:42918
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:193)
    at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:156)
    at org.apache.spark.network.netty.NettyBlockTransferService$$anon$1.createAndStart(NettyBlockTransferService.scala:88)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.fetchAllOutstanding(RetryingBlockFetcher.java:140)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher.access$200(RetryingBlockFetcher.java:43)
    at org.apache.spark.network.shuffle.RetryingBlockFetcher$1.run(RetryingBlockFetcher.java:170)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused: Mcnode6/219.219.220.223:42918
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:224)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:289)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:528)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    ... 1 more

from sparkbwa.

Nav15 avatar Nav15 commented on August 20, 2024

Yes, I am getting the same error too, and I am still searching for any hint to resolve it.
The data size is very low, just a few MB.
I have a cluster of 7 nodes, with 12 GB and 6 cores per node.
"ERROR YarnScheduler: Lost executor 9 on Mcnode3: remote Rpc client disassociated" — refer to my open issue.

from sparkbwa.

jmabuin avatar jmabuin commented on August 20, 2024

Why are you requesting 32 executors?

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

@jmabuin I copied it from the examples in the project...
I just tried changing it to 2 or 1 executors, and also removing the setting, but the error still exists.

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

@Nav15 I also tried running with the examples in the project (1.7 GB)
and 1,000,000 paired reads from GRCH38 chr1 (about 237 MB); the error still exists:

[Stage 3:>                                                          (0 + 2) / 3]16/06/20 21:26:18 ERROR YarnScheduler: Lost executor 1 on Mcnode1: remote Rpc client disassociated
[Stage 1:>                                                          (0 + 1) / 2]16/06/20 21:26:22 ERROR YarnScheduler: Lost executor 2 on Mcnode5: remote Rpc client disassociated
[Stage 3:>                                                          (0 + 2) / 3]16/06/20 21:27:21 ERROR YarnScheduler: Lost executor 4 on Mcnode4: remote Rpc client disassociated
[Stage 1:>                                                          (0 + 1) / 2]16/06/20 21:27:26 ERROR YarnScheduler: Lost executor 3 on Mcnode5: remote Rpc client disassociated

Have you run with master local or master spark://masterIP:7077?
Do you know how to run the project from spark-shell?

from sparkbwa.

Nav15 avatar Nav15 commented on August 20, 2024

@xubo245, yes. Even data of a few MB is failing with the "disconnected" error, and small numbers of executors (1, 2, 3, ...) all fall over with the same error.

Below is the command that I am using:
$SPARK_HOME/bin/spark-submit --class SparkBWA --master yarn-client --driver-memory 8G --executor-memory 8G --executor-cores 1 --archives bwa.zip --verbose --num-executors 1 SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/ECOLI -partitions 32 /user/hadoop/ecoli_1.fq /user/hadoop/ecoli_2.fq Output_ECOLI

The same failure occurs with --master yarn-client, --master yarn-cluster, or local. Apart from this I used different combinations of executors, partitions, executor-cores, driver-memory, and executor-memory, but all failed.

from sparkbwa.

Vixz7 avatar Vixz7 commented on August 20, 2024

Hi guys,
I am also facing a similar error while running SparkBWA on my Spark cluster; I have tried various configurations in local/cluster/client mode. Pasting the error below for your reference:

"ERROR YarnScheduler: Lost executor 9 on Mcnode3: remote Rpc client disassociated"

A java.library.path error is also popping up in the same runs.

Please suggest possible reasons for this if you can.

thanks in advance,

from sparkbwa.

jmabuin avatar jmabuin commented on August 20, 2024

Please, can you copy-paste here the content of your conf/spark-defaults.conf file?

from sparkbwa.

jmabuin avatar jmabuin commented on August 20, 2024

I think that the error is in:
spark.executor.extraJavaOptions -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip

This variable specifies the extra Java options for each executor. When the option --archives bwa.zip is passed, Hadoop/Spark unzips this file into the executor working directory, so the java.library.path in spark.executor.extraJavaOptions must always be ./bwa.zip, because that is where the .so file is uncompressed and where the executor will look for the shared library.

Instead, you are pointing this option at the zip file you built, so it is impossible for the executor to find the library.
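
Put differently, the working combination being described would look roughly like this (a sketch; the relative ./bwa.zip path is the important part, and the program arguments are the same ones already shown in this thread):

# spark-defaults.conf
spark.executor.extraJavaOptions -Djava.library.path=./bwa.zip

# launch: --archives ships bwa.zip through the YARN distributed cache,
# where it is unzipped into each executor's working directory
spark-submit --class SparkBWA --master yarn-client --archives bwa.zip SparkBWA.jar <program arguments>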

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

The conf, @jmabuin:

hadoop@Master:~/xubo/tools/SparkBWA/build$ cat ~/cloud/spark-1.5.2/conf/spark-defaults.conf
# Default system properties included when running spark-submit.
# This is useful for setting default environmental settings.

# Example:
#spark.master                     spark://Master:7077
spark.eventLog.enabled           true
spark.eventLog.dir               file:///home/hadoop/Downloads/hangc/sparklog
#spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.eventLog.compress      true
# spark.driver.memory              5g
#spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
#spark.executor.extraJavaOptions -Xss2048m
spark.executor.extraJavaOptions -Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build/bwa.zip
spark.yarn.executor.memoryOverhead  3704

from sparkbwa.

Nav15 avatar Nav15 commented on August 20, 2024

@jmabuin I am using the same conf file. It is just as you suggested, with the only difference that you used a relative path and here we are using an absolute path. That should never be an issue.

1: In my cluster I have installed Spark on the Hadoop master node as well as on the 7 data nodes.
2: The location of SparkBWA on the master node is /home/hadoop/LS_Tools/SparkBWA, and the complete directory structure exists on all 7 data nodes.
Are the above two OK from a system-level configuration standpoint?
3: You have not updated the following in the SparkBWA version of Makefile.common, located at /home/hadoop/LS_Tools/SparkBWA/Makefile.common:
LIBBWA_LIBS = -lrt -lz
which we have done in the Hadoop version. I expect this is required as well?
As per your previous comment:
"Instead, you are pointing this option at the zip file you built, so it is impossible for the executor to find the library."
4: How do I verify whether or not the zip file is loaded into the executor working directory? How do I get the executor's working directory?

Can you please answer the queries/issues highlighted above?

from sparkbwa.

Nav15 avatar Nav15 commented on August 20, 2024

Ran successfully. Please close and ignore the queries above.
I just want to run this in cluster mode (spark-submit --class SparkBWA --master yarn-cluster). How do I set the following:
spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/LS_Tools/SparkBWA/build/bwa/
spark.app.name -> SparkBWA
spark.yarn.executor.memoryOverhead -> 4704
spark.submit.deployMode -> cluster
spark.master -> yarn-cluster
spark.executor.cores -> 6
Classpath elements:
Exception in thread "main" java.io.FileNotFoundException: File file:/home/hadoop/LS_Tools/SparkBWA/SparkBWA.jar does not exist
How do I set SparkBWA.jar in the Classpath elements?

Thank you !!

from sparkbwa.

Nav15 avatar Nav15 commented on August 20, 2024

Ran successfully in cluster mode as well. Please close.

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

Did you get a result after running? With yarn-cluster the log information should be on another node, and SparkBWA.jar should be on the driver.

from sparkbwa.

Nav15 avatar Nav15 commented on August 20, 2024

Yes, I got the .sam file as the result.
I added spark.driver.extraClassPath /home/hadoop/LS_Tools/SparkBWA/SparkBWA.jar in spark-defaults.conf to run it in yarn-cluster mode.

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

Can you show me your spark-defaults.conf and run shell, please?
And did you modify the conf on every node?

from sparkbwa.

Nav15 avatar Nav15 commented on August 20, 2024

I managed to run this by:
1: Check that you have the correct Java version installed on the master and all worker nodes; for me the version is 1.7.
Check this with: java -version
2: After the build, unzip the bwa.zip created inside ./SparkBWA/build/
Unzip it and delete the zip file from there. Your project structure should now be: ./SparkBWA/build/bwa/
(Though I will check it later with the zip as well.)
3: Copy the complete SparkBWA folder to all the worker nodes. This will make your directory structure look like:
./SparkBWA/*
4: My spark-defaults.conf file is:
spark.executor.extraJavaOptions -Djava.library.path=/home/hadoop/LS_Tools/SparkBWA/build/bwa/
spark.yarn.executor.memoryOverhead 4704
spark.executor.extraClassPath /home/hadoop/LS_Tools/SparkBWA/SparkBWA.jar
spark.driver.extraClassPath /home/hadoop/LS_Tools/SparkBWA/SparkBWA.jar
(The two extraClassPath lines are needed for yarn-cluster mode; for yarn-client the first two lines are sufficient.)
Copy this file to all worker nodes.

5: Your .bashrc file should have entries like:
export JAVA_HOME=/opt/java/jdk1.7.0_67
export HADOOP_HOME=/opt/hadoop/hadoop-2.6.0
export HADOOP_CONF_DIR=/opt/hadoop/hadoop-2.6.0/etc/hadoop
export HADOOP_YARN_HOME=$HADOOP_HOME
export SPARK_HOME=/opt/spark/spark-1.6.1-bin-hadoop2.6
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/home/hadoop/LS_Tools/SparkBWA/build/bwa"
export LD_LIBRARY_PATH=/home/hadoop/LS_Tools/SparkBWA/build/bwa

6: Run the command below on the master node, from ./SparkBWA/build/:
$SPARK_HOME/bin/spark-submit --class SparkBWA --master yarn-client --driver-memory 2G --executor-memory 2G --executor-cores 6 --verbose --num-executors 2 SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/ECOLI /user/hadoop/ecoli_2.fq /user/hadoop/ecoli_3.fq /user/hadoop/Output_ECOLI_cluster
Try it and let me know if you still face any issues.
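
One extra check that step 3 took effect is to verify that libbwa.so is present at the configured path on every worker (a sketch; replace the node names below with your own):

for node in node1 node2 node3; do
  ssh "$node" ls -l /home/hadoop/LS_Tools/SparkBWA/build/bwa/libbwa.so
done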

from sparkbwa.

Nav15 avatar Nav15 commented on August 20, 2024

Below is the terminal output during the run (parameters):
Using properties file: /opt/spark/spark-1.6.1-bin-hadoop2.6/conf/spark-defaults.conf
Adding default property: spark.yarn.executor.memoryOverhead=4704
Adding default property: spark.executor.extraJavaOptions=-Djava.library.path=/home/hadoop/LS_Tools/SparkBWA/build/bwa/
Parsed arguments:
master yarn-client
deployMode null
executorMemory null
executorCores null
totalExecutorCores null
propertiesFile /opt/spark/spark-1.6.1-bin-hadoop2.6/conf/spark-defaults.conf
driverMemory null
driverCores null
driverExtraClassPath null
driverExtraLibraryPath null
driverExtraJavaOptions null
supervise false
queue null
numExecutors null
files null
pyFiles null
archives null
mainClass SparkBWA
primaryResource file:/home/hadoop/LS_Tools/SparkBWA/build/SparkBWA.jar
name SparkBWA
childArgs [-algorithm mem -reads paired -index /Data/HumanBase/ECOLI /user/hadoop/ecoli_2.fq /user/hadoop/ecoli_3.fq /user/hadoop/Output_ECOLI_24_3]
jars null
packages null
packagesExclusions null
repositories null
verbose true

Spark properties used, including those specified through
--conf and those from the properties file /opt/spark/spark-1.6.1-bin-hadoop2.6/conf/spark-defaults.conf:
spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/LS_Tools/SparkBWA/build/bwa/
spark.yarn.executor.memoryOverhead -> 4704

Main class:
SparkBWA
Arguments:
-algorithm
mem
-reads
paired
-index
/Data/HumanBase/ECOLI
/user/hadoop/ecoli_2.fq
/user/hadoop/ecoli_3.fq
/user/hadoop/Output_ECOLI_24_3
System properties:
SPARK_SUBMIT -> true
spark.executor.extraJavaOptions -> -Djava.library.path=/home/hadoop/LS_Tools/SparkBWA/build/bwa/
spark.app.name -> SparkBWA
spark.yarn.executor.memoryOverhead -> 4704
spark.jars -> file:/home/hadoop/LS_Tools/SparkBWA/build/SparkBWA.jar
spark.submit.deployMode -> client
spark.master -> yarn-client
Classpath elements:
file:/home/hadoop/LS_Tools/SparkBWA/build/SparkBWA.jar

from sparkbwa.

xubo245 avatar xubo245 commented on August 20, 2024

@Nav15 Thank you!
I have solved the problem, but I didn't modify spark-defaults.conf:

spark-submit --class SparkBWA \
--master yarn-client \
--conf "spark.executor.extraJavaOptions=-Djava.library.path=/home/hadoop/xubo/tools/SparkBWA/build" \
--archives ./bwa.zip \
SparkBWA.jar \
-algorithm mem -reads paired \
-index /home/hadoop/xubo/ref/GRCH38L1Index/GRCH38chr1L3556522.fasta \
-partitions 3 \
/xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired1.fastq /xubo/alignment/sparkBWA/GRCH38chr1L3556522N10L50paired2.fastq \
/xubo/alignment/output/sparkBWA/datatestLocalGRCH38chr1L3556522N10L50paired12Yarn
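
If the job finishes, the last argument is the HDFS output directory; a quick way to confirm it was written (path taken from the command above; the output file names may vary):

hdfs dfs -ls /xubo/alignment/output/sparkBWA/datatestLocalGRCH38chr1L3556522N10L50paired12Yarn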

from sparkbwa.
