
docker-spark's People

Contributors

bryceageno, daplho, dkmiller, dreamcodez, dylanmei, encodepanda, hy9be, jhalterman, krystiannowak, lucamilanesio, mrtns, serge-m, stenote, tisoft, transistorize, zukowskilz


docker-spark's Issues

Q: Why do you add extra jars to the Spark lib dir?

Hey,

You are adding dependencies to the Spark lib directory, but they will still be missing on the driver, right?

So to be able to use hadoop-aws, one still needs to do:

spark-submit --jars=file:/usr/spark-1.5.1-bin-hadoop2.6/lib/aws-java-sdk-1.7.14.jar,file:/usr/spark-1.5.1-bin-hadoop2.6/lib/hadoop-aws-2.6.0.jar,file:/usr/spark-1.5.1-bin-hadoop2.6/lib/google-collections-1.0.jar,file:/usr/spark-1.5.1-bin-hadoop2.6/lib/joda-time-2.8.2.jar

I'm wondering, aren't they then doubled on the classpath? spark-submit --jars would add them to the executors' classpath as well, so I'm curious how you use it.
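For context, the alternative I had in mind is a sketch only, an assumption about how the image might be used rather than the maintainers' documented approach: Spark's standard spark.driver.extraClassPath and spark.executor.extraClassPath properties can point at the same jars so they reach the driver without repeating --jars on every submit. The paths below are copied from the command above.

# Hypothetical spark-defaults.conf entries; jar paths taken from the spark-submit example above
spark.driver.extraClassPath   /usr/spark-1.5.1-bin-hadoop2.6/lib/aws-java-sdk-1.7.14.jar:/usr/spark-1.5.1-bin-hadoop2.6/lib/hadoop-aws-2.6.0.jar:/usr/spark-1.5.1-bin-hadoop2.6/lib/google-collections-1.0.jar:/usr/spark-1.5.1-bin-hadoop2.6/lib/joda-time-2.8.2.jar
spark.executor.extraClassPath /usr/spark-1.5.1-bin-hadoop2.6/lib/aws-java-sdk-1.7.14.jar:/usr/spark-1.5.1-bin-hadoop2.6/lib/hadoop-aws-2.6.0.jar:/usr/spark-1.5.1-bin-hadoop2.6/lib/google-collections-1.0.jar:/usr/spark-1.5.1-bin-hadoop2.6/lib/joda-time-2.8.2.jar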

How are the Spark master and its worker entries configured?

Actually, it's not an issue, but I need some info regarding its behaviour.

Here is my scenario:

I am running a standalone Spark cluster in containers using gettyimages/docker-spark.
I have a compose file with one service for the master (1 replica) and one service for the workers (3 replicas):

version: '3'
services:
  master:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.master.Master -h master
    hostname: master
    environment:
      MASTER: spark://master:7077
      SPARK_CONF_DIR: /conf
      SPARK_PUBLIC_DNS: localhost
    expose:
      - 7001
      - 7002
      - 7003
      - 7004
      - 7005
      - 7006
      - 7077
      - 6066
    ports:
      - 4040:4040
      - 6066:6066
      - 7077:7077
      - 8080:8080
    deploy:
      placement:
        constraints:
          - node.role == manager
    volumes:
      - ./conf/master:/conf
  worker:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077
    hostname: worker
    environment:
      SPARK_CONF_DIR: /conf
      SPARK_WORKER_CORES: 2
      SPARK_WORKER_MEMORY: 2g
      SPARK_WORKER_PORT: 8881
      SPARK_WORKER_WEBUI_PORT: 8081
      SPARK_PUBLIC_DNS: localhost
    expose:
      - 7012
      - 7013
      - 7014
      - 7015
      - 7016
      - 8881
    ports:
      - 8081:8081
    deploy:
      replicas: 3
    depends_on:
      - master

My question is: how are the Spark worker containers attached to the Spark master?
I don't see any entries in the containers' {SparkFolder}/conf/slaves or spark-env.sh.

Could you please help me find out?

spark-submit not working?

Dear all,
I've tried to use spark-submit and this is what I get:

docker: Error response from daemon: Mounts denied:
ions/XAMPP/htdocs/spark-processing/count.py
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.

I've tried to add my directory to Docker's shared paths, but it only works when I use bin/run-example, never bin/spark-submit:

docker run --rm -it -p 4040:4040 gettyimages/spark bin/run-example SparkPi 10

Any solution for this?
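For anyone hitting the same "Mounts denied" error, here is a minimal sketch (an assumption, not the project's documented workflow): mount the directory containing the script into the container and submit using the container-side path. The /app mount point, the count.py name, and the local[*] master are placeholders.

# Sketch: mount the current host directory into the container at /app,
# then point spark-submit at the file inside the container.
docker run --rm -it -p 4040:4040 \
  -v "$(pwd)":/app \
  gettyimages/spark \
  bin/spark-submit --master "local[*]" /app/count.py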

How can I bring up a worker using a Mesos URI?

I want to bring up a worker whose master is the mesos-master I have already set up.
However, when I run docker-compose up, the console shows this:

worker_1  | Exception in thread "main" org.apache.spark.SparkException: Invalid master URL: spark://mesos://zk://zk1:2181


This is the docker-compose.yml:

worker:
  image: gettyimages/spark:2.2.0-hadoop-2.7
  command: bin/spark-class org.apache.spark.deploy.worker.Worker  mesos://zk://zk1:2181,zk2:2181,zk3:2181/mesos
  hostname: worker
  environment:
    SPARK_CONF_DIR: /conf
    SPARK_WORKER_CORES: 2
    SPARK_WORKER_MEMORY: 1g
    SPARK_WORKER_PORT: 8881
    SPARK_WORKER_WEBUI_PORT: 8081
    SPARK_PUBLIC_DNS: localhost
  expose:
    - 7012
    - 7013
    - 7014
    - 7015
    - 7016
    - 8881
  ports:
    - 8081:8081
  volumes:
    - ./conf/worker:/conf
    - ./data:/tmp/data

Docker swarm mode container shuts down

I have a functional swarm using Docker 1.12.1, but when I execute:

docker service create --name spark-master gettyimages/spark /usr/spark-2.0.0/sbin/start-master.sh

the container keeps exiting after start-master.sh completes. How can I keep the container running in a graceful manner?
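For context on why this happens: sbin/start-master.sh launches the master as a background daemon and then returns, so the service's main process finishes and the container exits. A hedged sketch of the usual workaround, mirroring the foreground command this repository's own docker-compose.yml uses (the published ports are just the common defaults):

# Sketch: run the master class in the foreground so the container's main process stays alive.
docker service create --name spark-master \
  -p 7077:7077 -p 8080:8080 \
  gettyimages/spark \
  bin/spark-class org.apache.spark.deploy.master.Master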

Copying large data with sparklyr?

I'm trying to learn more about Spark, and to do so I installed it using this docker-compose.yml. I was able to copy a small table to it, but I cannot copy a large table. I'm not sure how to debug this problem, nor whether this is an appropriate place to ask for help, but in any case, if someone here can help me I would really appreciate it.

spark-submit doesn't work with Python because of different Python versions on driver/worker

How to reproduce:

  1. Clone the repo and run docker-compose up.

  2. Download spark-2.4.1-bin-hadoop2.7 from the releases page.

  3. Submit the Python pi.py example:

     ./bin/spark-submit --master spark://localhost:7077 ./examples/src/main/python/pi.py 10

  4. It crashes:
โžœ spark-2.4.1-bin-hadoop2.7 ./bin/spark-submit --master spark://localhost:7077 ./examples/src/main/python/pi.py 10 19/04/25 19:19:55 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 19/04/25 19:19:55 INFO SparkContext: Running Spark version 2.4.1 19/04/25 19:19:55 INFO SparkContext: Submitted application: PythonPi 19/04/25 19:19:56 INFO SecurityManager: Changing view acls to: pablo.fernandez 19/04/25 19:19:56 INFO SecurityManager: Changing modify acls to: pablo.fernandez 19/04/25 19:19:56 INFO SecurityManager: Changing view acls groups to: 19/04/25 19:19:56 INFO SecurityManager: Changing modify acls groups to: 19/04/25 19:19:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(pablo.fernandez); groups with view permissions: Set(); users with modify permissions: Set(pablo.fernandez); groups with modify permissions: Set() 19/04/25 19:19:56 INFO Utils: Successfully started service 'sparkDriver' on port 61681. 19/04/25 19:19:56 INFO SparkEnv: Registering MapOutputTracker 19/04/25 19:19:56 INFO SparkEnv: Registering BlockManagerMaster 19/04/25 19:19:56 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 19/04/25 19:19:56 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up 19/04/25 19:19:56 INFO DiskBlockManager: Created local directory at /private/var/folders/bf/62kzgm4n4sx1x1wk6vjpwm6ndrr_bx/T/blockmgr-16cee74e-3fd6-4685-9c73-ae4060f63f27 19/04/25 19:19:56 INFO MemoryStore: MemoryStore started with capacity 366.3 MB 19/04/25 19:19:56 INFO SparkEnv: Registering OutputCommitCoordinator 19/04/25 19:19:56 INFO Utils: Successfully started service 'SparkUI' on port 4040. 19/04/25 19:19:56 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.254.17.9:4040 19/04/25 19:19:56 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://localhost:7077... 19/04/25 19:19:56 INFO TransportClientFactory: Successfully created connection to localhost/127.0.0.1:7077 after 40 ms (0 ms spent in bootstraps) 19/04/25 19:19:56 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20190425221956-0004 19/04/25 19:19:56 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20190425221956-0004/0 on worker-20190425221434-172.18.0.3-8881 (172.18.0.3:8881) with 2 core(s) 19/04/25 19:19:56 INFO StandaloneSchedulerBackend: Granted executor ID app-20190425221956-0004/0 on hostPort 172.18.0.3:8881 with 2 core(s), 1024.0 MB RAM 19/04/25 19:19:56 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 61683. 
19/04/25 19:19:56 INFO NettyBlockTransferService: Server created on 10.254.17.9:61683 19/04/25 19:19:56 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 19/04/25 19:19:56 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20190425221956-0004/0 is now RUNNING 19/04/25 19:19:56 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.254.17.9, 61683, None) 19/04/25 19:19:56 INFO BlockManagerMasterEndpoint: Registering block manager 10.254.17.9:61683 with 366.3 MB RAM, BlockManagerId(driver, 10.254.17.9, 61683, None) 19/04/25 19:19:56 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.254.17.9, 61683, None) 19/04/25 19:19:56 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.254.17.9, 61683, None) 19/04/25 19:19:57 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0 19/04/25 19:19:57 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/private/tmp/spark/spark-2.4.1-bin-hadoop2.7/spark-warehouse'). 19/04/25 19:19:57 INFO SharedState: Warehouse path is 'file:/private/tmp/spark/spark-2.4.1-bin-hadoop2.7/spark-warehouse'. 19/04/25 19:19:57 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint 19/04/25 19:19:58 INFO SparkContext: Starting job: reduce at /private/tmp/spark/spark-2.4.1-bin-hadoop2.7/./examples/src/main/python/pi.py:44 19/04/25 19:19:58 INFO DAGScheduler: Got job 0 (reduce at /private/tmp/spark/spark-2.4.1-bin-hadoop2.7/./examples/src/main/python/pi.py:44) with 10 output partitions 19/04/25 19:19:58 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at /private/tmp/spark/spark-2.4.1-bin-hadoop2.7/./examples/src/main/python/pi.py:44) 19/04/25 19:19:58 INFO DAGScheduler: Parents of final stage: List() 19/04/25 19:19:58 INFO DAGScheduler: Missing parents: List() 19/04/25 19:19:58 INFO DAGScheduler: Submitting ResultStage 0 (PythonRDD[1] at reduce at /private/tmp/spark/spark-2.4.1-bin-hadoop2.7/./examples/src/main/python/pi.py:44), which has no missing parents 19/04/25 19:19:58 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 6.2 KB, free 366.3 MB) 19/04/25 19:19:58 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.2 KB, free 366.3 MB) 19/04/25 19:19:58 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.254.17.9:61683 (size: 4.2 KB, free: 366.3 MB) 19/04/25 19:19:58 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1161 19/04/25 19:19:58 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (PythonRDD[1] at reduce at /private/tmp/spark/spark-2.4.1-bin-hadoop2.7/./examples/src/main/python/pi.py:44) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)) 19/04/25 19:19:58 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks 19/04/25 19:20:00 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (10.254.17.9:61686) with ID 0 19/04/25 19:20:00 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 172.18.0.3, executor 0, partition 0, PROCESS_LOCAL, 7856 bytes) 19/04/25 19:20:00 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 172.18.0.3, executor 0, partition 1, PROCESS_LOCAL, 7856 bytes) 19/04/25 19:20:00 INFO BlockManagerMasterEndpoint: Registering block manager 172.18.0.3:40993 with 
434.4 MB RAM, BlockManagerId(0, 172.18.0.3, 40993, None) 19/04/25 19:20:01 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.18.0.3:40993 (size: 4.2 KB, free: 434.4 MB) 19/04/25 19:20:01 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, 172.18.0.3, executor 0, partition 2, PROCESS_LOCAL, 7856 bytes) 19/04/25 19:20:01 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, 172.18.0.3, executor 0, partition 3, PROCESS_LOCAL, 7856 bytes) 19/04/25 19:20:02 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 172.18.0.3, executor 0): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main ("%d.%d" % sys.version_info[:2], version)) Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:835)

19/04/25 19:20:02 INFO TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) on 172.18.0.3, executor 0: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
) [duplicate 1]
19/04/25 19:20:02 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 4, 172.18.0.3, executor 0, partition 0, PROCESS_LOCAL, 7856 bytes)
19/04/25 19:20:02 INFO TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2) on 172.18.0.3, executor 0: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
) [duplicate 2]
19/04/25 19:20:02 INFO TaskSetManager: Starting task 2.1 in stage 0.0 (TID 5, 172.18.0.3, executor 0, partition 2, PROCESS_LOCAL, 7856 bytes)
19/04/25 19:20:02 INFO TaskSetManager: Lost task 3.0 in stage 0.0 (TID 3) on 172.18.0.3, executor 0: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
) [duplicate 3]
19/04/25 19:20:02 INFO TaskSetManager: Starting task 3.1 in stage 0.0 (TID 6, 172.18.0.3, executor 0, partition 3, PROCESS_LOCAL, 7856 bytes)
19/04/25 19:20:02 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 4) on 172.18.0.3, executor 0: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
) [duplicate 4]
19/04/25 19:20:02 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 7, 172.18.0.3, executor 0, partition 0, PROCESS_LOCAL, 7856 bytes)
19/04/25 19:20:02 INFO TaskSetManager: Lost task 2.1 in stage 0.0 (TID 5) on 172.18.0.3, executor 0: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
) [duplicate 5]
19/04/25 19:20:02 INFO TaskSetManager: Starting task 2.2 in stage 0.0 (TID 8, 172.18.0.3, executor 0, partition 2, PROCESS_LOCAL, 7856 bytes)
19/04/25 19:20:02 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 7) on 172.18.0.3, executor 0: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
) [duplicate 6]
19/04/25 19:20:02 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 9, 172.18.0.3, executor 0, partition 0, PROCESS_LOCAL, 7856 bytes)
19/04/25 19:20:02 INFO TaskSetManager: Lost task 3.1 in stage 0.0 (TID 6) on 172.18.0.3, executor 0: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
) [duplicate 7]
19/04/25 19:20:02 INFO TaskSetManager: Starting task 3.2 in stage 0.0 (TID 10, 172.18.0.3, executor 0, partition 3, PROCESS_LOCAL, 7856 bytes)
19/04/25 19:20:02 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 9) on 172.18.0.3, executor 0: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
) [duplicate 8]
19/04/25 19:20:02 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
19/04/25 19:20:02 INFO TaskSetManager: Lost task 2.2 in stage 0.0 (TID 8) on 172.18.0.3, executor 0: org.apache.spark.api.python.PythonException (Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
) [duplicate 9]
19/04/25 19:20:02 INFO TaskSchedulerImpl: Cancelling stage 0
19/04/25 19:20:02 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled
19/04/25 19:20:02 INFO TaskSchedulerImpl: Stage 0 was cancelled
19/04/25 19:20:02 INFO DAGScheduler: ResultStage 0 (reduce at /private/tmp/spark/spark-2.4.1-bin-hadoop2.7/./examples/src/main/python/pi.py:44) failed in 3.724 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 9, 172.18.0.3, executor 0): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.

at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:835)

Driver stacktrace:
19/04/25 19:20:02 INFO DAGScheduler: Job 0 failed: reduce at /private/tmp/spark/spark-2.4.1-bin-hadoop2.7/./examples/src/main/python/pi.py:44, took 3.786224 s
Traceback (most recent call last):
File "/private/tmp/spark/spark-2.4.1-bin-hadoop2.7/./examples/src/main/python/pi.py", line 44, in
count = spark.sparkContext.parallelize(range(1, n + 1), partitions).map(f).reduce(add)
File "/tmp/spark/spark-2.4.1-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 844, in reduce
File "/tmp/spark/spark-2.4.1-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/rdd.py", line 816, in collect
File "/tmp/spark/spark-2.4.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in call
File "/tmp/spark/spark-2.4.1-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/tmp/spark/spark-2.4.1-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 9, 172.18.0.3, executor 0): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.

at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:835)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/worker.py", line 267, in main
("%d.%d" % sys.version_info[:2], version))
Exception: Python in worker has different version 3.5 than that in driver 3.7, PySpark cannot run with different minor versions.Please check environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.

at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:945)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2101)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.lang.Thread.run(Thread.java:835)

19/04/25 19:20:02 WARN TaskSetManager: Lost task 3.2 in stage 0.0 (TID 10, 172.18.0.3, executor 0): TaskKilled (Stage cancelled)
19/04/25 19:20:02 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
19/04/25 19:20:02 INFO SparkContext: Invoking stop() from shutdown hook
19/04/25 19:20:02 INFO SparkUI: Stopped Spark web UI at http://10.254.17.9:4040
19/04/25 19:20:02 INFO StandaloneSchedulerBackend: Shutting down all executors
19/04/25 19:20:02 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
19/04/25 19:20:02 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/04/25 19:20:02 INFO MemoryStore: MemoryStore cleared
19/04/25 19:20:02 INFO BlockManager: BlockManager stopped
19/04/25 19:20:02 INFO BlockManagerMaster: BlockManagerMaster stopped
19/04/25 19:20:02 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/04/25 19:20:02 INFO SparkContext: Successfully stopped SparkContext
19/04/25 19:20:02 INFO ShutdownHookManager: Shutdown hook called
19/04/25 19:20:02 INFO ShutdownHookManager: Deleting directory /private/var/folders/bf/62kzgm4n4sx1x1wk6vjpwm6ndrr_bx/T/spark-26864dbb-9c33-4058-9885-d8b28997591c
19/04/25 19:20:02 INFO ShutdownHookManager: Deleting directory /private/var/folders/bf/62kzgm4n4sx1x1wk6vjpwm6ndrr_bx/T/spark-873066ef-4a89-4dea-916b-a7f1e16d6721
19/04/25 19:20:02 INFO ShutdownHookManager: Deleting directory /private/var/folders/bf/62kzgm4n4sx1x1wk6vjpwm6ndrr_bx/T/spark-26864dbb-9c33-4058-9885-d8b28997591c/pyspark-e3d85999-c979-4504-9d1b-dcd5a17f3b42
➜ spark-2.4.1-bin-hadoop2.7

Possible cause: Python version mismatch between driver and worker.
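For what it's worth, the exception itself points at PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON. A minimal sketch, assuming the image's workers run Python 3.5 and a matching 3.5 interpreter is available on the machine running spark-submit (the python3.5 path is a placeholder, not something shipped by this repo):

# Sketch: pin both sides of PySpark to the same minor version before submitting.
export PYSPARK_PYTHON=python3.5         # interpreter the executors should use
export PYSPARK_DRIVER_PYTHON=python3.5  # interpreter the driver uses on the submitting host
./bin/spark-submit --master spark://localhost:7077 ./examples/src/main/python/pi.py 10

The same pair of variables could presumably also be set in the worker service's environment block in docker-compose.yml.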

docker-compose up fails with "client version 1.21 is too old"

When I clone the repository and run docker-compose up from the root, I get:

ERROR: client version 1.21 is too old. Minimum supported API version is 1.24, please upgrade your client to a newer version

The output of docker-compose --version:

docker-compose version 1.23.2, build 1110ad01

The output of docker --version:

Docker version 18.09.0, build 4d60db4

I've made a fix locally following this GitHub issue, which I'll submit as a pull request.

Question: Use from Java app

I'm using your docker-compose.yml in a Docker container.

How can I setup a SparkSession?

SparkSession spark = SparkSession
        .builder()
        .appName("Java Spark SQL basic example")
        .master("spark://master:7077")
        .getOrCreate();

I have actually set master in /etc/hosts to the Docker machine's IP.

The one above does not seem to work.

Exception when connecting with kafka topic

Hello, I'm using the Spark 2.4.1 image. I made a simple Scala application that pulls data from a Kafka topic and displays it in the console:

import org.apache.spark.sql._
import org.apache.spark.sql.functions._
import org.apache.spark.sql.streaming._
import org.apache.spark.sql.types._

object StreamHandler {
  def main(args: Array[String]): Unit = {

    // initialize Spark
    val spark = SparkSession
      .builder()
      .appName("KafkaStreaming")
      .getOrCreate()

    // avoid warnings
    import spark.implicits._

    // read from Kafka
    val inputDF = spark
      .readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "sometopic") 
      .option("startingOffsets", "earliest")
      .option("partition.assignment.strategy", "range")
      .load();

    val expandedDF = inputDF.selectExpr("CAST(value AS STRING)")
   
    val query = expandedDF
      .writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}

The problem is that when I run spark-submit I get this exception:

2021-07-27 19:30:19,993 INFO streaming.MicroBatchExecution: Using MicroBatchReader [KafkaV2[Subscribe[plates-raw]]] from DataSourceV2 named 'kafka' [org.apache.spark.sql.kafka010.KafkaSourceProvider@61532d01]
2021-07-27 19:30:20,016 INFO streaming.MicroBatchExecution: Starting new streaming query.
2021-07-27 19:30:20,030 INFO streaming.MicroBatchExecution: Stream started from {}
2021-07-27 19:30:20,172 WARN kafka010.KafkaOffsetReader: Error in attempt 1 getting Kafka offsets:
org.apache.kafka.common.config.ConfigException: Missing required configuration "partition.assignment.strategy" which has no default value.
        at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)

This is the spark-submit command:

docker exec -it spark-master ./bin/spark-submit \
  --packages "org.apache.spark:spark-sql-kafka-0-10_2.11:2.4.0" \
  --class StreamHandler --master spark://master:7077 --executor-memory 2G --total-executor-cores 2 \
  /tmp/data/streamhandler_2.11-0.1.jar

I tried the Spark 2.4.1 image and I still got the same error. This app works normally with the Spark 2.2.0 image, but I need Spark 2.4 or above so that I can later save the topic values in Cassandra with .foreachBatch.

Any help would be greatly appreciated!

Switch Python to Miniconda

apt-get is fine, but Miniconda's installation is a little "cleaner", with the added benefit of making conda available.

In addition, installing pandas in the image would be really useful.
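A rough sketch of what the switch could look like. The Miniconda installer URL, the /opt/conda prefix, and the assumption that curl exists in the base image are all mine, not the project's:

FROM gettyimages/spark
# Sketch only: install Miniconda (Python 3) plus pandas; paths and URL are assumptions.
RUN curl -fsSL https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -o /tmp/miniconda.sh \
 && bash /tmp/miniconda.sh -b -p /opt/conda \
 && rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH
RUN conda install -y pandas
# Point PySpark at the conda interpreter (assumes driver and workers share this image).
ENV PYSPARK_PYTHON=/opt/conda/bin/python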

Deploying Spark on a Swarm Cluster

Hello there,
I am using your image to create a Spark cluster on a Swarm cluster. Here is what I do:
$ docker network create --driver overlay spark
$ docker service create --name master --network spark --constraint 'node.labels.name==manager-node' -p 7077:7077 -p 8080:8080 gettyimages/spark:2.1.0-hadoop-2.7 bin/spark-class org.apache.spark.deploy.master.Master
$ docker service create --name worker --network spark --constraint 'node.labels.name==worker-node' -p 8081:8080 gettyimages/spark:2.1.0-hadoop-2.7 bin/spark-class org.apache.spark.deploy.worker.Worker spark://10.7.1.3:7077
where 10.7.1.3 is the IP address that I see in the Spark master web UI.
However, the Spark worker doesn't show up in the master web UI. Port 7077 is open on the Swarm manager node, and if I deploy the Spark worker and master on the same node, the master can see the worker. I am using the latest version of Docker. I am not sure whether your image has been built for such a scenario. Any hint as to what might be the reason for this?

Thanks,
Henaras
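One hedged thing to check (an assumption about the cause, not a confirmed fix): on an overlay network the service name is resolvable through Docker's built-in DNS, so pointing the worker at spark://master:7077 rather than the master's internal IP avoids depending on an address the worker node may not be able to reach:

# Sketch: same worker service as above, but using the master service's DNS name.
docker service create --name worker --network spark \
  --constraint 'node.labels.name==worker-node' -p 8081:8080 \
  gettyimages/spark:2.1.0-hadoop-2.7 \
  bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077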

Problem with Jupyter

I've tried to install Jupyter on top of your setup like this:

# spark.dockerfile
FROM gettyimages/spark

RUN apt-get -y update
RUN apt-get -y install python-dev python-pip python-numpy 
RUN apt-get -y install python-scipy python-pandas gfortran 
RUN apt-get -y install ipython ipython-notebook

RUN pip3 install --upgrade pip 
RUN pip3 install jupyter 

and then created a container in docker-compose to run Jupyter:

jupyter:
    build:
      context: .
      dockerfile: spark.dockerfile
    expose:
      - 7001
      - 7002
      - 7003
      - 7004
      - 7005
      - 7006
      - 7077
      - 6066
    ports:
      - 8888:8888
    environment:
      PYSPARK_DRIVER_PYTHON: ipython 
      PYSPARK_DRIVER_PYTHON_OPTS: 'notebook --ip=0.0.0.0 --no-browser --debug'
    command: jupyter notebook --ip=0.0.0.0 --no-browser --debug

However, the kernel is never able to start, and I get messages like these:

Connecting to: tcp://127.0.0.1:43250
jupyter_1 | [I 17:48:34.044 NotebookApp] KernelRestarter: restarting kernel (1/5)
jupyter_1 | [D 17:48:34.045 NotebookApp] Starting kernel: ['/usr/bin/python3', '-m', 'ipykernel', '-f', '/root/.local/share/jupyter/runtime/kernel-4579c251-9bb9-4d3f-ae26-6d5f75a175a5.json']
jupyter_1 | [D 17:48:34.047 NotebookApp] Connecting to: tcp://127.0.0.1:58451
jupyter_1 | [I 17:48:37.050 NotebookApp] KernelRestarter: restarting kernel (2/5)
jupyter_1 | [D 17:48:37.050 NotebookApp] Starting kernel: ['/usr/bin/python3', '-m', 'ipykernel', '-f', '/root/.local/share/jupyter/runtime/kernel-4579c251-9bb9-4d3f-ae26-6d5f75a175a5.json']
jupyter_1 | [D 17:48:37.052 NotebookApp] Connecting to: tcp://127.0.0.1:58451
jupyter_1 | [I 17:48:40.054 NotebookApp] KernelRestarter: restarting kernel (3/5)

Any idea why? Using this other command doesn't help either:

/usr/spark-2.0.0/bin/pyspark --master spark://master:7077

Notice that PYSPARK_DRIVER_PYTHON and PYSPARK_DRIVER_PYTHON_OPTS are already set in the docker compose file.
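A side observation, not a confirmed diagnosis: the Dockerfile above mixes Python 2 packages (python-dev, python-pip, ipython-notebook) with pip3, while the kernel is started as /usr/bin/python3. A sketch that stays on Python 3 throughout; the Debian package names are assumptions about what the base image's distribution provides:

FROM gettyimages/spark
# Sketch: keep everything on Python 3 so the kernel launched as /usr/bin/python3 has ipykernel.
RUN apt-get -y update \
 && apt-get -y install python3-dev python3-pip python3-numpy python3-pandas
RUN pip3 install --upgrade pip \
 && pip3 install jupyter ipykernel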

File .yml version 1 is not supported

Hi,
I have run the .yml file but I got an error saying that version 1 is not supported. Is there a new version of the .yml file that supports the new Docker Swarm release?
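If it helps, this error usually means the compose file has no top-level version key (format 1), which newer Docker/Swarm tooling rejects; declaring a newer format at the top is typically enough. A sketch reusing the service already shown in this repository's compose file earlier in this page:

# docker-compose.yml: declare a v3 file format so newer Docker / Swarm accepts it
version: '3'
services:
  master:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.master.Master -h master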

"100M" Error reading csv file from S3

I am running this command:

spark-submit load_ratings.py --conf "fs.s3a.multipart.size=104857600"

and I encountered an error running my Python script that says:

py4j.protocol.Py4JJavaError: An error occurred while calling o39.csv. [2020-05-26 06:31:14,237] {bash_operator.py:126} INFO - : java.lang.NumberFormatException: For input string: "100M"

Error Trace:
Traceback (most recent call last):
[2020-05-26 08:58:46,289] {bash_operator.py:126} INFO - File "/usr/local/airflow/dags/python_scripts/load_ratings.py", line 52, in
[2020-05-26 08:58:46,307] {bash_operator.py:126} INFO - schema=ratings_schema)
[2020-05-26 08:58:46,309] {bash_operator.py:126} INFO - File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 472, in csv
[2020-05-26 08:58:46,332] {bash_operator.py:126} INFO - File "/usr/spark-2.4.1/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in call
[2020-05-26 08:58:46,339] {bash_operator.py:126} INFO - File "/usr/spark-2.4.1/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
[2020-05-26 08:58:46,347] {bash_operator.py:126} INFO - File "/usr/spark-2.4.1/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
[2020-05-26 08:58:46,380] {bash_operator.py:126} INFO - py4j.protocol.Py4JJavaError: An error occurred while calling o30.csv.
[2020-05-26 08:58:46,381] {bash_operator.py:126} INFO - : java.lang.NumberFormatException: For input string: "100M"
[2020-05-26 08:58:46,381] {bash_operator.py:126} INFO - at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
[2020-05-26 08:58:46,381] {bash_operator.py:126} INFO - at java.lang.Long.parseLong(Long.java:589)
[2020-05-26 08:58:46,382] {bash_operator.py:126} INFO - at java.lang.Long.parseLong(Long.java:631)
[2020-05-26 08:58:46,382] {bash_operator.py:126} INFO - at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1499)
[2020-05-26 08:58:46,382] {bash_operator.py:126} INFO - at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:248)
[2020-05-26 08:58:46,382] {bash_operator.py:126} INFO - at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288)
[2020-05-26 08:58:46,383] {bash_operator.py:126} INFO - at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
[2020-05-26 08:58:46,383] {bash_operator.py:126} INFO - at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337)
[2020-05-26 08:58:46,383] {bash_operator.py:126} INFO - at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305)
[2020-05-26 08:58:46,384] {bash_operator.py:126} INFO - at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
[2020-05-26 08:58:46,390] {bash_operator.py:126} INFO - at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
[2020-05-26 08:58:46,398] {bash_operator.py:126} INFO - at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:547)
[2020-05-26 08:58:46,404] {bash_operator.py:126} INFO - at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:545)
[2020-05-26 08:58:46,404] {bash_operator.py:126} INFO - at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
[2020-05-26 08:58:46,410] {bash_operator.py:126} INFO - at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
[2020-05-26 08:58:46,410] {bash_operator.py:126} INFO - at scala.collection.immutable.List.foreach(List.scala:392)
[2020-05-26 08:58:46,420] {bash_operator.py:126} INFO - at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
[2020-05-26 08:58:46,421] {bash_operator.py:126} INFO - at scala.collection.immutable.List.flatMap(List.scala:355)
[2020-05-26 08:58:46,421] {bash_operator.py:126} INFO - at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:545)
[2020-05-26 08:58:46,421] {bash_operator.py:126} INFO - at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:359)
[2020-05-26 08:58:46,422] {bash_operator.py:126} INFO - at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
[2020-05-26 08:58:46,431] {bash_operator.py:126} INFO - at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
[2020-05-26 08:58:46,437] {bash_operator.py:126} INFO - at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:615)
[2020-05-26 08:58:46,443] {bash_operator.py:126} INFO - at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[2020-05-26 08:58:46,450] {bash_operator.py:126} INFO - at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[2020-05-26 08:58:46,451] {bash_operator.py:126} INFO - at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[2020-05-26 08:58:46,456] {bash_operator.py:126} INFO - at java.lang.reflect.Method.invoke(Method.java:498)
[2020-05-26 08:58:46,456] {bash_operator.py:126} INFO - at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
[2020-05-26 08:58:46,457] {bash_operator.py:126} INFO - at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
[2020-05-26 08:58:46,475] {bash_operator.py:126} INFO - at py4j.Gateway.invoke(Gateway.java:282)
[2020-05-26 08:58:46,475] {bash_operator.py:126} INFO - at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
[2020-05-26 08:58:46,476] {bash_operator.py:126} INFO - at py4j.commands.CallCommand.execute(CallCommand.java:79)
[2020-05-26 08:58:46,495] {bash_operator.py:126} INFO - at py4j.GatewayConnection.run(GatewayConnection.java:238)
[2020-05-26 08:58:46,496] {bash_operator.py:126} INFO - at java.lang.Thread.run(Thread.java:748)

Not sure what happened here, but I am guessing fs.s3a.multipart.size should not be configured to "100M". I already tried to change it in my spark-submit command but somehow it was still not picked up.

Link to my code repo:
https://github.com/alanchn31/Udacity-DE-ND-Capstone
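Two hedged observations rather than a verified fix: spark-submit treats everything after the application script as arguments to the script itself, so a --conf placed there never reaches Spark, and Hadoop filesystem settings passed through Spark generally need the spark.hadoop. prefix. A sketch with the numeric byte value and the flag moved before the script:

# Sketch: pass the Hadoop option to spark-submit (before the script) with the spark.hadoop. prefix.
spark-submit \
  --conf "spark.hadoop.fs.s3a.multipart.size=104857600" \
  load_ratings.py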

reading from s3 error

Running into issues trying to read from AWS S3.

%pyspark
df= spark.read.csv("s3://test/muppal/sample.csv")

I get this error:

Py4JJavaError: An error occurred while calling o54.csv.
: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3"
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3266)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3286)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
.......

And after I manually load the libraries, I get:

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Request Error: java.lang.IllegalStateException: Invalid class name: org.jets3t.service.utils.RestUtils$ConnManagerFactory
  at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:175)
  at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:567)
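A hedged sketch of the usual route rather than a confirmed fix: the bare s3:// scheme is not wired up in stock builds, so reading through s3a:// with the hadoop-aws artifact on the classpath is the common approach. The artifact version below is an assumption and must match the image's Hadoop version; the bucket path is copied from the example above:

# Sketch: launch the shell/notebook with the matching hadoop-aws package, e.g.
#   pyspark --packages org.apache.hadoop:hadoop-aws:3.0.0
# then read through the s3a connector instead of the legacy s3 scheme.
df = spark.read.csv("s3a://test/muppal/sample.csv")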

/usr/spark-1.6.2 belongs to... nobody

When these containers (gettyimages/spark:1.6.2-hadoop-2.6) are used (as a slave) on an Apcera platform, the following exceptions are raised:

[system-error] Process 'app' failed with status "exited(1)"
[stderr] 16/08/31 20:35:20 ERROR worker.Worker: Failed to create work directory /usr/spark-1.6.2/work

Indeed, the owner of spark-1.6.2 is... nobody:

root@ip-169-254-0-25:/# cd usr
root@ip-169-254-0-25:/usr# ls
bin  games  hadoop-2.6.3  include  java  jdk1.8.0_92  lib  local  sbin  share  spark-1.6.2  src
root@ip-169-254-0-25:/usr# ls -al
total 52
drwxr-xr-x 23 root   root    4096 Aug 26 23:31 .
drwxr-xr-x 41 root   root    4096 Aug 26 23:25 ..
drwxr-xr-x  2 root   root    4096 Aug 26 22:27 bin
drwxr-xr-x  2 root   root    4096 Aug 26 22:27 games
drwxr-xr-x  9  10021   10021 4096 Aug 26 22:27 hadoop-2.6.3
drwxr-xr-x  2 root   root    4096 Aug 26 22:27 include
lrwxrwxrwx  1 root   root      16 Aug 26 22:27 java -> /usr/jdk1.8.0_92
drwxr-xr-x  7 uucp       143 4096 Aug 26 22:28 jdk1.8.0_92
drwxr-xr-x 29 root   root    4096 Aug 26 23:18 lib
drwxrwsr-x 13 root   staff   4096 Aug 26 22:27 local
drwxr-xr-x  2 root   root    4096 Aug 26 22:27 sbin
drwxr-xr-x 62 root   root    4096 Aug 26 22:27 share
drwxr-xr-x 12 nobody nogroup 4096 Aug 26 22:28 spark-1.6.2
drwxr-xr-x  2 root   root    4096 Aug 26 22:28 src

It looks like Docker is natively able to get around that ownership, but some other platforms are not... so a chown would be useful.
Alternatively (?): create the /usr/spark-1.6.2/work/ directory in advance under root ownership.
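Either variant could be layered on top of the published image; a small sketch combining both suggestions (base tag and paths taken from the report above):

FROM gettyimages/spark:1.6.2-hadoop-2.6
# Pre-create the work directory and hand the Spark install to root, so platforms
# that cannot work around the 'nobody' ownership can still create work files.
RUN mkdir -p /usr/spark-1.6.2/work && chown -R root:root /usr/spark-1.6.2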

Have you been able to launch jobs with Java?

Hi,

I am running Spark with the following configuration:

version: '2'
services:
  master:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.master.Master -h master
    hostname: master
    environment:
      MASTER: spark://master:7077
      SPARK_CONF_DIR: /conf
      SPARK_PUBLIC_DNS: localhost
    expose:
      - 7001
      - 7002
      - 7003
      - 7004
      - 7005
      - 7006
      - 7077
      - 6066
    ports:
      - 4040:4040
      - 6066:6066
      - 7077:7077
      - 8080:8080
  worker:
    image: gettyimages/spark
    command: bin/spark-class org.apache.spark.deploy.worker.Worker spark://master:7077
    hostname: worker
    environment:
      SPARK_CONF_DIR: /conf
      SPARK_WORKER_CORES: 2
      SPARK_WORKER_MEMORY: 1g
      SPARK_WORKER_PORT: 8881
      SPARK_WORKER_WEBUI_PORT: 8081
      SPARK_PUBLIC_DNS: localhost
    links:
      - master
    expose:
      - 7012
      - 7013
      - 7014
      - 7015
      - 7016
      - 8881
    ports:
      - 8081:8081

And I have the following simple Java program:

SparkConf conf = new SparkConf()
        .setMaster("spark://localhost:7077")
        .setAppName("Word Count Sample App");
conf.set("spark.dynamicAllocation.enabled", "false");

String file = "test.txt";
JavaSparkContext sc = new JavaSparkContext(conf);
JavaRDD<String> textFile = sc.textFile("src/main/resources/" + file);
JavaPairRDD<String, Integer> counts = textFile
        .flatMap(s -> Arrays.asList(s.split("[ ,]")).iterator())
        .mapToPair(word -> new Tuple2<>(word, 1))
        .reduceByKey((a, b) -> a + b);
counts.foreach(p -> System.out.println(p));
System.out.println("Total words: " + counts.count());
counts.saveAsTextFile(file + "out.txt");

The problem I am having is that it generates the following command:

Spark Executor Command: "/usr/jdk1.8.0_131/bin/java" "-cp" "/conf:/usr/spark-2.3.0/jars/*:/usr/hadoop-2.8.3/etc/hadoop/:/usr/hadoop-2.8.3/etc/hadoop/*:/usr/hadoop-2.8.3/share/hadoop/common/lib/*:/usr/hadoop-2.8.3/share/hadoop/common/*:/usr/hadoop-2.8.3/share/hadoop/hdfs/*:/usr/hadoop-2.8.3/share/hadoop/hdfs/lib/*:/usr/hadoop-2.8.3/share/hadoop/yarn/lib/*:/usr/hadoop-2.8.3/share/hadoop/yarn/*:/usr/hadoop-2.8.3/share/hadoop/mapreduce/lib/*:/usr/hadoop-2.8.3/share/hadoop/mapreduce/*:/usr/hadoop-2.8.3/share/hadoop/tools/lib/*" "-Xmx1024M" "-Dspark.driver.port=59906" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@yeikel-pc:59906" "--executor-id" "6" "--hostname" "172.19.0.3" "--cores" "2" "--app-id" "app-20180401005243-0000" "--worker-url" "spark://[email protected]:8881"

which results in:

Caused by: java.io.IOException: Failed to connect to yeikel-pc:59906
	at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:245)
	at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:187)
	at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:198)
	at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:194)
	at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:190)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.UnknownHostException: yeikel-pc

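For what it's worth, the executor command above calls back to the driver at yeikel-pc, a hostname the containers cannot resolve, which matches the UnknownHostException. A hedged sketch (not a confirmed fix) using Spark's standard spark.driver.host property so the driver advertises an address the containers can actually reach; the IP below is a placeholder:

SparkConf conf = new SparkConf()
        .setMaster("spark://localhost:7077")
        .setAppName("Word Count Sample App")
        // Placeholder: an IP of the host machine that the worker containers can route to.
        .set("spark.driver.host", "192.168.1.10")
        .set("spark.dynamicAllocation.enabled", "false");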



Q: what do you use for orchestration?

Hi there!

Thanks for the awesome work; I got this to work locally via docker-compose in no time. I was just curious: what tools do you guys use to orchestrate Spark containers in a cluster?

While Compose translates well to Docker Swarm, Kubernetes's core concepts (pods, services, and replication controllers) make a lot more sense to me. It would be awesome if you could share a little about how you orchestrate Spark containers in a cluster :D

spark-cassandra-connector can't connect from spark docker cluster

Hi,
I'm trying to run a Spark job from my Scala application in this cluster initialized by docker-compose, and it immediately disconnects from the Cassandra cluster. It works fine if I run Spark in local mode. My Cassandra is on an Amazon EC2 instance, and it's available from my local machine, where this cluster is also running.

Could this be related to the network configuration of the cluster? The logs I get in my application look like this:

 INFO	[2017-02-14 20:33:32,529] [DatabaseScheduler_Worker-1] connector.cql.CassandraConnector (Logging.scala:35)     	Connected to Cassandra cluster: Test Cluster
 INFO	[2017-02-14 20:33:40,607] [pool-79-thread-1] connector.cql.CassandraConnector (Logging.scala:35)     	Disconnected from Cassandra cluster: Test Cluster

Elasticsearch Spark ! just asking

Hello there,

I'm just wondering what I can do to write to a remote Elasticsearch server.
All the tutorials I have found so far cover only the default configuration, but how can I change the host, port, etc.? No idea...
Any idea how to do this? Maybe add it to the documentation?

Many thanks.
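For reference, a minimal PySpark sketch of pointing the elasticsearch-hadoop connector at a remote server; es.nodes, es.port and es.nodes.wan.only are the connector's standard options, while the hostname, index name and the presence of the elasticsearch-hadoop jar on the classpath are assumptions here:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("es-write").getOrCreate()
df = spark.createDataFrame([(1, "foo"), (2, "bar")], ["id", "value"])

(df.write
   .format("org.elasticsearch.spark.sql")   # elasticsearch-hadoop data source
   .option("es.nodes", "es.example.com")    # remote Elasticsearch host (placeholder)
   .option("es.port", "9200")
   .option("es.nodes.wan.only", "true")     # often needed when ES is on another network
   .mode("append")
   .save("myindex/doc"))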

SSH question?

Hello,
Thanks for sharing this great work. It worked very well on my machine. But as I am new to Docker and Hadoop/Spark, I have a question about SSH that came up while reading the Dockerfile.

Traditionally, when setting up a multi-node cluster we need to set up SSH and hosts files on the master/slave hosts to enable communication between them. But in the Dockerfile I didn't find anything related to SSH; no SSH service is even installed.

So I am really wondering: how does the master node control the slave nodes without SSH?

Sorry to disturb you if you think this question is stupid, as I am new to this field.

Alex

Multiple worker setup

Hey,

I'm trying to make full use of an m4.4xlarge machine in a single-node setup, so I'd like to run 2 workers. I guess that setting SPARK_WORKER_INSTANCES: 2 still creates just a single worker, right? Do I have to create 2 worker containers, each with SPARK_WORKER_INSTANCES: 2?

SPARK_WORKER_INSTANCES is somewhat misleading here because, IMHO, one has to create each extra org.apache.spark.deploy.worker.Worker process/container explicitly.

4040 sparkui not running

Hi,
where can I find the default Spark UI that gives a great overview of the jobs? Neither on the master nor on a worker does anything seem to be running on port 4040.

spark-shell error reading parquet

I recently pulled the latest image using gettyimages/spark:2.4.1-hadoop-3.0, and when I ran the following code I received an obscure java.lang.IllegalArgumentException: Unsupported class file major version 56 error. Here's the full spark-shell session:

root@master:/usr/spark-2.4.1# spark-shell
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/usr/spark-2.4.1/jars/spark-unsafe_2.11-2.4.1.jar) to method java.nio.Bits.unaligned()
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2019-04-19 05:12:25,480 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://localhost:4040
Spark context available as 'sc' (master = spark://master:7077, app id = app-20190419051234-0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.1
      /_/
         
Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 12.0.1)
Type in expressions to have them evaluated.
Type :help for more information.

scala> val df = Seq((1,2,3),(4,5,6)).toDF("a", "b", "c")
df: org.apache.spark.sql.DataFrame = [a: int, b: int ... 1 more field]

scala> df.show(false)
+---+---+---+
|a  |b  |c  |
+---+---+---+
|1  |2  |3  |
|4  |5  |6  |
+---+---+---+


scala> df.write.parquet("blah")
                                                                                
scala> val x = spark.read.parquet("blah")
java.lang.IllegalArgumentException: Unsupported class file major version 56
  at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:166)
  at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:148)
  at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:136)
  at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:237)
  at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:49)
  at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:517)
  at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:500)
  at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
  at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
  at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
  at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
  at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
  at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
  at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
  at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:500)
  at org.apache.xbean.asm6.ClassReader.readCode(ClassReader.java:2175)
  at org.apache.xbean.asm6.ClassReader.readMethod(ClassReader.java:1238)
  at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:631)
  at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:355)
  at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:307)
  at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:306)
  at scala.collection.immutable.List.foreach(List.scala:392)
  at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:306)
  at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
  at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2100)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
  at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$.mergeSchemasInParallel(ParquetFileFormat.scala:633)
  at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.inferSchema(ParquetFileFormat.scala:241)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$6.apply(DataSource.scala:180)
  at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$6.apply(DataSource.scala:180)
  at scala.Option.orElse(Option.scala:289)
  at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:179)
  at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:373)
  at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
  at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:641)
  at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:625)
  ... 49 elided

scala> 

Any ideas on what could have caused this? Is it because the base image is debian:stretch without any tag qualifier?
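One likely culprit, hedged: the session banner above reports Java 12.0.1, and class file major version 56 corresponds to Java 12, which Spark 2.4.x (targeting Java 8) cannot handle in its closure-cleaning path. A quick PySpark sketch to confirm which JVM the driver is actually running on (note that _jvm is an internal PySpark handle):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("jvm-check").getOrCreate()
# py4j handle into the driver JVM; prints e.g. "12.0.1" or "1.8.0_131".
print(spark.sparkContext._jvm.java.lang.System.getProperty("java.version"))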

spark-submit job.py can't use sparkConf to pass configuration parameters

I'm submitting my Python job using the bin/spark-submit script. I want to configure the parameters in the Python code through the SparkConf & SparkContext classes. I tried to set appName, master, spark.cores.max and spark.scheduler.mode in a SparkConf object and pass it as an argument (conf) to the SparkContext. It turned out that the job was not sent to the master of the standalone cluster I set up (it just ran locally). From the print statements below, it's clear that the SparkConf I passed to the SparkContext has all 4 configurations, but the conf from the SparkContext object doesn't have any of my updates beyond the defaults.

As alternatives, I tried using conf/spark-defaults.conf or --properties-file my-spark.conf --master spark://spark-master:7077. That works.
I also tried to set the master using the master parameter separately, sc = pyspark.SparkContext(master="spark://spark-master:7077", conf=conf), and it worked as well!

So it seems that only the conf parameter is not picked up by the SparkContext correctly.
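For reference, a minimal sketch of the workaround described above, using the same standalone master URL; passing master= directly to SparkContext (or --master to spark-submit) is what worked, while the value set only via conf.setMaster was ignored when launched through spark-submit:

import pyspark

conf = pyspark.SparkConf()
conf.setAppName("word-count")
conf.set("spark.scheduler.mode", "FAIR")
conf.set("spark.cores.max", "4")

# The explicit master= argument is honoured; conf.setMaster(...) alone was not here.
sc = pyspark.SparkContext(master="spark://spark-master:7077", conf=conf)
print(sc.getConf().get("spark.master"))  # spark://spark-master:7077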

You can see that the master got updated:

SparkContextConf: [('spark.app.name', 'test.py'), ('spark.rdd.compress', 'True'), ('spark.driver.port', '41147'), ('spark.app.id', 'app-20170403234627-0001'), ('spark.master', 'spark://spark-master:7077'), ('spark.serializer.objectStreamReset', '100'), ('spark.executor.id', 'driver'), ('spark.submit.deployMode', 'client'), ('spark.files', 'file:/mnt/scratch/yidi/docker-volume/test.py'), ('spark.driver.host', '10.0.0.5')]

This is my python job code.

import operator
import pyspark

def main():
    '''Program entry point'''

    import socket
    print(socket.gethostbyname(socket.gethostname()))
    print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
    num_cores = 4
    conf = pyspark.SparkConf()
    conf.setAppName("word-count").setMaster("spark://spark-master:7077")#setExecutorEnv("CLASSPATH", path)
    conf.set("spark.scheduler.mode", "FAIR")
    conf.set("spark.cores.max", num_cores)
    print("Conf: {}".format(conf.getAll()))
    sc = pyspark.SparkContext(conf=conf)
    print("SparkContextConf: {}".format(sc.getConf().getAll()))

    #Intialize a spark context
    # with pyspark.SparkContext(conf=conf) as sc:
    #     #Get a RDD containing lines from this script file  
    #     lines = sc.textFile("/mnt/scratch/yidi/docker-volume/war_and_peace.txt")
    #     #Split each line into words and assign a frequency of 1 to each word
    #     words = lines.flatMap(lambda line: line.split(" ")).map(lambda word: (word, 1))
    #     #count the frequency for words
    #     counts = words.reduceByKey(operator.add)
    #     #Sort the counts in descending order based on the word frequency
    #     sorted_counts =  counts.sortBy(lambda x: x[1], False)
    #     #Get an iterator over the counts to print a word and its frequency
    #     for word,count in sorted_counts.toLocalIterator():
    #         print("{} --> {}".format(word.encode('utf-8'), count))
    # print "Spark conf: ", conf.getAll()
if __name__ == "__main__":
    main()

Console output running the job above:

root@bdd1ba99bf4f:/usr/spark-2.1.0# bin/spark-submit /mnt/scratch/yidi/docker-volume/test.py 
10.0.0.5
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Conf: dict_items([('spark.scheduler.mode', 'FAIR'), ('spark.app.name', 'word-count'), ('spark.master', 'spark://spark-master:7077'), ('spark.cores.max', '4')])
17/04/03 23:24:27 INFO spark.SparkContext: Running Spark version 2.1.0
17/04/03 23:24:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/03 23:24:27 INFO spark.SecurityManager: Changing view acls to: root
17/04/03 23:24:27 INFO spark.SecurityManager: Changing modify acls to: root
17/04/03 23:24:27 INFO spark.SecurityManager: Changing view acls groups to: 
17/04/03 23:24:27 INFO spark.SecurityManager: Changing modify acls groups to: 
17/04/03 23:24:27 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
17/04/03 23:24:27 INFO util.Utils: Successfully started service 'sparkDriver' on port 38193.
17/04/03 23:24:27 INFO spark.SparkEnv: Registering MapOutputTracker
17/04/03 23:24:27 INFO spark.SparkEnv: Registering BlockManagerMaster
17/04/03 23:24:27 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/04/03 23:24:27 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/04/03 23:24:27 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-7b545c86-9344-4f73-a3db-e891c562714d
17/04/03 23:24:27 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
17/04/03 23:24:27 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/04/03 23:24:27 INFO util.log: Logging initialized @1090ms
17/04/03 23:24:27 INFO server.Server: jetty-9.2.z-SNAPSHOT
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3950ddbb{/jobs,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@286430d5{/jobs/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@404eb034{/jobs/job,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@79e3feec{/jobs/job/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4661596e{/stages,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4f8a3bef{/stages/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a70fd3a{/stages/stage,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1c826806{/stages/stage/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@50e4e8d1{/stages/pool,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4e2fe461{/stages/pool/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@33cb59b3{/storage,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3c06c594{/storage/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4b5300a5{/storage/rdd,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7a6ef942{/storage/rdd/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1301217d{/environment,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@19217cec{/environment/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4a241145{/executors,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@470d45aa{/executors/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5d9d8eff{/executors/threadDump,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4fc94fbc{/executors/threadDump/json,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@258dd139{/static,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@880f037{/,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@395b7dae{/api,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3cea7196{/jobs/job/kill,null,AVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@77257b2b{/stages/stage/kill,null,AVAILABLE}
17/04/03 23:24:27 INFO server.ServerConnector: Started ServerConnector@788dbc45{HTTP/1.1}{0.0.0.0:4040}
17/04/03 23:24:27 INFO server.Server: Started @1154ms
17/04/03 23:24:27 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/04/03 23:24:27 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://localhost:4040
17/04/03 23:24:27 INFO spark.SparkContext: Added file file:/mnt/scratch/yidi/docker-volume/test.py at file:/mnt/scratch/yidi/docker-volume/test.py with timestamp 1491261867734
17/04/03 23:24:27 INFO util.Utils: Copying /mnt/scratch/yidi/docker-volume/test.py to /tmp/spark-5bb22b93-172d-40d6-8a37-6e3da5dc15a6/userFiles-ed0a6692-b96a-4b3d-be16-215a21cf836b/test.py
17/04/03 23:24:27 INFO executor.Executor: Starting executor ID driver on host localhost
17/04/03 23:24:27 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44223.
17/04/03 23:24:27 INFO netty.NettyBlockTransferService: Server created on 10.0.0.5:44223
17/04/03 23:24:27 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/04/03 23:24:27 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.0.0.5, 44223, None)
17/04/03 23:24:27 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.0.0.5:44223 with 366.3 MB RAM, BlockManagerId(driver, 10.0.0.5, 44223, None)
17/04/03 23:24:27 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.0.0.5, 44223, None)
17/04/03 23:24:27 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.0.0.5, 44223, None)
17/04/03 23:24:27 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@c06133a{/metrics/json,null,AVAILABLE}
SparkContextConf: [('spark.app.name', 'test.py'), ('spark.rdd.compress', 'True'), ('spark.serializer.objectStreamReset', '100'), ('spark.master', 'local[*]'), ('spark.executor.id', 'driver'), ('spark.submit.deployMode', 'client'), ('spark.files', 'file:/mnt/scratch/yidi/docker-volume/test.py'), ('spark.driver.host', '10.0.0.5'), ('spark.app.id', 'local-1491261867756'), ('spark.driver.port', '38193')]
17/04/03 23:24:27 INFO spark.SparkContext: Invoking stop() from shutdown hook
17/04/03 23:24:27 INFO server.ServerConnector: Stopped ServerConnector@788dbc45{HTTP/1.1}{0.0.0.0:4040}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@77257b2b{/stages/stage/kill,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3cea7196{/jobs/job/kill,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@395b7dae{/api,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@880f037{/,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@258dd139{/static,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4fc94fbc{/executors/threadDump/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5d9d8eff{/executors/threadDump,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@470d45aa{/executors/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4a241145{/executors,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@19217cec{/environment/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1301217d{/environment,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7a6ef942{/storage/rdd/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4b5300a5{/storage/rdd,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3c06c594{/storage/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@33cb59b3{/storage,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4e2fe461{/stages/pool/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@50e4e8d1{/stages/pool,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1c826806{/stages/stage/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7a70fd3a{/stages/stage,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4f8a3bef{/stages/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4661596e{/stages,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@79e3feec{/jobs/job/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@404eb034{/jobs/job,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@286430d5{/jobs/json,null,UNAVAILABLE}
17/04/03 23:24:27 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3950ddbb{/jobs,null,UNAVAILABLE}
17/04/03 23:24:27 INFO ui.SparkUI: Stopped Spark web UI at http://localhost:4040
17/04/03 23:24:27 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/04/03 23:24:27 INFO memory.MemoryStore: MemoryStore cleared
17/04/03 23:24:27 INFO storage.BlockManager: BlockManager stopped
17/04/03 23:24:27 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/04/03 23:24:27 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/04/03 23:24:27 INFO spark.SparkContext: Successfully stopped SparkContext
17/04/03 23:24:27 INFO util.ShutdownHookManager: Shutdown hook called
17/04/03 23:24:27 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-5bb22b93-172d-40d6-8a37-6e3da5dc15a6/pyspark-50d75419-36b8-4dd2-a7c6-7d500c7c5bca
17/04/03 23:24:27 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-5bb22b93-172d-40d6-8a37-6e3da5dc15a6

master/worker unable to start/kill.

Hi guys, thank you for this image. The idea and simplicity are great.
But I am experiencing strange problems with this image. I am running it via docker-compose and the behaviour of the master/worker containers is very unpredictable. For example, only the master starts, without the worker, yet docker ps shows both containers running. docker kill "id" can only kill the master; on the worker it just hangs.

I see this behaviour with Docker 1.9.1 on OS X 10.11.3 El Capitan, and I've also reproduced it on Ubuntu 14.04.

Do you have any idea what it could be?

Thanks,
Alex
