1. Symptom
A job written in Java runs fine in Spark on YARN mode. The same job written in Python runs fine in Spark local mode, but fails in Spark on YARN mode with the following exception:
Exception: Randomness of hash of string should be disabled via PYTHONHASHSEED
    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:390)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
2. Root Cause
Investigation traced the problem to Python's PYTHONHASHSEED environment variable. For security, newer versions of Python (randomization is on by default since Python 3.3) add a random salt when computing string hashes. Within a single interpreter session the salt is fixed, so hash results are consistent. But when a cluster job runs across multiple machines, each server starts its own interpreter session with its own random salt, so the same string hashes to different values on different nodes, and the job fails with the error above.
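The effect is easy to reproduce outside Spark. Below is a minimal sketch (a hypothetical check_hash.py, not part of the original fix, assuming Python 3.3+ where randomization is on by default): it launches fresh child interpreters with and without a pinned seed, standing in for the per-node worker processes. The two pinned runs always agree on hash("spark"); the two unpinned runs usually differ.

# check_hash.py: demonstrate Python's string hash randomization.
import os
import subprocess
import sys

def hash_in_fresh_interpreter(seed):
    # Spawn a new interpreter so it gets its own hash seed, the way
    # each cluster node launches its own Python worker process.
    env = dict(os.environ)
    if seed is None:
        env.pop("PYTHONHASHSEED", None)   # random seed per interpreter
    else:
        env["PYTHONHASHSEED"] = seed      # pinned seed
    out = subprocess.check_output(
        [sys.executable, "-c", 'print(hash("spark"))'], env=env)
    return out.decode().strip()

print("pinned:", hash_in_fresh_interpreter("0"), hash_in_fresh_interpreter("0"))
print("random:", hash_in_fresh_interpreter(None), hash_in_fresh_interpreter(None))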
3. Solution
The fix is simple: either disable random salting or give every node the same salt value. Method 1: set the executor environment variable programmatically via SparkConf:
from pyspark import SparkConf

conf = SparkConf()
conf.set("spark.executorEnv.PYTHONHASHSEED", "0")
Method 2: edit spark-defaults.conf and add:
spark.executorEnv.PYTHONHASHSEED=0
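Whichever method is used, the setting can be verified by reading the variable back from the executors. A minimal sketch (a hypothetical verify_seed.py, not from the original post):

# verify_seed.py: confirm executors actually see PYTHONHASHSEED=0.
import os
from pyspark import SparkConf, SparkContext

conf = SparkConf().set("spark.executorEnv.PYTHONHASHSEED", "0")
sc = SparkContext(conf=conf)
# Run one task per partition and report the variable each worker sees.
seeds = (sc.parallelize(range(4), 4)
           .map(lambda _: os.environ.get("PYTHONHASHSEED"))
           .collect())
print(seeds)  # expect ['0', '0', '0', '0'] when running on YARN
sc.stop()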