Conda can help manage many of the dependencies correctly…
Install Spark. Assuming Spark is installed in /opt/spark, include the following in ~/.bashrc:
export SPARK_HOME=/opt/spark
export PATH=$SPARK_HOME/bin:$PATH
Create a conda environment containing all the required dependencies except Spark:
conda create -n findspark-jupyter-openjdk8-py3 -c conda-forge python=3.5 jupyter=1.0 notebook=5.0 openjdk=8.0.144 findspark=1.1.0
Activate the environment:
$ source activate findspark-jupyter-openjdk8-py3
Launch the Jupyter Notebook server:
$ jupyter notebook
Try it out with the following script, which computes Pi (borrowed from another example):
import findspark
findspark.init()  # locate the Spark installation via SPARK_HOME

import pyspark
import random

sc = pyspark.SparkContext(appName="Pi")
num_samples = 100000000

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

# The fraction of random points that land inside the quarter circle
# approximates pi/4
count = sc.parallelize(range(0, num_samples)).filter(inside).count()
pi = 4 * count / num_samples
print(pi)
sc.stop()
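The Monte Carlo idea behind the script can be checked without Spark at all: points drawn uniformly from the unit square fall inside the quarter circle with probability pi/4, so 4 * (hits / samples) converges to pi. Below is a minimal plain-Python sketch of that same estimate (the function name `estimate_pi` and the fixed seed are illustrative choices, not part of the Spark example):

```python
import random

def estimate_pi(num_samples, seed=42):
    # Fixed seed so the estimate is reproducible across runs
    rng = random.Random(seed)
    count = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        # Point lies inside the quarter circle of radius 1
        if x * x + y * y < 1:
            count += 1
    # Inside fraction approximates pi/4, so scale by 4
    return 4 * count / num_samples

print(estimate_pi(100000))
```

With 100000 samples the result typically lands within a few hundredths of 3.14159; the Spark version above distributes the same sampling loop across the cluster via `parallelize` and `filter`.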