Here is an example implementation in Python (Spark 1.6+) using the DataFrame API.
import pyspark.sql.functions as F
import numpy as np
from pyspark.sql.types import FloatType
Suppose we have the customers' monthly salaries in a "salaries" Spark dataframe, and we would like to find the median salary of each customer across the months.
Step 1: define a UDF that computes the median of the values in each row's list:

def find_median(values_list):
    try:
        median = np.median(values_list)  # get the median of values in a list in each row
        return round(float(median), 2)
    except Exception:
        return None  # if there is anything wrong with the given values

median_finder = F.udf(find_median, FloatType())
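Before wiring the UDF into Spark, the helper can be sanity-checked as a plain Python function; this quick local check needs only numpy (the salary figures are made-up sample values):

```python
import numpy as np

def find_median(values_list):
    try:
        median = np.median(values_list)  # median of the collected values
        return round(float(median), 2)
    except Exception:
        return None  # malformed input yields a null median

print(find_median([1000.0, 1500.0, 2000.0]))  # odd count: middle value -> 1500.0
print(find_median([1000.0, 2000.0]))          # even count: mean of the middle two -> 1500.0
```

Note that `round(..., 2)` matches the `FloatType()` return type declared for the UDF: the function must return a Python float (or None), not a numpy scalar.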
Step 2: aggregate by collecting the salary column into a list of salaries for each row:
salaries_list = salaries.groupBy("customer_id").agg(F.collect_list("salary").alias("salaries"))
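`F.collect_list("salary")` gathers all salary values for each `customer_id` into a single array per group. The same grouping can be mimicked in plain Python for illustration (this is not Spark code; the customer IDs and salaries are hypothetical sample data):

```python
from collections import defaultdict

# toy rows standing in for the salaries dataframe: (customer_id, salary)
rows = [("c1", 1000.0), ("c2", 3000.0), ("c1", 2000.0), ("c2", 5000.0), ("c1", 1500.0)]

grouped = defaultdict(list)
for customer_id, salary in rows:
    grouped[customer_id].append(salary)  # same effect as F.collect_list("salary")

print(dict(grouped))  # {'c1': [1000.0, 2000.0, 1500.0], 'c2': [3000.0, 5000.0]}
```

One caveat: in Spark, `collect_list` makes no ordering guarantee for the collected values, but ordering does not matter here since the median is order-independent.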
Step 3: call the median_finder UDF on the salaries column and add the median as a new column:
salaries_list = salaries_list.withColumn("median",median_finder("salaries"))
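Putting the steps together, the expected end result can be sketched without a cluster by chaining the same logic in plain Python (again with made-up sample data; the real pipeline runs these steps as Spark transformations):

```python
from collections import defaultdict
import numpy as np

# (customer_id, salary) rows standing in for the salaries dataframe
rows = [("c1", 1000.0), ("c1", 2000.0), ("c1", 1500.0), ("c2", 3000.0), ("c2", 5000.0)]

# Step 2 equivalent: collect salaries per customer
grouped = defaultdict(list)
for customer_id, salary in rows:
    grouped[customer_id].append(salary)

# Step 3 equivalent: apply the median finder to each collected list
medians = {cid: round(float(np.median(vals)), 2) for cid, vals in grouped.items()}
print(medians)  # {'c1': 1500.0, 'c2': 4000.0}
```

On Spark, the resulting `salaries_list` dataframe carries one row per `customer_id` with its collected `salaries` array and the new `median` column.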