Spark analysis file rent_analyse.py
Changing how Spark reads the CSV file
The reader is now written as:

    from pyspark import SparkContext
    from pyspark.sql import SQLContext

    sparkContext = SparkContext("local", "rent_analyse")
    sqlContext = SQLContext(sparkContext)
    df = sqlContext.read.format('com.databricks.spark.csv') \
        .options(header='true', inferschema='true') \
        .load(filename)

The original code raised a connection error, and the format parameter has to be supplied when reading the CSV file; without it, the read fails with another error.
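Note that on Spark 2.x and later the same read can be expressed with the built-in SparkSession CSV reader, which removes the dependency on the external com.databricks.spark.csv package. A minimal sketch, assuming Spark 2.x+ and the same app name and file as above:

    from pyspark.sql import SparkSession

    # Local session; the app name matches the SparkContext above
    spark = SparkSession.builder \
        .master("local") \
        .appName("rent_analyse") \
        .getOrCreate()

    # header/inferSchema replace the options() call; CSV support is built in
    df = spark.read.csv("file:///develop/sparkSpace/rent.csv",
                        header=True, inferSchema=True)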
When reading the CSV file, the path needs the file:/// prefix, e.g.:

    "file:///develop/sparkSpace/rent.csv"

If only a bare filename is passed, Spark resolves it against the Hadoop file system instead, at a path like:

    "hdfs://localhost:9000/user/root/rent.csv"
"hdfs://localhost:9000/user/root/rent.csv"同时pyecharts由于版本问题,现在的版本其用法已经更新,具体可访问https://github.com/pyecharts/pyecharts/
Original project file:
    # -*- coding: utf-8 -*-
    from pyecharts import Bar  # pyecharts v0.x API

    def draw_bar(all_list):
        print("开始绘图")
        # District names: Haicang, Huli, Jimei, Siming, Xiang'an, Tong'an
        attr = ["海沧", "湖里", "集美", "思明", "翔安", "同安"]
        v0 = all_list[0]  # minimums
        v1 = all_list[1]  # maximums
        v2 = all_list[2]  # means
        v3 = all_list[3]  # medians
        bar = Bar("厦门市租房租金概况")  # "Overview of rents in Xiamen"
        bar.add("最小值", attr, v0, is_stack=True)
        bar.add("最大值", attr, v1, is_stack=True)
        bar.add("平均值", attr, v2, is_stack=True)
        bar.add("中位数", attr, v3, is_stack=True)
        bar.render()
        print("结束绘图")

Updated project file:
    # -*- coding: utf-8 -*-
    from pyecharts.charts import Bar  # v1.x moved chart classes into pyecharts.charts
    from pyecharts import options as opts

    def draw_bar(all_list):
        print("开始绘图")
        attr = ["海沧", "湖里", "集美", "思明", "翔安", "同安"]
        v0 = all_list[0]
        v1 = all_list[1]
        v2 = all_list[2]
        v3 = all_list[3]
        bar = Bar()
        bar.add_xaxis(attr)
        bar.add_yaxis("最小值", v0)
        bar.add_yaxis("最大值", v1)
        bar.add_yaxis("平均值", v2)
        bar.add_yaxis("中位数", v3)
        bar.set_global_opts(title_opts=opts.TitleOpts(title="厦门市租房租金概况"))
        # Old v0.x calls, kept for comparison:
        # bar.add("最小值", attr, v0, is_stack=True)
        # bar.add("最大值", attr, v1, is_stack=True)
        # bar.add("平均值", attr, v2, is_stack=True)
        # bar.add("中位数", attr, v3, is_stack=True)
        bar.render()
        print("结束绘图")
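For reference, a minimal driver for the updated function; the all_list values here are made-up sample rents, ordered [minimums, maximums, means, medians] to match the four series:

    # Illustrative numbers only, one value per district in attr's order
    all_list = [
        [800, 900, 1000, 1500, 600, 650],      # minimums
        [5000, 6000, 6500, 9000, 4000, 4200],  # maximums
        [2100, 2400, 2700, 3900, 1500, 1600],  # means
        [1900, 2200, 2500, 3600, 1400, 1500],  # medians
    ]
    draw_bar(all_list)  # writes the chart to render.html by default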