Spark SQL Code Example Analysis
This article walks through a Spark SQL code example. The content is detailed and easy to follow; if you are interested in Spark SQL, read along and hopefully it will be helpful.
Following the Spark SQL example on the official website, I wrote my own script:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD

case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)

// Create an RDD of UserLog objects and register it as a table.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^")).map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")

// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")

// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId: " + t(0)).collect().foreach(println)
Running it failed with the following error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 50.0 failed 1 times, most recent failure: Lost task 1.0 in stage 50.0 (TID 73, localhost): java.lang.ArrayIndexOutOfBoundsException: 5
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:30)
    at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:30)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1319)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
    at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:910)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
    at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1319)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
The log shows that the problem is an array index out of bounds.
Running the command
sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^")).foreach(x => println(x.size))
shows that one record's split result has a size of 5:
6
6
6
6
6
6
6
6
6
6
15/05/21 20:47:37 INFO Executor: Finished task 0.0 in stage 2.0 (TID 4). 1774 bytes result sent to driver
6
6
6
6
6
6
5
6
15/05/21 20:47:37 INFO Executor: Finished task 1.0 in stage 2.0 (TID 5). 1774 bytes result sent to driver
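To inspect the offending line itself rather than only its field count, a quick filter like the following can be run in the same spark-shell session (this check is an illustrative addition, not part of the original script):

// Hypothetical follow-up check: print the raw lines whose split size is not 6,
// so the malformed record can be looked at directly.
sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*")
  .filter(_.split("\\^").length != 6)
  .take(5)
  .foreach(println)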
The cause is that this record ends with empty fields: "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^"
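This is standard java.lang.String.split behavior: with the default limit, trailing empty strings are removed from the result array. A minimal sketch using the offending record illustrates it:

// split with the default limit drops trailing empty strings, so the record
// below (6 delimiters) yields only 5 elements and u(5) throws
// ArrayIndexOutOfBoundsException. A negative limit keeps the empty fields.
val line = "44671799^2015-03-27 20:56:05^2^117.93.193.238^0^^"
println(line.split("\\^").length)      // 5  (trailing empty fields dropped)
println(line.split("\\^", -1).length)  // 7  (empty fields preserved)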
A fix found online is to use the two-argument split(str, int) overload. The modified code:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.createSchemaRDD

case class UserLog(userid: String, time1: String, platform: String, ip: String, openplatform: String, appid: String)

// Create an RDD of UserLog objects and register it as a table.
// split with limit -1 keeps trailing empty fields, so every row has all 6 columns.
val user = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^", -1)).map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")

// SQL statements can be run by using the sql methods provided by sqlContext.
val allusers = sqlContext.sql("SELECT * FROM user_log")

// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
// The columns of a row in the result can be accessed by ordinal.
allusers.map(t => "UserId: " + t(0)).collect().foreach(println)
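As a defensive alternative (not from the original article), malformed rows can also be filtered out or logged before building the case-class RDD, so a single bad line cannot fail the whole job:

// Hypothetical defensive variant: keep only rows with at least 6 fields
// and inspect the rest, instead of relying solely on split's limit argument.
val fields = sc.textFile("/user/hive/warehouse/api_db_user_log/dt=20150517/*").map(_.split("\\^", -1))
fields.filter(_.length < 6).take(10).foreach(a => println("Malformed row: " + a.mkString("^")))
val user = fields.filter(_.length >= 6).map(u => UserLog(u(0), u(1), u(2), u(3), u(4), u(5)))
user.registerTempTable("user_log")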
That concludes this analysis of the Spark SQL code example; hopefully it has been useful. To learn more, keep an eye out for future updates. Thanks for reading!