I can't seem to write out a JavaRDD<T> where T is, say, a Person class. I have defined it as:
public class Person implements Serializable
{
private static final long serialVersionUID = 1L;
private String name;
private String age;
private Address address;
....
with Address:
public class Address implements Serializable
{
private static final long serialVersionUID = 1L;
private String City; private String Block;
...<getters and setters>
Then I create a JavaRDD like this:
JavaRDD<Person> people = sc.textFile("/user/johndoe/spark/data/people.txt").map(new Function<String, Person>()
{
public Person call(String line)
{
String[] parts = line.split(",");
Person person = new Person();
person.setName(parts[0]);
person.setAge("2");
Address address = new Address("HomeAdd","141H");
person.setAddress(address);
return person;
}
});
Note - I am setting the Address manually, the same for everyone; this is essentially a nested RDD. When I try to save it as a Parquet file:
DataFrame dfschemaPeople = sqlContext.createDataFrame(people, Person.class);
dfschemaPeople.write().parquet("/user/johndoe/spark/data/out/people.parquet");
where the Address class is:
import java.io.Serializable;
public class Address implements Serializable
{
public Address(String city, String block)
{
super();
City = city;
Block = block;
}
private static final long serialVersionUID = 1L;
private String City;
private String Block;
//Omitting getters and setters
}
I get the error:
Caused by: java.lang.ClassCastException: com.test.schema.Address cannot be cast to org.apache.spark.sql.Row
I am running Spark 1.4.1.
- Is this a known bug?
- If I do the same thing by importing a nested JSON file with the same structure, I can save it to Parquet (a short sketch of that route is at the end of this question).
- Even if I create a sub DataFrame, for example:
DataFrame dfSubset = sqlContext.sql("SELECT address.city FROM PersonTable");
I still get the same error.
So what gives? How can I read a complex data structure from a text file and save it as Parquet? It seems I can't.
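For reference, the nested JSON route mentioned above, which does work, looks roughly like this (just a sketch; the input path is only an example):

// The JSON reader infers the nested schema from the data itself, so no JavaBean is involved.
DataFrame peopleJson = sqlContext.read().json("/user/johndoe/spark/data/people.json");
peopleJson.write().parquet("/user/johndoe/spark/data/out/people_json.parquet");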
You are using the Java API, which has limitations. From the Spark documentation: http://spark.apache.org/docs/1.4.1/sql-programming-guide.html#interoperating-with-rdds
Spark SQL supports automatically converting an RDD of JavaBeans into a DataFrame. The BeanInfo, obtained using reflection, defines the schema of the table. Currently, Spark SQL does not support JavaBeans that contain nested or complex types such as Lists or Arrays. You can create a JavaBean by creating a class that implements Serializable and has getters and setters for all of its fields.
With Scala case classes it will work (updated to write in Parquet format):
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

case class Address(city: String, block: String)
case class Person(name: String, age: String, address: Address)

object Test2 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Simple Application").setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    import sqlContext.implicits._

    val people = sc.parallelize(List(
      Person("a", "b", Address("a", "b")),
      Person("c", "d", Address("c", "d"))))

    // The nested schema, including the address struct, is inferred from the case classes.
    val df = sqlContext.createDataFrame(people)
    df.write.mode("overwrite").parquet("/tmp/people.parquet")
  }
}
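If you need to stay on the Java API, one workaround is to skip bean inference altogether and build Rows against an explicit schema. The sketch below assumes the Person and Address beans and the JavaRDD<Person> people from the question (the wrapper class and getter names are my own guesses); I have not run it on 1.4.1, so treat it as a starting point:

import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class PersonToRows {
    public static DataFrame toDataFrame(SQLContext sqlContext, JavaRDD<Person> people) {
        // Spell out the nested address struct and the top-level person schema by hand,
        // since nested JavaBeans are not inferred automatically.
        StructType addressType = DataTypes.createStructType(Arrays.asList(
            DataTypes.createStructField("city", DataTypes.StringType, true),
            DataTypes.createStructField("block", DataTypes.StringType, true)));
        StructType personType = DataTypes.createStructType(Arrays.asList(
            DataTypes.createStructField("name", DataTypes.StringType, true),
            DataTypes.createStructField("age", DataTypes.StringType, true),
            DataTypes.createStructField("address", addressType, true)));

        // Flatten each bean into a Row; the inner Row must line up with addressType.
        JavaRDD<Row> rows = people.map(new Function<Person, Row>() {
            public Row call(Person p) {
                return RowFactory.create(
                    p.getName(),
                    p.getAge(),
                    RowFactory.create(p.getAddress().getCity(), p.getAddress().getBlock()));
            }
        });

        return sqlContext.createDataFrame(rows, personType);
    }
}

With the data already in Row form against a known schema, the same write().parquet(...) call from the question should then go through.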