I am completely lost in a weird situation. I have a list l:
l = example_data.map(lambda x: get_labeled_prediction(w,x)).collect()
print l, type(l)
The output looks like:
[(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)] <type 'list'>
When I try to create a DataFrame from this list:
m = sqlContext.createDataFrame(l, ["prediction", "label"])
it throws this error:
TypeError Traceback (most recent call last)
<ipython-input-90-4a49f7f67700> in <module>()
56 l = example_data.map(lambda x: get_labeled_prediction(w,x)).collect()
57 print l, type(l)
---> 58 m = sqlContext.createDataFrame(l, ["prediction", "label"])
59 '''
60 g = example_data.map(lambda x:gradient_summand(w, x)).sum()
/databricks/spark/python/pyspark/sql/context.py in createDataFrame(self, data, schema, samplingRatio)
423 rdd, schema = self._createFromRDD(data, schema, samplingRatio)
424 else:
--> 425 rdd, schema = self._createFromLocal(data, schema)
426 jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
427 jdf = self._ssql_ctx.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
/databricks/spark/python/pyspark/sql/context.py in _createFromLocal(self, data, schema)
339
340 if schema is None or isinstance(schema, (list, tuple)):
--> 341 struct = self._inferSchemaFromList(data)
342 if isinstance(schema, (list, tuple)):
343 for i, name in enumerate(schema):
/databricks/spark/python/pyspark/sql/context.py in _inferSchemaFromList(self, data)
239 warnings.warn("inferring schema from dict is deprecated,"
240 "please use pyspark.sql.Row instead")
--> 241 schema = reduce(_merge_type, map(_infer_schema, data))
242 if _has_nulltype(schema):
243 raise ValueError("Some of types cannot be determined after inferring")
/databricks/spark/python/pyspark/sql/types.py in _infer_schema(row)
831 raise TypeError("Can not infer schema for type: %s" % type(row))
832
--> 833 fields = [StructField(k, _infer_type(v), True) for k, v in items]
834 return StructType(fields)
835
/databricks/spark/python/pyspark/sql/types.py in _infer_type(obj)
808 return _infer_schema(obj)
809 except TypeError:
--> 810 raise TypeError("not supported type: %s" % type(obj))
811
812
TypeError: not supported type: <type 'numpy.float64'>
But when I hard-code the list inline:
tt = sqlContext.createDataFrame([(0.0, 59.0), (0.0, 51.0), (0.0, 81.0), (0.0, 8.0), (0.0, 86.0), (0.0, 86.0), (0.0, 60.0), (0.0, 54.0), (0.0, 54.0), (0.0, 84.0)], ["prediction", "label"])
tt.collect()
it works fine:
[Row(prediction=0.0, label=59.0),
Row(prediction=0.0, label=51.0),
Row(prediction=0.0, label=81.0),
Row(prediction=0.0, label=8.0),
Row(prediction=0.0, label=86.0),
Row(prediction=0.0, label=86.0),
Row(prediction=0.0, label=60.0),
Row(prediction=0.0, label=54.0),
Row(prediction=0.0, label=54.0),
Row(prediction=0.0, label=84.0)]
What causes this problem, and how can I solve it? Any hint would be appreciated.
You have a list of float64 values, and I think Spark does not like that type. When you hard-code the list, on the other hand, it is just a list of native Python float values.
Here is a question and answer about how to convert from numpy data types to native Python data types.
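To illustrate the type difference directly (a minimal sketch, assuming numpy is installed): `numpy.float64` is a distinct type from Python's built-in `float`, even though the values compare equal, and both `float()` and numpy's `.item()` method produce a native float that Spark's schema inference can handle:

```python
import numpy as np

x = np.float64(59.0)         # the kind of value .collect() returns here
print(type(x))               # numpy.float64 -- not a native float

# Either conversion yields a plain Python float:
a = float(x)                 # built-in constructor
b = x.item()                 # numpy's "to native Python scalar" method
print(type(a), type(b))      # both are the built-in float type
print(a == x)                # True -- the value itself is unchanged
```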
I have run into this problem, and here is my solution, which uses float() to convert the type:
1. At first, the type is np.float64:
my_rdd.collect()
output ==> [2.8,3.9,1.2]
2. Convert the type to a Python float:
my_convert=my_rdd.map(lambda x: (float(x),)).collect()
output ==> [(2.8,),(3.9,),(1.2,)]
3. Run it again and no error is raised:
sqlContext.createDataFrame(my_convert).show()
4. For your sample, I suggest:
li = example_data.map(lambda x: get_labeled_prediction(w,x)).map(lambda y:(float(y[0]),float(y[1]))).collect()
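Without a Spark cluster at hand, the per-tuple conversion that the extra `.map(lambda y: (float(y[0]), float(y[1])))` step performs can be sketched on a plain Python list (assuming numpy is installed; the tuples simulate what get_labeled_prediction would return):

```python
import numpy as np

# Simulated collect() output: tuples of numpy.float64 values
pairs = [(np.float64(0.0), np.float64(59.0)),
         (np.float64(0.0), np.float64(51.0))]

# Convert each element to a native Python float, as in step 4 above
converted = [(float(p), float(l)) for p, l in pairs]

print(converted)  # same values, now as native floats
print(all(type(v) is float
          for t in converted for v in t))  # True -- safe for createDataFrame
```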