I have a DataFrame (insertDF) parsed from JSON, with the following format:
+-------------------------------------------------+-----------+----------+
|Cls_details                                      |prdct_cde  |roll_nbr  |
+-------------------------------------------------+-----------+----------+
|{"key1":"value1","key2":"value2","key3":"value3"}|DKLA       |123453    |
|{"key1":"value1","key2":"value2","key3":"value3"}|GHTD       |123454    |
|{"key1":"value1","key2":"value2","key3":"value3"}|ILDDA      |123455    |
+-------------------------------------------------+-----------+----------+
I want to save this DataFrame to a Cassandra table. The Spark Scala code I am using is:
insertDF
.write.format("org.apache.spark.sql.cassandra")
.options(Map("keyspace" -> "abc", "table" -> "def"))
.mode(SaveMode.Append).save()
But I get the following error:
com.datastax.spark.connector.types.TypeConversionException: Cannot convert object {"key1":"value1","key2":"value2","key3":"value3"} of type class java.lang.String to Map[AnyRef,AnyRef].
Please suggest a solution in Spark Scala. Any help would be appreciated.
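The error suggests the target Cassandra column is a `map` type (e.g. `map<text, text>`), while `Cls_details` in the DataFrame is a plain JSON string, so the connector cannot convert it. A likely fix is to parse the string into a real `MapType` column with `from_json` before writing. A minimal sketch, assuming the Cassandra column for `Cls_details` is declared as `map<text, text>` and the column names match the table above:

```scala
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{MapType, StringType}

// Convert the JSON string column into a Map[String, String] column
// so it matches the Cassandra map<text, text> column type.
val mapDF = insertDF.withColumn(
  "Cls_details",
  from_json(col("Cls_details"), MapType(StringType, StringType))
)

mapDF
  .write.format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "abc", "table" -> "def"))
  .mode(SaveMode.Append)
  .save()
```

If the JSON values are not all strings, adjust the value type in `MapType` accordingly, or use a `StructType` schema and a Cassandra UDT instead of a map.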