Cassandra's CqlInputFormat fails to build in Scala, but works in Java

mkshixfv published on 2021-06-15 in Cassandra

My Spark Scala code is as follows:

val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row])

The CqlInputFormat class is implemented in Cassandra's source code. I tried converting my code to Java and it worked, but the Scala version fails to build:

[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: inferred type arguments [org.apache.hadoop.io.LongWritable,com.datastax.driver.core.Row,org.apache.cassandra.hadoop.cql3.CqlInputFormat] do not conform to method newAPIHadoopRDD's type parameter bounds [K,V,F <: org.apache.hadoop.mapreduce.InputFormat[K,V]]
[error]         val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: type mismatch;
[error]  found   : Class[org.apache.cassandra.hadoop.cql3.CqlInputFormat](classOf[org.apache.cassandra.hadoop.cql3.CqlInputFormat])
[error]  required: Class[F]
[error]         val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error]                                                      ^
[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: type mismatch;
[error]  found   : Class[org.apache.hadoop.io.LongWritable](classOf[org.apache.hadoop.io.LongWritable])
[error]  required: Class[K]
[error]         val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error] /home/project/past/experiments/query/SparkApp/src/main/scala/SparkReader.scala:46: type mismatch;
[error]  found   : Class[com.datastax.driver.core.Row](classOf[com.datastax.driver.core.Row])
[error]  required: Class[V]
[error]         val input = sc.newAPIHadoopRDD(jconf, classOf[CqlInputFormat], classOf[LongWritable], classOf[Row]);
[error] four errors found
[error] (compile:compileIncremental) Compilation failed
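For context, a likely cause (an assumption, not confirmed by the log above): `CqlInputFormat` declares its key type as `java.lang.Long` rather than `LongWritable`, so passing `classOf[LongWritable]` violates the bound `F <: InputFormat[K,V]`, which Scala enforces strictly while Java's looser generics let the same call through. The shape of the failure can be reproduced with stand-in classes; the types below are illustrative, not the real Hadoop/Cassandra ones:

```scala
// Stand-ins reproducing the bound check behind newAPIHadoopRDD
trait InputFmt[K, V]
class LongWritableStandIn
class CqlFmtStandIn extends InputFmt[java.lang.Long, String]

object BoundDemo {
  // Same shape as newAPIHadoopRDD: F must be an InputFmt keyed by K with value V
  def read[K, V, F <: InputFmt[K, V]](fmt: Class[F], key: Class[K], value: Class[V]): String =
    s"${fmt.getSimpleName} keyed by ${key.getSimpleName}"

  def main(args: Array[String]): Unit = {
    // Compiles: the key class matches the format's declared key type
    println(read(classOf[CqlFmtStandIn], classOf[java.lang.Long], classOf[String]))

    // Does NOT compile -- the same "do not conform to ... type parameter
    // bounds" error as in the question, because CqlFmtStandIn is not an
    // InputFmt[LongWritableStandIn, String]:
    // read(classOf[CqlFmtStandIn], classOf[LongWritableStandIn], classOf[String])
  }
}
```

If this is the cause, passing `classOf[java.lang.Long]` as the key class (matching the format's declared key type) would satisfy the bound.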

Any suggestions? Thank you.


t5fffqht1#

If you are using Spark, you should use the Spark Cassandra Connector rather than the Hadoop integration, and preferably its DataFrame API.
I'd also recommend the DS320 course to learn more about Spark + Cassandra.
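A minimal sketch of the connector's DataFrame read path (the keyspace and table names are placeholders, and this assumes the spark-cassandra-connector package on the classpath plus a reachable cluster, so it is illustrative rather than runnable as-is):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("CassandraRead")
  .config("spark.cassandra.connection.host", "127.0.0.1") // connector setting
  .getOrCreate()

// Read a Cassandra table through the connector's DataFrame source
// ("my_ks" and "my_table" are hypothetical names)
val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "my_ks", "table" -> "my_table"))
  .load()

df.show()
```

Compared with newAPIHadoopRDD, this gives you Catalyst-optimized predicate pushdown and schema inference for free, which is why the connector is the recommended path.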
