java.lang.ClassNotFoundException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition

lrpiutwd · posted 2021-06-13 in Cassandra

I've been working with Cassandra for a while, and now I'm trying to set up Spark and the Spark Cassandra Connector. I'm doing this with IntelliJ IDEA on Windows 10 (it's also my first time using IntelliJ IDEA and Scala).
build.gradle

apply plugin: 'scala'
apply plugin: 'idea'
apply plugin: 'eclipse'

repositories {
    mavenCentral()

    flatDir {
        dirs 'runtime libs'
    }
}

idea {
    project {
        jdkName = '1.8'
        languageLevel = '1.8'
    }
}

dependencies {
    compile group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.4.5'
    compile group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.4.5'
    compile group: 'org.scala-lang', name: 'scala-library', version: '2.11.12'
    compile group: 'com.datastax.spark', name: 'spark-cassandra-connector_2.11', version: '2.5.0'
    compile group: 'log4j', name: 'log4j', version: '1.2.17'
}

configurations.all {
    resolutionStrategy {
        force 'com.google.guava:guava:12.0.1'
    }
}

compileScala.targetCompatibility = "1.8"
compileScala.sourceCompatibility = "1.8"

jar {
    zip64 true
    archiveName = "ModuleName.jar"
    from {
        configurations.compile.collect {
            it.isDirectory() ? it : zipTree(it)
        }
    }
    manifest {
        attributes 'Main-Class': 'org.module.SentinelSparkModule'
    }
    exclude 'META-INF/*.RSA', 'META-INF/*.SF', 'META-INF/*.DSA'
}

ModuleName.scala

package org.module
import org.apache.spark.sql.SparkSession
import com.datastax.spark.connector._
import java.sql.Timestamp

object SentinelSparkModule {

  // java.sql.Timestamp maps to a Cassandra timestamp column;
  // TimestampType is a Spark SQL schema descriptor, not a field type
  case class Document(id: Int, time: Timestamp, data: String)

  def main(args: Array[String]) {
    val spark = SparkSession.builder
      .master("spark://192.168.0.3:7077")
      .appName("App")
      .config("spark.cassandra.connection.host", "127.0.0.1")
      .config("spark.cassandra.connection.port", "9042")
      .getOrCreate()

    // Trying it without [Document] since it throws: 'Failed to map constructor parameter id in
    // org.module.ModuleName.Document to a column of keyspace.table'

    val documentRDD = spark.sparkContext
      .cassandraTable/*[Document]*/("keyspace", "table")
    documentRDD.take(10).foreach(println)
    spark.stop()
  }
}

I have a Spark master running at spark://192.168.0.3:7077 and a worker attached to that master, but I haven't tried submitting the compiled jar with spark-submit from a console; for now I just want to get it working in the IDE.
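
For running from the IDE against a standalone master, one relevant knob is Spark's spark.jars setting, which ships local jars to the executor classpaths; a minimal sketch, assuming the uber jar produced by the jar task above lands at the hypothetical path build/libs/ModuleName.jar:

val spark = SparkSession.builder
  .master("spark://192.168.0.3:7077")
  .appName("App")
  .config("spark.cassandra.connection.host", "127.0.0.1")
  .config("spark.cassandra.connection.port", "9042")
  // hypothetical path: wherever Gradle writes the assembled jar;
  // spark.jars puts it on the executor classpath, so connector classes
  // such as CassandraPartition can be loaded on the workers
  .config("spark.jars", "build/libs/ModuleName.jar")
  .getOrCreate()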
Thanks.

ylamdve6 · answer 1

The Cassandra connector jar needs to be added to the workers' classpath. One way to do that is to build an uber jar that contains all of the required dependencies and submit it to the cluster.
See: Building an uber jar with Gradle
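
Once that uber jar exists, submitting it to the standalone master from the question could look roughly like this (assuming Gradle's default output location for the jar task):

spark-submit \
  --class org.module.SentinelSparkModule \
  --master spark://192.168.0.3:7077 \
  build/libs/ModuleName.jar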
Also, make sure to change the scope of the dependencies in the build file from compile to provided for all jars except the Cassandra connector.
Reference: https://reflectoring.io/maven-scopes-gradle-configurations/
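
In Gradle terms, provided corresponds most closely to the compileOnly configuration; a sketch of the adjusted dependencies block under that assumption:

dependencies {
    // supplied by the Spark runtime on the cluster: needed at compile time,
    // but kept out of the uber jar
    compileOnly group: 'org.apache.spark', name: 'spark-core_2.11', version: '2.4.5'
    compileOnly group: 'org.apache.spark', name: 'spark-sql_2.11', version: '2.4.5'
    compileOnly group: 'org.scala-lang', name: 'scala-library', version: '2.11.12'
    compileOnly group: 'log4j', name: 'log4j', version: '1.2.17'

    // not on the workers' classpath, so it must ship inside the application jar
    compile group: 'com.datastax.spark', name: 'spark-cassandra-connector_2.11', version: '2.5.0'
}

Because the jar task above bundles configurations.compile, only the connector would then be inlined into ModuleName.jar, which keeps the artifact small while still fixing the ClassNotFoundException on the workers.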
