How do I fix scala.tools.nsc.typechecker.Contexts$Context.imports(Contexts.scala:232) in an sbt project?

pnwntuvh · posted 2023-04-21 in Scala

The problem is the following error:

[error] at scala.tools.nsc.typechecker.Typers$Typer.typedApply$1(Typers.scala:4580)
[error] at scala.tools.nsc.typechecker.Typers$Typer.typedInAnyMode$1(Typers.scala:5343)
[error] at scala.tools.nsc.typechecker.Typers$Typer.typed1(Typers.scala:5360)
[error] at scala.tools.nsc.typechecker.Typers$Typer.runTyper$1(Typers.scala:5396)
[error] (Compile / compileIncremental) java.lang.StackOverflowError
[error] Total time: 11 s, completed Apr 25, 2019 7:11:28 PM

I also tried increasing the JVM options with javaOptions ++= Seq("-Xms512M", "-Xmx4048M", "-XX:MaxPermSize=4048M", "-XX:+CMSClassUnloadingEnabled"), but that did not help. All the dependencies seem to resolve correctly, so the error is rather puzzling.

project/build.properties
sbt.version=1.2.8
project/plugins.sbt
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "5.2.4")
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.5.1")
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.9")
And the build.sbt

name := "ProjectNew"

version := "4.0"

scalaVersion := "2.11.8"


fork := true

libraryDependencies ++= Seq(
  "org.scalaz" %% "scalaz-core" % "7.1.0" % "test",
  ("org.apache.spark" %% "spark-core" % "2.1.0.cloudera1").
    exclude("org.mortbay.jetty", "servlet-api").
    exclude("commons-beanutils", "commons-beanutils-core").
    //exclude("commons-collections", "commons-collections").
    exclude("com.esotericsoftware.minlog", "minlog").
    //exclude("org.apache.hadooop","hadoop-client").
    exclude("commons-logging", "commons-logging") % "provided",
  ("org.apache.spark" %% "spark-sql" % "2.1.0.cloudera1")
    .exclude("com.esotericsoftware.minlog","minlog")
    //.exclude("org.apache.hadoop","hadoop-client")
     % "provided",
  ("org.apache.spark" %% "spark-hive" % "2.1.0.cloudera1")
    .exclude("com.esotericsoftware.minlog","minlog")
    //.exclude("org.apache.hadoop","hadoop-client")
     % "provided",
  "spark.jobserver" % "job-server-api" % "0.4.0",
  "org.scalatest" %%"scalatest" % "2.2.4" % "test",
   "com.github.nscala-time" %% "nscala-time" % "1.6.0"  
 )

//libraryDependencies ++= Seq(
//  "org.apache.spark" %% "spark-core" % "1.5.0-cdh5.5.0" % "provided",
//  "org.apache.spark" %% "spark-sql" % "1.5.0-cdh5.5.0" % "provided",
//  "org.scalatest"%"scalatest_2.10" % "2.2.4" % "test",
//  "com.github.nscala-time" %% "nscala-time" % "1.6.0"  
// )

resolvers ++= Seq(
  "cloudera" at "http://repository.cloudera.com/artifactory/cloudera-repos/",
  "Job Server Bintray" at "http://dl.bintray.com/spark-jobserver/maven"
)
scalacOptions ++= Seq("-unchecked", "-deprecation")

assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

parallelExecution in Test := false

fork in Test := true

javaOptions ++= Seq("-Xms512M", "-Xmx4048M", "-XX:MaxPermSize=4048M", "-XX:+CMSClassUnloadingEnabled")

Answer 1 (2cmtqfgy):

This is a memory problem, and specifically one in sbt's own JVM: javaOptions in build.sbt only applies to forked test/run processes, while compilation (Compile / compileIncremental) runs in the JVM that launched sbt, so that is where the StackOverflowError has to be fixed. Following an existing answer, I increased the memory settings in the C:\Program Files (x86)\sbt\conf\sbtconfig.txt file as follows:

-Xmx2G
-XX:MaxPermSize=1000m
-XX:ReservedCodeCacheSize=1000m
-Xss8M
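
The flag that actually matters for a StackOverflowError in the typechecker is the thread stack size (-Xss8M); -XX:MaxPermSize is ignored on Java 8 and later. If editing the global sbtconfig.txt is not an option, the sbt 1.x launcher also reads a .jvmopts file from the project root, one option per line. A minimal sketch mirroring the settings above (this file is my suggestion, not part of the original answer):

-Xmx2G
-Xss8M
-XX:ReservedCodeCacheSize=1000m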

With these settings, running sbt package worked seamlessly and the compilation succeeded. Thanks, everyone.
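
For readers on Linux or macOS, where sbtconfig.txt does not exist, the same options can be passed through the SBT_OPTS environment variable, for example:

export SBT_OPTS="-Xmx2G -Xss8M -XX:ReservedCodeCacheSize=1000m"

sbt then launches its JVM with those limits, and the compiler thread gets the larger stack.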
