How to turn off INFO logging in Spark?

2guxujil · published 2021-06-04 in Hadoop

I installed Spark using the AWS EC2 guide and I can launch the program fine with the bin/pyspark script to get the Spark prompt, and I can also complete the Quick Start guide successfully.
However, I cannot for the life of me figure out how to stop all of the verbose INFO logging after each command.
I have tried nearly every possible variation in the code below (commenting out, setting to OFF) in my log4j.properties file, both in the conf folder where I launch the application from and on each node, and nothing does anything. I still get INFO logging statements printed after executing each statement.
I am very confused about how this is supposed to work.


# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

Here is my full classpath when I launch with SPARK_PRINT_LAUNCH_COMMAND:
Spark Command: /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/bin/java -cp :/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/conf:/root/spark-1.0.1-bin-hadoop2/lib/spark-assembly-1.0.1-hadoop2.0.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar:/root/spark-1.0.1-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar -XX:MaxPermSize=128m -Djava.library.path= -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit spark-shell --class org.apache.spark.repl.Main
Contents of spark-env.sh:


#!/usr/bin/env bash

# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public dns name of the driver program
# - SPARK_CLASSPATH=/root/spark-1.0.1-bin-hadoop2/conf/

# Options read by executors and drivers running inside the cluster
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
# - SPARK_PUBLIC_DNS, to set the public DNS name of the driver program
# - SPARK_CLASSPATH, default classpath entries to append
# - SPARK_LOCAL_DIRS, storage directories to use on this node for shuffle and RDD data
# - MESOS_NATIVE_LIBRARY, to point to your libmesos.so if you use Mesos

# Options read in YARN client mode
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)
# - SPARK_DRIVER_MEMORY, Memory for Master (e.g. 1000M, 2G) (Default: 512 Mb)
# - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: ‘default’)
# - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
# - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.

# Options for the daemons used in the standalone deploy mode:
# - SPARK_MASTER_IP, to bind the master to a different IP address or hostname
# - SPARK_MASTER_PORT / SPARK_MASTER_WEBUI_PORT, to use non-default ports for the master
# - SPARK_MASTER_OPTS, to set config properties only for the master (e.g. "-Dx=y")
# - SPARK_WORKER_CORES, to set the number of cores to use on this machine
# - SPARK_WORKER_MEMORY, to set how much total memory workers have to give executors (e.g. 1000m, 2g)
# - SPARK_WORKER_PORT / SPARK_WORKER_WEBUI_PORT, to use non-default ports for the worker
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

export SPARK_SUBMIT_CLASSPATH="$FWDIR/conf"

ovfsdjhp1#

For PySpark, you can also set the log level on your context with sc.setLogLevel("FATAL"). From the docs:
Control our logLevel. This overrides any user-defined log settings. Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
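For instance, a minimal sketch of how this looks in a standalone PySpark script (the master and app name below are placeholders, not taken from the answer):

from pyspark import SparkConf, SparkContext

# Placeholder master/app name, for illustration only.
conf = SparkConf().setMaster("local[*]").setAppName("quiet-logs-demo")
sc = SparkContext(conf=conf)

# Silence everything below FATAL from this point on; lines logged while
# the context was starting up are not affected.
sc.setLogLevel("FATAL")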


ca1c2owp2#

If you want to keep using logging (the logging facility for Python), you can try splitting the configuration for your application and for Spark:

import logging

LoggerManager()  # the answerer's own logging-setup helper (not defined in this answer)
logger = logging.getLogger(__name__)
loggerSpark = logging.getLogger('py4j')
loggerSpark.setLevel('WARNING')
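A self-contained variant of the same idea, if you stand in logging.basicConfig for the answerer's LoggerManager helper (an assumption on my part, since that class isn't shown):

import logging

# Application-side logging setup (stand-in for LoggerManager()).
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(name)s: %(message)s")
logger = logging.getLogger(__name__)

# Quiet the chatty Py4J gateway logger that PySpark talks through.
logging.getLogger('py4j').setLevel(logging.WARNING)

logger.info("application logging is still visible")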

qqrboqgw3#

The way I do it is:
in the location where I run the spark-submit script, do

$ cp /etc/spark/conf/log4j.properties .
$ nano log4j.properties

Change INFO to whatever level of logging you want, then run your spark-submit.


mftmpeh84#

Programmatic way:

spark.sparkContext.setLogLevel("WARN")

Available options:

ERROR
WARN 
INFO

rsl1atfo5#

>>> log4j = sc._jvm.org.apache.log4j
>>> log4j.LogManager.getRootLogger().setLevel(log4j.Level.ERROR)

zsohkypk6#

Just add the param below to your spark-submit command:

--conf "spark.driver.extraJavaOptions=-Dlog4jspark.root.logger=WARN,console"

This overrides the system value temporarily, only for that job. Check the exact property name (log4jspark.root.logger here) in your log4j.properties file.
Hope this helps, cheers!


42fyovps7#

Inspired by pyspark/tests.py, I did:

def quiet_logs(sc):
    logger = sc._jvm.org.apache.log4j
    logger.LogManager.getLogger("org").setLevel(logger.Level.ERROR)
    logger.LogManager.getLogger("akka").setLevel(logger.Level.ERROR)

Calling this just after creating the SparkContext reduced the stderr lines logged for my test from 2647 to 163. However, creating the SparkContext itself logs 163 lines, up to

15/08/25 10:14:16 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0

and it's not clear to me how to adjust those programmatically.
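For what it's worth, a usage sketch for the quiet_logs helper above, assuming a plain local SparkContext (the app name is a placeholder):

from pyspark import SparkContext

sc = SparkContext("local[*]", "quiet-logs-test")  # startup INFO lines still appear here
quiet_logs(sc)  # from here on, 'org' and 'akka' messages below ERROR are silenced

rdd = sc.parallelize(range(10))
print(rdd.sum())  # runs without the usual per-stage INFO chatter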


klsxnrf18#

This may be due to how Spark computes its classpath. My hunch is that Hadoop's log4j.properties file is appearing ahead of Spark's on the classpath, preventing your changes from taking effect.
If you run

SPARK_PRINT_LAUNCH_COMMAND=1 bin/spark-shell

then Spark will print the full classpath used to launch the shell; in my case, I see

Spark Command: /usr/lib/jvm/java/bin/java -cp :::/root/ephemeral-hdfs/conf:/root/spark/conf:/root/spark/lib/spark-assembly-1.0.0-hadoop1.0.4.jar:/root/spark/lib/datanucleus-api-jdo-3.2.1.jar:/root/spark/lib/datanucleus-core-3.2.2.jar:/root/spark/lib/datanucleus-rdbms-3.2.1.jar -XX:MaxPermSize=128m -Djava.library.path=:/root/ephemeral-hdfs/lib/native/ -Xms512m -Xmx512m org.apache.spark.deploy.SparkSubmit spark-shell --class org.apache.spark.repl.Main

where /root/ephemeral-hdfs/conf sits at the head of the classpath.
I've opened an issue, [SPARK-2913], to fix this in the next release (and I should have a patch out soon).
In the meantime, here are a couple of workarounds:
Add export SPARK_SUBMIT_CLASSPATH="$FWDIR/conf" to spark-env.sh.
Delete (or rename) /root/ephemeral-hdfs/conf/log4j.properties.


r1wp621o9#

You can use setLogLevel:

val spark = SparkSession
      .builder()
      .config("spark.master", "local[1]")
      .appName("TestLog")
      .getOrCreate()

spark.sparkContext.setLogLevel("WARN")

gijlo24d10#

In Spark 2.0 you can also configure it dynamically for the application, using setLogLevel:

from pyspark.sql import SparkSession
spark = SparkSession.builder.\
    master('local').\
    appName('foo').\
    getOrCreate()
spark.sparkContext.setLogLevel('WARN')

In the pyspark console, a default spark session is already available.


hk8txs4811#

I used this on Amazon EC2 with 1 master and 2 slaves, running Spark 1.2.1.


# Step 1. Change config file on the master node
nano /root/ephemeral-hdfs/conf/log4j.properties

# Before
hadoop.root.logger=INFO,console

# After
hadoop.root.logger=WARN,console

# Step 2. Replicate this change to slaves
~/spark-ec2/copy-dir /root/ephemeral-hdfs/conf/

jk9hmnmh12#

Spark 1.6.2:

log4j = sc._jvm.org.apache.log4j
log4j.LogManager.getRootLogger().setLevel(log4j.Level.ERROR)

Spark 2.x:

spark.sparkContext.setLogLevel('WARN')

(spark here being your SparkSession)
Or the old way:
rename conf/log4j.properties.template to conf/log4j.properties in the Spark dir.
In log4j.properties, change log4j.rootCategory=INFO, console to log4j.rootCategory=WARN, console. The different log levels available are:
OFF (most specific, no logging)
FATAL (most specific, little data)
ERROR - log only in case of errors
WARN - log only in case of warnings or errors
INFO (default)
DEBUG - log detail steps (and all logs stated above)
TRACE (least specific, a lot of data)
ALL (least specific, all data)
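If you want to double-check which level actually took effect from a pyspark shell, one option (a sketch, assuming an active SparkContext named sc and the log4j 1.x shipped with these Spark versions) is to ask log4j through the JVM gateway:

# Inspect the effective root log level via the Py4J gateway.
log4j = sc._jvm.org.apache.log4j
root = log4j.LogManager.getRootLogger()
print(root.getEffectiveLevel().toString())  # e.g. WARN after the change above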


7fyelxc513#

Just do this command in the Spark directory:

cp conf/log4j.properties.template conf/log4j.properties

Edit log4j.properties:


# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

Replace the first line:

log4j.rootCategory=INFO, console

with:

log4j.rootCategory=WARN, console

Save and restart your shell. It works for me on OS X with Spark 1.1.0 and Spark 1.5.1.


sqserrrh14#

Edit your conf/log4j.properties file and change the following line:

log4j.rootCategory=INFO, console

to

log4j.rootCategory=ERROR, console

Another approach would be to:
fire up spark-shell and type the following:

import org.apache.log4j.Logger
import org.apache.log4j.Level

Logger.getLogger("org").setLevel(Level.OFF)
Logger.getLogger("akka").setLevel(Level.OFF)

After that you won't see any logs.


xdnvmnnf15#

Below are code snippets for Scala users:
Option 1:
A snippet you can add at the file level:

import org.apache.log4j.{Level, Logger}
Logger.getLogger("org").setLevel(Level.WARN)

Option 2:
Note: this will apply to every application that uses the Spark session.

import org.apache.spark.sql.SparkSession

private[this] implicit val spark = SparkSession.builder().master("local[*]").getOrCreate()

spark.sparkContext.setLogLevel("WARN")

Option 3:
Note: this configuration should be added to your log4j.properties (which could be something like /etc/spark/conf/log4j.properties, where Spark is installed, or a log4j.properties at your project folder level), since you are making the change at the module level. It will apply to all applications.

log4j.rootCategory=ERROR, console

IMHO, Option 1 is the sensible way to go, since it can be switched off at the file level.
