Upgrading a log4j Project to log4j2
0. Background
Although most Java projects use logback, plenty of open-source projects still use log4j, for example kafka and zookeeper. However, log4j has not been updated since 2012, and security scans flag it for vulnerabilities such as:
CVE-2020-9488
CVE-2019-17571
For companies with strict security requirements this is unacceptable, so the question becomes: how do we eliminate these CVEs without modifying the source code of these open-source middleware projects?
The log4j2 website https://logging.apache.org/log4j/2.x/ explains that log4j and log4j2 are compatible to a degree: both can serve as the backend behind the SLF4J API (through their respective SLF4J bindings), so in theory the switch can be made simply by swapping jar files.
1. Replacing the log4j jars with log4j2
Taking zookeeper and kafka as examples, delete these three jars from the lib directory: slf4j-log4j12, slf4j-api, and log4j.
Then add the log4j2 jars:
log4j-1.2-api-2.13.2.jar
log4j-api-2.13.2.jar
log4j-core-2.13.2.jar
log4j-slf4j-impl-2.13.2.jar
slf4j-api-1.7.30.jar
Pay particular attention to log4j-1.2-api: it must be included, because it is the bridge log4j2 provides for compatibility with log4j 1.x. A pure log4j2 setup would not need it, but kafka does not log exclusively through the SLF4J interface org.slf4j.Logger; in some places it uses classes from org.apache.log4j directly, and those calls have to be routed to log4j2 through the log4j-1.2-api bridge.
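A minimal sketch of the swap, assuming a Kafka installation whose jars live in $KAFKA_HOME/libs (ZooKeeper keeps them under lib/) and that the log4j2 jars have already been downloaded to a placeholder directory /path/to; adjust names, versions and paths to your environment:

# remove the log4j 1.x jar, its SLF4J binding and the old slf4j-api first,
# so the log4j-*.jar glob cannot catch the new log4j-1.2-api bridge
cd $KAFKA_HOME/libs
rm -f slf4j-log4j12-*.jar slf4j-api-*.jar log4j-*.jar
# copy in the log4j2 jars plus the log4j 1.x bridge and the SLF4J binding
cp /path/to/log4j-1.2-api-2.13.2.jar \
   /path/to/log4j-api-2.13.2.jar \
   /path/to/log4j-core-2.13.2.jar \
   /path/to/log4j-slf4j-impl-2.13.2.jar \
   /path/to/slf4j-api-1.7.30.jar .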
At this point kafka ran without problems, but zookeeper threw a ClassNotFoundException for org.apache.log4j.jmx.HierarchyDynamicMBean. Is the log4j-to-log4j2 swap really going to die here on zk?
A closer look at the zk source shows that this class is only needed when JMX is enabled, and that it is instantiated through the class loader rather than imported directly.
Furthermore, zk provides a switch for this JMX integration, which means that if we turn it off in zk's startup parameters the class is never loaded. Setting zookeeper.jmx.log4j.disable to true is all that is needed.
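If you want to verify the workaround before touching the startup script (section 3 makes it permanent), a minimal sketch, assuming your zkServer.sh honors the SERVER_JVMFLAGS environment variable as recent ZooKeeper releases do:

# skip the log4j MBean registration so HierarchyDynamicMBean is never loaded
export SERVER_JVMFLAGS="-Dzookeeper.jmx.log4j.disable=true"
bin/zkServer.sh restart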
2. Converting the log4j configuration
After the jar swap, the kafka and zookeeper processes both ran normally. But after they had been running for a while, a strange symptom appeared: the last line of a log file would frequently contain only half of a message, i.e. the output was being truncated...
The log4j2 website describes its compatibility with log4j 1.x:
Besides the commonly used APIs, the log4j 1.x configuration format is also supported, but the qualifier "experimental" is the catch. Experimental support means a log4j 1.x configuration cannot be fully translated into a log4j2 one, and how far the translation goes is hard to predict.
So the cleanest option is to convert the configuration file itself to the log4j2 format, after which the truncated-output problem never reappeared. For example, kafka's log4j.properties becomes the following:
status = error
dest = err
name = PropertiesConfig

property.filename = d:/kafka/log/server.log
property.stateChange.filename = d:/kafka/log/state-change.log
property.request.filename = d:/kafka/log/kafka-request.log
property.cleaner.filename = d:/kafka/log/log-cleaner.log
property.controller.filename = d:/kafka/log/controller.log
property.authorizer.filename = d:/kafka/log/kafka-authorizer.log
property.sasl.filename = d:/kafka/log/kafka-sasl.log

filter.threshold.type = ThresholdFilter
filter.threshold.level = debug

appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %m%n
appender.console.filter.threshold.type = ThresholdFilter
appender.console.filter.threshold.level = error

appender.rolling.type = RollingFile
appender.rolling.name = RollingFile
appender.rolling.fileName = ${filename}
appender.rolling.filePattern = d:/kafka/log/server-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d %p %C{1.} [%t] %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 30
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 10MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 30

appender.stateChange.type = RollingFile
appender.stateChange.name = stateChangeFile
appender.stateChange.fileName = ${stateChange.filename}
appender.stateChange.filePattern = d:/kafka/log/state-change-%d{yyyy-MM-dd}-%i.log.gz
appender.stateChange.layout.type = PatternLayout
appender.stateChange.layout.pattern = %d %p %C{1.} [%t] %m%n
appender.stateChange.policies.type = Policies
appender.stateChange.policies.time.type = TimeBasedTriggeringPolicy
appender.stateChange.policies.time.interval = 30
appender.stateChange.policies.time.modulate = true
appender.stateChange.policies.size.type = SizeBasedTriggeringPolicy
appender.stateChange.policies.size.size = 10MB
appender.stateChange.strategy.type = DefaultRolloverStrategy
appender.stateChange.strategy.max = 10

appender.request.type = RollingFile
appender.request.name = requestRollingFile
appender.request.fileName = ${request.filename}
appender.request.filePattern = d:/kafka/log/kafka-request-%d{yyyy-MM-dd}-%i.log.gz
appender.request.layout.type = PatternLayout
appender.request.layout.pattern = %d %p %C{1.} [%t] %m%n
appender.request.policies.type = Policies
appender.request.policies.time.type = TimeBasedTriggeringPolicy
appender.request.policies.time.interval = 30
appender.request.policies.time.modulate = true
appender.request.policies.size.type = SizeBasedTriggeringPolicy
appender.request.policies.size.size = 10MB
appender.request.strategy.type = DefaultRolloverStrategy
appender.request.strategy.max = 10

appender.cleaner.type = RollingFile
appender.cleaner.name = cleanerRollingFile
appender.cleaner.fileName = ${cleaner.filename}
appender.cleaner.filePattern = d:/kafka/log/log-cleaner-%d{yyyy-MM-dd}-%i.log.gz
appender.cleaner.layout.type = PatternLayout
appender.cleaner.layout.pattern = %d %p %C{1.} [%t] %m%n
appender.cleaner.policies.type = Policies
appender.cleaner.policies.time.type = TimeBasedTriggeringPolicy
appender.cleaner.policies.time.interval = 30
appender.cleaner.policies.time.modulate = true
appender.cleaner.policies.size.type = SizeBasedTriggeringPolicy
appender.cleaner.policies.size.size = 10MB
appender.cleaner.strategy.type = DefaultRolloverStrategy
appender.cleaner.strategy.max = 10

appender.controller.type = RollingFile
appender.controller.name = controllerRollingFile
appender.controller.fileName = ${controller.filename}
appender.controller.filePattern = d:/kafka/log/controller-%d{yyyy-MM-dd}-%i.log.gz
appender.controller.layout.type = PatternLayout
appender.controller.layout.pattern = %d %p %C{1.} [%t] %m%n
appender.controller.policies.type = Policies
appender.controller.policies.time.type = TimeBasedTriggeringPolicy
appender.controller.policies.time.interval = 30
appender.controller.policies.time.modulate = true
appender.controller.policies.size.type = SizeBasedTriggeringPolicy
appender.controller.policies.size.size = 10MB
appender.controller.strategy.type = DefaultRolloverStrategy
appender.controller.strategy.max = 10

appender.authorizer.type = RollingFile
appender.authorizer.name = authorizerRollingFile
appender.authorizer.fileName = ${authorizer.filename}
appender.authorizer.filePattern = d:/kafka/log/kafka-authorizer-%d{yyyy-MM-dd}-%i.log.gz
appender.authorizer.layout.type = PatternLayout
appender.authorizer.layout.pattern = %d %p %C{1.} [%t] %m%n
appender.authorizer.policies.type = Policies
appender.authorizer.policies.time.type = TimeBasedTriggeringPolicy
appender.authorizer.policies.time.interval = 30
appender.authorizer.policies.time.modulate = true
appender.authorizer.policies.size.type = SizeBasedTriggeringPolicy
appender.authorizer.policies.size.size = 10MB
appender.authorizer.strategy.type = DefaultRolloverStrategy
appender.authorizer.strategy.max = 5

appender.sasl.type = RollingFile
appender.sasl.name = saslRollingFile
appender.sasl.fileName = ${sasl.filename}
appender.sasl.filePattern = d:/kafka/log/kafka-sasl-%d{yyyy-MM-dd}-%i.log.gz
appender.sasl.layout.type = PatternLayout
appender.sasl.layout.pattern = %d %p %C{1.} [%t] %m%n
appender.sasl.policies.type = Policies
appender.sasl.policies.time.type = TimeBasedTriggeringPolicy
appender.sasl.policies.time.interval = 30
appender.sasl.policies.time.modulate = true
appender.sasl.policies.size.type = SizeBasedTriggeringPolicy
appender.sasl.policies.size.size = 10MB
appender.sasl.strategy.type = DefaultRolloverStrategy
appender.sasl.strategy.max = 5

logger.apachekafka.name = org.apache.kafka
logger.apachekafka.level = info
logger.apachekafka.additivity = true
logger.apachekafka.appenderRef.rolling.ref = RollingFile

logger.request.name = kafka.network
logger.request.level = info
logger.request.additivity = true
logger.request.appenderRef.rolling.ref = requestRollingFile

logger.controller.name = kafka.controller
logger.controller.level = TRACE
logger.controller.additivity = true
logger.controller.appenderRef.rolling.ref = controllerRollingFile

logger.cleaner.name = kafka.log.LogCleaner
logger.cleaner.level = info
logger.cleaner.additivity = true
logger.cleaner.appenderRef.rolling.ref = cleanerRollingFile

logger.state.name = state.change.logger
logger.state.level = info
logger.state.additivity = true
logger.state.appenderRef.rolling.ref = stateChangeFile

logger.authorizer.name = kafka.authorizer.logger
logger.authorizer.level = info
logger.authorizer.additivity = true
logger.authorizer.appenderRef.rolling.ref = authorizerRollingFile

logger.kafka.name = kafka
logger.kafka.level = info
logger.kafka.additivity = true
logger.kafka.appenderRef.rolling.ref = RollingFile

rootLogger.level = info
rootLogger.appenderRef.rolling.ref = STDOUT
3. Modifying the startup scripts
log4j and log4j2 load their configuration file differently (log4j 1.x reads the log4j.configuration system property, while log4j2 reads log4j.configurationFile), so the kafka and zk startup scripts have to be updated.
For kafka, change the logging configuration property in kafka-server-start.sh, as sketched below.
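A minimal sketch of that change, assuming a stock kafka-server-start.sh in which KAFKA_LOG4J_OPTS defaults to the config/log4j.properties shipped with the broker; only the system property name needs to change:

# before (log4j 1.x):
# export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
# after (log4j2):
export KAFKA_LOG4J_OPTS="-Dlog4j.configurationFile=file:$base_dir/../config/log4j.properties"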
For zk, modify zkServer.sh and add the following before $QuorumPeerMain:
-Dlog4j.configurationFile=${zkpath}/zookeeper/conf/log4j.properties -Dzookeeper.jmx.log4j.disable=true $QuorumPeerMain