cloudera / cloudera-scripts-for-log4j
Scripts for addressing log4j zero day security issue
License: Apache License 2.0
IMO force-pushing to the main branch should not be allowed at all, for many reasons.
Also, in general it would be nice if Cloudera made changes via feature branches, as the community does.
Hi Team,
Thanks for the great work. This is not working for the HDFS file system on a non-secure HDFS cluster. Below is the error:
Running on '/usr/lib'
Backing up files to '/opt/cloudera/log4shell-backup'
Running on '/var/lib'
Backing up files to '/opt/cloudera/log4shell-backup'
Run successful
Found an HDFS namenode on this host, removing JNDI from HDFS tar.gz files
Using /etc/security/keytabs/hdfs.headless.keytab to access HDFS
./hdp_support_scripts/patch_hdfs_tgz.sh: line 43: klist: command not found
Using /etc/security/keytabs/hdfs.headless.keytab to access HDFS
./hdp_support_scripts/patch_hdfs_tgz.sh: line 43: klist: command not found
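A possible guard for this (a sketch under my own naming, not the repo's actual code) would skip the Kerberos steps entirely when the client tools are not installed, which is typically the case on non-secure clusters:

```shell
#!/bin/bash
# Sketch: skip kinit/klist when the Kerberos client tools are absent
# (as on many non-secure clusters). kerberos_guard is a hypothetical name.
kerberos_guard() {
  if ! command -v klist >/dev/null 2>&1; then
    echo "klist not found; assuming a non-secure cluster, skipping kinit"
    return 0
  fi
  echo "klist available; a secure cluster would run kinit here"
  # kinit -kt "$keytab" "$principal"   # only on secure clusters
}
kerberos_guard
```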
The code here is not working (cm_cdp_cdh_log4j_jndi_removal.sh line 139)
local backupdir=${2:-/opt/cloudera/log4shell-backup}
mkdir -p "$backupdir/$(dirname $tarfile)"
targetbackup="$backupdir/$tarfile.backup"
if [ ! -f "$targetbackup" ]; then
echo "Backing up to '$targetbackup'"
cp -f "$tarfile" "$targetbackup"
fi
You do a mkdir for a path and then don't use it.
I can't see any backups for the *.tar.gz files in the backupdir location.
There is something wrong with this block of code.
Also, the function always creates a new tar.gz, even when there is nothing to "patch", thus touching files that don't need to be altered; this is bad.
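One way to address the second point (a sketch with a hypothetical function name, not the repository's actual code) is to check for JndiLookup.class before repacking, so clean archives are left untouched:

```shell
#!/bin/bash
# Sketch: back up the archive, then skip the repack entirely when it does
# not contain JndiLookup.class. patch_tgz_guarded is a hypothetical name.
patch_tgz_guarded() {
  local tarfile="$1"
  local backupdir="${2:-/opt/cloudera/log4shell-backup}"
  mkdir -p "$backupdir/$(dirname "$tarfile")"
  local targetbackup="$backupdir/$tarfile.backup"
  if [ ! -f "$targetbackup" ]; then
    echo "Backing up to '$targetbackup'"
    cp -f "$tarfile" "$targetbackup"
  fi
  # Only rewrite archives that actually contain the vulnerable class
  if ! zgrep -q JndiLookup.class "$tarfile"; then
    echo "Nothing to patch in '$tarfile'"
    return 0
  fi
  # ... repack logic would go here ...
}
```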
Hello,
The script (by default) scans /opt/cloudera
for jar/tar/war files. This has the effect of also modifying files that are not part of Cloudera's stack, e.g. StreamSets managed by CM. Is there a way the script can check the top-level directories to ensure that they are Cloudera products before scanning?
Thanks.
https://github.com/cloudera/cloudera-scripts-for-log4j/blob/main/cm_cdp_cdh_log4j_jndi_removal.sh#L63
it's missing the -name parameter after -o
the line should read
for tarfile in $(find -L $targetdir -name "*.tar.gz" -o -name "*.tgz"); do
I fixed it locally and the find does not error any more
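Worth noting as a general find pitfall (this is a generic sketch, not the repo's current line): -o binds more loosely than the implied -a, so if further predicates such as -type f are ever added, the -name tests should be grouped explicitly:

```shell
#!/bin/bash
# Without the parentheses, "-type f -name '*.tar.gz' -o -name '*.tgz'"
# would apply -type f only to the first pattern.
targetdir="${1:-.}"
find -L "$targetdir" \( -name "*.tar.gz" -o -name "*.tgz" \) -type f
```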
Hi,
the backups go to /tmp, hardcoded.
On our systems /tmp is not big enough, so we change it to another directory.
Question: can you make the "/tmp" backup dir an option or a variable?
export TMPDIR=/opt/cloudera/tmp
changes we need to make:
local_path="/opt/cloudera/tmp/hdfs_tar_files.${current_time}"
hdfs_bc_path="/opt/cloudera/tmp/backup.${current_time}"
Regards, Hans
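A sketch of what such a variable could look like (LOG4J_TMPDIR is an invented name; the scripts don't currently read it), falling back to TMPDIR and then /tmp:

```shell
#!/bin/bash
# Hypothetical override chain: LOG4J_TMPDIR, then TMPDIR, then /tmp.
scratch_dir() {
  printf '%s\n' "${LOG4J_TMPDIR:-${TMPDIR:-/tmp}}"
}
current_time=$(date '+%Y%m%d%H%M%S')
local_path="$(scratch_dir)/hdfs_tar_files.${current_time}"
hdfs_bc_path="$(scratch_dir)/backup.${current_time}"
echo "$local_path"
```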
Observation from a test run of the log4j patch script on HDP:
It looks like, if an OS directory has symlinks to jars, they are not followed and scanned for JndiLookup.class files.
grep: /usr/hdp/current/phoenix-server/lib/hbase-testing-util.jar: No such file or directory
Test run Log:
[root@server]# ./run_log4j_patcher.sh hdp
INFO : Running HDP/HDF patcher script: ./hdp_log4j_jndi_removal.sh '/usr/hdp/current /usr/hdf/current /usr/lib /var/lib' /opt/cloudera/log4shell-backup
INFO : Log file: output_run_log4j_patcher.hjH6lO
.
Removing JNDI from jar files
Running on '/usr/hdp/current'
Backing up files to '/opt/cloudera/log4shell-backup'
grep: /usr/hdp/current/hadoop-client/lib/ojdbc6.jar: No such file or directory
grep: /usr/hdp/current/hbase-client/lib/ojdbc6.jar: No such file or directory
grep: /usr/hdp/current/hbase-master/lib/ojdbc6.jar: No such file or directory
grep: /usr/hdp/current/hbase-regionserver/lib/ojdbc6.jar: No such file or directory
grep: /usr/hdp/current/phoenix-client/lib/hbase-testing-util.jar: No such file or directory
grep: /usr/hdp/current/phoenix-server/lib/hbase-testing-util.jar: No such file or directory
Running on '/usr/hdf/current'
Backing up files to '/opt/cloudera/log4shell-backup'
Running on '/usr/lib'
Backing up files to '/opt/cloudera/log4shell-backup'
Running on '/var/lib'
Backing up files to '/opt/cloudera/log4shell-backup'
Run successful
INFO : Finished
[root@server]#
Symlink:
[user@server]$ ll /usr/hdp/current/phoenix-client/lib/hbase-testing-util.jar
lrwxrwxrwx 1 root root 52 May 22 2018 /usr/hdp/current/phoenix-client/lib/hbase-testing-util.jar -> /usr/hdp/2.6.4.0-91/hbase/lib/hbase-testing-util.jar
Any fix for this? Or am I missing something?
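One possible approach (a sketch, not the repo's code) is a scan that dereferences symlinks but tolerates dangling ones: find -L plus -type f only matches link targets that actually exist, and the warning noise from broken links can be suppressed:

```shell
#!/bin/bash
# -L dereferences symlinks; -type f then matches only resolvable targets,
# so dangling links like the ojdbc6.jar ones above are skipped instead of
# producing "No such file or directory" errors downstream.
scan_jars() {
  local targetdir="$1"
  find -L "$targetdir" -type f -name "*.jar" 2>/dev/null
}
```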
Please create some sort of summary.date.log
file in each scanned CDH/CDP /opt/cloudera/
directory which records all of the original SHA fingerprints of the affected JAR/TAR/WAR files and the SHA fingerprints of their modified versions. This information may be helpful, but most importantly it can be used as a marker to indicate that a particular node has been secured.
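A minimal sketch of generating such a marker file (the summary path, name format, and function name are assumptions, not anything the scripts do today):

```shell
#!/bin/bash
# Record SHA-256 fingerprints of candidate archives into a summary file;
# run once before patching and once after to capture both states.
write_summary() {
  local targetdir="$1"
  local summary="$2"
  find "$targetdir" -type f \( -name '*.jar' -o -name '*.war' -o -name '*.tar.gz' \) \
    -exec sha256sum {} \; >> "$summary"
}
```

Usage would be something like `write_summary /opt/cloudera "/opt/cloudera/summary.$(date +%F).log"` immediately before and after the patch run.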
./cm_cdp_cdh_log4j_jndi_removal.sh: line 114: unzip: command not found
The issue is in the cloudera-scripts-for-log4j/hdp_support_scripts/delete_jndi.sh file, where backupdir is not passed while patching tar.gz files.
Actual code snippet with the issue:
for tarfile in $(find -L $targetdir -name "*.tar.gz" -o -name "*.tgz"); do
  if [ -L "$tarfile" ]; then
    continue
  fi
  if zgrep -q JndiLookup.class $tarfile; then
    $patch_tgz $tarfile
  fi
done
Correct will be:
for tarfile in $(find -L $targetdir -name "*.tar.gz" -o -name "*.tgz"); do
  if [ -L "$tarfile" ]; then
    continue
  fi
  if zgrep -q JndiLookup.class $tarfile; then
    $patch_tgz $tarfile $backupdir
  fi
done
PS: The fix for this issue was identified by my fellow colleague Davinder Singh.
Options (cdh and cdp subcommands only):
-t <targetdir> Override target directory (default: distro-specific)
-b <backupdir> Override backup directory (default: /opt/cloudera/log4shell-backup)
I need your help. After applying the patches, two files from Cloudera Manager could not be updated, and Cloudera Manager did not start again after the patches were applied; it was necessary to restore both files from the backup ("/opt/cloudera/log4shell-backup"), after which Cloudera Manager started without problems.
Background
Versions: CDP Private Cloud Base= 7.1.5
Cloudera Manager version= 7.2.4
Not an issue per se, just a quick sanity check if anyone would like.
[root@host]# find / -type f -name '*.jar' -exec unzip -l {} \; | grep JndiLookup.class
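A variant of that sanity check which prints only the jars that still contain the class (requires unzip; list_vulnerable_jars is an invented name, not part of the repo):

```shell
#!/bin/bash
# Print the path of each jar under a directory that still contains
# JndiLookup.class, using unzip's listing rather than grepping raw bytes.
list_vulnerable_jars() {
  local dir="$1"
  find "$dir" -type f -name '*.jar' -exec sh -c \
    'unzip -l "$1" 2>/dev/null | grep -q JndiLookup.class && printf "%s\n" "$1"' _ {} \;
}
```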
cloudera-scripts-for-log4j/hdp_support_scripts/delete_jndi.sh
Lines 22 to 24 in 7e37650
Please check that each directory exists and print an info message if the directory does not exist.
We run the patch with ansible on all machines.
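A sketch of the requested check (check_dirs and the message text are my own, not the script's):

```shell
#!/bin/bash
# Skip (with an INFO message) any target directory that does not exist,
# instead of letting the scan error out on it.
check_dirs() {
  local dir
  for dir in "$@"; do
    if [ ! -d "$dir" ]; then
      echo "INFO : directory '$dir' does not exist, skipping"
      continue
    fi
    echo "INFO : scanning '$dir'"
    # ... run the patcher on "$dir" here ...
  done
}
```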
Added code in patch_hdfs_tgz.sh, under this line:
kinit -kt $keytab $principal
hdfs haadmin -getAllServiceState | grep active | grep hostname
active_nn=$?
if [ $active_nn -eq 1 ]; then
  exit 0
fi
Hi Team,
Thanks for the great work. The readme says these defaults work for CM/CDH 6 and CDP 7, and that a different set of directories will be used for HDP.
I'm wondering whether this script also works on CDH 5. Thanks.
Some temporary directories are created but not cleaned up, causing the /tmp
directory to saturate after a while. E.g.:
Does this script mitigate the following CVEs?
Those CVEs were found after CVE-2021-44228 was fixed.
The Cloudera script removes the vulnerable .class file; are the previously mentioned CVEs also mitigated? Can anyone confirm this?
If a war file contains the same file twice, the script will hang waiting for a response from the user. This becomes an issue especially when running via Ansible or other automation.
This file has this issue: /opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p3368.3632/lib/hbase-solr/lib/solr-4.10.3-cdh5.16.1.war
To see the issue, run these commands:
rm -r -f /tmp/unzip_target
mkdir /tmp/unzip_target
unzip -qq /opt/cloudera/parcels/CDH-5.16.1-1.cdh5.16.1.p3368.3632/lib/hbase-solr/lib/solr-4.10.3-cdh5.16.1.war -d /tmp/unzip_target
replace /tmp/unzip_target/WEB-INF/lib/jackson-core-asl-1.8.10.jar? [y]es, [n]o, [A]ll, [N]one, [r]ename:
A proposed fix for this is to add the overwrite option -o
to the unzip command. I can create a PR for this.
There are a number of bits of code like
for jarfile in $targetdir/**/*.jar; do
Thus symlinks are being followed and bad things happen, especially the backup being made under the link name and not the actual file.
Consider using
for jarfile in $(find ${targetdir} -type f -name "*.jar"); do
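For robustness against spaces in paths, a null-delimited variant of that suggestion (a generic sketch; scan_jar_files is an invented name):

```shell
#!/bin/bash
# Null-delimited iteration avoids the word-splitting problems that the
# $(find ...) form has with paths containing spaces.
scan_jar_files() {
  local targetdir="$1"
  find "$targetdir" -type f -name '*.jar' -print0 |
    while IFS= read -r -d '' jarfile; do
      printf 'would scan %s\n' "$jarfile"
    done
}
```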
There is no validation of backup files.
I have a case where the backup path filled up during the script run, and a number of jar files didn't get backed up but did get modified.
This means there is no rollback; very bad.
if [ ! -f "$targetbackup" ]; then
echo "Backing up to '$targetbackup'"
cp -f "$jarfile" "$targetbackup"
fi
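A sketch of a verified backup step (the cmp check, error handling, and function name are my additions, not the repo's code):

```shell
#!/bin/bash
# Copy the file and then byte-compare source and backup; refuse to report
# success if the backup is missing or truncated (e.g. a full filesystem).
backup_file() {
  local src="$1" dst="$2"
  if ! cp -f "$src" "$dst" 2>/dev/null || ! cmp -s "$src" "$dst"; then
    echo "ERROR: backup of '$src' failed; do not patch it" >&2
    return 1
  fi
  echo "Backed up '$src' to '$dst'"
}
```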
I think it's safer and more proper to iterate over the file names inside the JAR instead of scanning through its binary contents:
if unzip -l "$jarfile" | grep -q JndiLookup.class; then
It should also be faster than the current implementation, because the unzip utility can quickly find the class names in the ZIP central directory without having to read the entire file.