
elevate's People

Contributors

atoomic, bigio, cpholloway, ggrigon, godismyjudge95, prilr, sloanebernstein, toddr, troglodyne, xsawyerx


elevate's Issues

Prevent echoing of service output to systemd journal

#86 removed the tee(1) invocation that had been catching all output. As a result, the systemd service now sends all of its output to the journal, which is redundant and not generally useful.

Consider detecting whether stdout/stderr go to a TTY.
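
For illustration, a minimal Perl sketch of such a TTY check; the maybe_echo() helper is hypothetical, not part of elevate:

#!/usr/bin/perl
# Minimal sketch: only echo output when attached to a terminal, so the
# systemd-invoked service does not duplicate everything into the journal.
use strict;
use warnings;

# -t tests whether a filehandle is attached to a TTY.
my $interactive = ( -t STDOUT ) && ( -t STDERR );

sub maybe_echo {    # hypothetical helper
    my ($line) = @_;
    print $line, "\n" if $interactive;
    # The line would still go to /var/log/elevate-cpanel.log either way.
}

maybe_echo("This only appears on an interactive terminal.");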

DimeNOC repo

I have a bare-metal server at DimeNOC, and when I ran the pre-check script I got these errors:

  • 28-11:12:37 (2971) [ERROR] 356 package(s) installed from unsupported YUM repo 'base' from /etc/yum.repos.d/DimeNOC.repo
  • 28-11:12:37 (2971) [ERROR] 206 package(s) installed from unsupported YUM repo 'updates' from /etc/yum.repos.d/DimeNOC.repo
  • 28-11:12:37 (2971) [ERROR] 1 package(s) installed from unsupported YUM repo 'extras' from /etc/yum.repos.d/DimeNOC.repo
  • 28-11:12:37 (2971) [ERROR] 22 package(s) installed from unsupported YUM repo 'DimeNOC' from /etc/yum.repos.d/DimeNOC.repo

False blocker for eth devices?

It's not clear why I got this warning while running a check.

$>ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 67.205.177.68  netmask 255.255.240.0  broadcast 67.205.191.255
        inet6 fe80::1425:59ff:fe15:45a8  prefixlen 64  scopeid 0x20<link>
        ether 16:25:59:15:45:a8  txqueuelen 1000  (Ethernet)
        RX packets 6628  bytes 39156672 (37.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5206  bytes 474067 (462.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 381  bytes 58682 (57.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 381  bytes 58682 (57.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
* 25-22:53:30 (2357) [WARN] *** Elevation Blocker detected: ***
Your machine has multiple network interface cards (NICs) using kernel-names (ethX).
Since the upgrade process cannot guarantee their stability after upgrade, you cannot upgrade.

Please provide those interfaces new names before continuing the update.

csf non-functional after elevate

After upgrade, csf appears to be non-functional.

$>perl /usr/local/csf/bin/csftest.pl            
Testing ip_tables/iptable_filter...OK
Testing ipt_LOG...FAILED [FATAL Error: iptables v1.8.4 (nf_tables): Chain 'LOG' does not exist] - Required for csf to function
Testing ipt_multiport/xt_multiport...FAILED [FATAL Error: iptables v1.8.4 (nf_tables): Chain 'LOG' does not exist] - Required for csf to function
Testing ipt_REJECT...OK
Testing ipt_state/xt_state...FAILED [FATAL Error: iptables v1.8.4 (nf_tables): Couldn't load match `state':No such file or directory] - Required for csf to function
Testing ipt_limit/xt_limit...FAILED [FATAL Error: iptables v1.8.4 (nf_tables): Couldn't load match `limit':No such file or directory] - Required for csf to function
Testing ipt_recent...FAILED [Error: iptables v1.8.4 (nf_tables): Couldn't load match `recent':No such file or directory] - Required for PORTFLOOD and PORTKNOCKING features
Testing xt_connlimit...FAILED [Error: iptables v1.8.4 (nf_tables): Couldn't load match `connlimit':No such file or directory] - Required for CONNLIMIT feature
Testing ipt_owner/xt_owner...FAILED [Error: iptables v1.8.4 (nf_tables): Chain 'LOG' does not exist] - Required for SMTP_BLOCK and UID/GID blocking features
Testing iptable_nat/ipt_REDIRECT...OK
Testing iptable_nat/ipt_DNAT...FAILED [Error: iptables v1.8.4 (nf_tables): unknown option "--to-destination"] - Required for csf.redirect feature

RESULT: csf will not function on this server due to FATAL errors from missing modules [4]

Sometimes terminal output reports elevate is canceled while performing a reboot.

I've been seeing this occasionally, and while the message is a lot less scary after e628c36, it could still confuse the user.

* 08-19:17:38 (1056) [INFO] ******************************************************************************************
* 08-19:17:38 (1057) [INFO] *
* 08-19:17:38 (1058) [INFO] * Rebooting into stage 3 of 5
* 08-19:17:38 (1059) [INFO] *
* 08-19:17:38 (1060) [INFO] ******************************************************************************************
* 08-19:17:38 (2848) [INFO] Running: /usr/sbin/reboot now

Job for elevate-cpanel.service canceled.

Connection to 10.2.67.228 closed by remote host.
Connection to 10.2.67.228 closed.

Specifically, the "Job for elevate-cpanel.service canceled." output only happens sometimes. It seems like a race condition.

Restore colorized screen output

#86 removed colorized screen output as a side effect of converting logging to a file using native log4perl mechanisms. Colorized output is useful for users who may not fully understand how to read the existing output, so it is desirable to restore this feature. Doing this for direct invocation is easy enough, so output during --check and stage 1 of --start is trivial to restore, but all other output is not.

Instead of directly tailing the log file, one way of solving this might be to have the script invoked as a service use a socket or named pipe to relay the logging information to the script invoked with --log or otherwise listening to the service.
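
A minimal sketch of the named-pipe approach, assuming a single writer (the service) and a single reader (the --log invocation); the FIFO path is illustrative:

#!/usr/bin/perl
# Minimal sketch: relay log lines from the service to a --log listener
# through a named pipe instead of tailing the log file.
use strict;
use warnings;
use POSIX qw(mkfifo);
use Fcntl qw(O_WRONLY O_NONBLOCK);

my $fifo = '/var/run/elevate-cpanel.fifo';    # hypothetical path

# Writer side (the systemd-invoked service). O_NONBLOCK makes the open
# fail instead of hanging when nothing has the FIFO open for reading.
sub relay_line {
    my ($line) = @_;
    mkfifo( $fifo, 0600 ) unless -p $fifo;
    sysopen( my $fh, $fifo, O_WRONLY | O_NONBLOCK ) or return;
    print {$fh} $line, "\n";
    close $fh;
}

# Reader side (the --log invocation): blocks until the writer connects,
# then receives lines as they are logged.
sub follow_service_output {
    open my $fh, '<', $fifo or die "cannot open $fifo: $!";
    while ( my $line = <$fh> ) {
        print $line;    # apply ANSI coloring here before printing
    }
}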

Confusing output when multiple leapp errors detected.

Notice the multiple /scripts/elevate-cpanel --continue sections when leapp failed, as seen below:

* 02-19:29:03 (449) [ERROR] The elevation process failed during stage 3.

You can continue the process after fixing the errors by running:

    /usr/local/cpanel/scripts/elevate-cpanel --continue

You can check the error log by running:

    /usr/local/cpanel/scripts/elevate-cpanel

Last Error:

The 'leapp upgrade' process failed.

Please investigate, resolve then re-run the following command to continue the update:

    /scripts/elevate-cpanel --continue


You can read the full leapp report at: /var/log/leapp/leapp-report.txt


* 02-19:29:03 (350) [FATAL] The 'leapp upgrade' process failed.

Please investigate, resolve then re-run the following command to continue the update:

    /scripts/elevate-cpanel --continue


You can read the full leapp report at: /var/log/leapp/leapp-report.txt
The 'leapp upgrade' process failed.

Please investigate, resolve then re-run the following command to continue the update:

    /scripts/elevate-cpanel --continue


You can read the full leapp report at: /var/log/leapp/leapp-report.txt

A YUM/DNF repository defined multiple times

Noticed this issue after updating a VM from MySQL 5.7 to MySQL 8.0:

====> * tmp_actor_to_satisfy_sanity_checks
        The actor does NOTHING but satisfy static sanity checks
====> * check_initramfs_tasks
        Inhibit the upgrade if conflicting "initramfs" tasks are detected
==> Processing phase `Reports`
====> * verify_check_results
        Check all dialogs and notify that user needs to make some choices.
====> * verify_check_results
        Check all generated results messages and notify user about them.

============================================================
                     UPGRADE INHIBITED
============================================================

Upgrade has been inhibited due to the following problems:
    1. Inhibitor: A YUM/DNF repository defined multiple times
Consult the pre-upgrade report for details and possible remediation.

============================================================
                     UPGRADE INHIBITED
============================================================


Debug output written to /var/log/leapp/leapp-upgrade.log

============================================================
                           REPORT
============================================================

A report has been generated at /var/log/leapp/leapp-report.json
A report has been generated at /var/log/leapp/leapp-report.txt

============================================================

Extract from the leapp report:

Risk Factor: medium (inhibitor)
Title: A YUM/DNF repository defined multiple times
Summary: The following repositories are defined multiple times inside the "upgrade" container:
    - Mysql-tools-preview
    - Mysql-tools-community
    - Mysql-connectors-community
Remediation: [hint] Remove the duplicate repository definitions or change repoids of conflicting repositories on the system to prevent the conflict.
Key: b8e0ad498e1d4a07ac560407ed814cf00a04b297
[root@elevation-1 ~]# grep -C5 -r Mysql-tools-preview /etc/yum.repos.d/
/etc/yum.repos.d/Mysql57.repo-baseurl=https://repo.mysql.com/yum/mysql-5.7-community/el/7/$basearch/
/etc/yum.repos.d/Mysql57.repo-enabled=0
/etc/yum.repos.d/Mysql57.repo-gpgcheck=1
/etc/yum.repos.d/Mysql57.repo-gpgkey=https://repo.mysql.com/RPM-GPG-KEY-MySQL-2022
/etc/yum.repos.d/Mysql57.repo-       https://repo.mysql.com/RPM-GPG-KEY-mysql
/etc/yum.repos.d/Mysql57.repo:[Mysql-tools-preview]
/etc/yum.repos.d/Mysql57.repo-name=MySQL Tools Preview
/etc/yum.repos.d/Mysql57.repo-baseurl=https://repo.mysql.com/yum/mysql-tools-preview/el/7/$basearch/
/etc/yum.repos.d/Mysql57.repo-enabled=0
/etc/yum.repos.d/Mysql57.repo-gpgcheck=1
/etc/yum.repos.d/Mysql57.repo-gpgkey=https://repo.mysql.com/RPM-GPG-KEY-MySQL-2022
--
/etc/yum.repos.d/Mysql80.repo-baseurl=https://repo.mysql.com/yum/mysql-8.0-community/el/7/$basearch/
/etc/yum.repos.d/Mysql80.repo-enabled=1
/etc/yum.repos.d/Mysql80.repo-gpgcheck=1
/etc/yum.repos.d/Mysql80.repo-gpgkey=https://repo.mysql.com/RPM-GPG-KEY-MySQL-2022
/etc/yum.repos.d/Mysql80.repo-       https://repo.mysql.com/RPM-GPG-KEY-mysql
/etc/yum.repos.d/Mysql80.repo:[Mysql-tools-preview]
/etc/yum.repos.d/Mysql80.repo-name=MySQL Tools Preview
/etc/yum.repos.d/Mysql80.repo-baseurl=https://repo.mysql.com/yum/mysql-tools-preview/el/7/$basearch/
/etc/yum.repos.d/Mysql80.repo-enabled=0
/etc/yum.repos.d/Mysql80.repo-gpgcheck=1
/etc/yum.repos.d/Mysql80.repo-gpgkey=https://repo.mysql.com/RPM-GPG-KEY-MySQL-2022
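
Not part of elevate, but a minimal Perl sketch like the following can locate repo IDs defined in more than one .repo file, which is what leapp is objecting to here:

#!/usr/bin/perl
# Minimal sketch: report YUM repo IDs defined in more than one .repo file.
use strict;
use warnings;

my %seen;    # repo id => list of files defining it
for my $file ( glob '/etc/yum.repos.d/*.repo' ) {
    open my $fh, '<', $file or next;
    while (<$fh>) {
        push @{ $seen{$1} }, $file if /^\s*\[([^\]]+)\]/;
    }
}
for my $id ( sort keys %seen ) {
    my @files = @{ $seen{$id} };
    printf "%s defined in: %s\n", $id, join( ', ', @files ) if @files > 1;
}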

If ImunifyAV+ is installed on the server, elevate installs Imunify360

This is specific to servers where ImunifyAV+ has been activated. The free ImunifyAV package and the Imunify360 package both end up in the same state post elevate, but if the server has ImunifyAV+, elevate will install Imunify360 and the plugin page will show an error.

Munin installs lead to broken cpsrvd post Elevate

When I install Munin and then run elevate, I can no longer load WHM (and presumably other cpsrvd pages). I am seeing this error in /usr/local/cpanel/logs/error_log:

/usr/local/cpanel/cpsrvd: error while loading shared libraries: libwrap.so.0: cannot open shared object file: No such file or directory

For reference, MySQL 8 is the installed database, if that matters.

Running elevation on an already-upgraded server produces unclear output

After running elevate-cpanel --start on an already successfully updated system, the display output is unclear.

We should say that the elevation process is over, and we should not display the log, as happens currently.

[root@elevation-1 ~]# /scripts/elevate-cpanel --start
Your elevation update is already in progress (stage 6).
You can monitor the current update by running:

    /scripts/elevate-cpanel

* 14-18:10:30 (272) [INFO] # Monitoring existing upgrade (stage=6) process via: tail -f /var/log/elevate-cpanel.log
Running transaction
  Preparing        :                                                        1/1
  Upgrading        : cloudlinux-linksafe-1-1.24.el8.noarch                  1/6
  Running scriptlet: cloudlinux-linksafe-1-1.24.el8.noarch                  1/6
  Upgrading        : alt-python38-setuptools-wheel-54.1.2-1.el8.noarch      2/6
  Upgrading        : alt-python38-pip-wheel-20.2.4-1.el8.noarch             3/6
  Running scriptlet: cloudlinux-linksafe-1-1.23.el7.noarch                  4/6
  Cleanup          : cloudlinux-linksafe-1-1.23.el7.noarch                  4/6
  Running scriptlet: cloudlinux-linksafe-1-1.23.el7.noarch                  4/6
  Cleanup          : alt-python38-setuptools-wheel-54.1.2-1.el7.noarch      5/6
  Cleanup          : alt-python38-pip-wheel-20.2.4-1.el7.noarch             6/6
  Running scriptlet: cloudlinux-linksafe-1-1.24.el8.noarch                  6/6
  Verifying        : alt-python38-pip-wheel-20.2.4-1.el8.noarch             1/6
  Verifying        : alt-python38-pip-wheel-20.2.4-1.el7.noarch             2/6
  Verifying        : alt-python38-setuptools-wheel-54.1.2-1.el8.noarch      3/6
  Verifying        : alt-python38-setuptools-wheel-54.1.2-1.el7.noarch      4/6
  Verifying        : cloudlinux-linksafe-1-1.24.el8.noarch                  5/6
  Verifying        : cloudlinux-linksafe-1-1.23.el7.noarch                  6/6

Upgraded:
  alt-python38-pip-wheel-20.2.4-1.el8.noarch
  alt-python38-setuptools-wheel-54.1.2-1.el8.noarch
  cloudlinux-linksafe-1-1.24.el8.noarch

Complete!

* 14-16:55:50 (1076) [INFO] ******************************************************************************************
* 14-16:55:50 (1077) [INFO] *
* 14-16:55:50 (1078) [INFO] * Great SUCCESS! Your upgrade to AlmaLinux 8 is complete.
* 14-16:55:50 (1079) [INFO] *
* 14-16:55:50 (1080) [INFO] ******************************************************************************************
* 14-16:55:51 (472) [ERROR] Sending notification: Successfully update to AlmaLinux 8
* 14-16:55:51 (473) [ERROR] The cPanel & WHM server has completed the elevation process from CentOS 7 to AlmaLinux 8.
* 14-16:55:51 (1076) [INFO] ******************************************************************************************
* 14-16:55:51 (1077) [INFO] *
* 14-16:55:51 (1078) [INFO] * Doing final reboot
* 14-16:55:51 (1079) [INFO] *
* 14-16:55:51 (1080) [INFO] ******************************************************************************************
* 14-16:55:51 (2902) [INFO] Running: /usr/sbin/reboot now

ImunifyAV+ deactivated in WHM plugin post elevate.

Server is licensed correctly and displayed ImunifyAV+ prior to elevate. Security Advisor and WHM Marketplace also behave as if it is licensed and installed post elevate, but as shown in this screenshot, the WHM plugin is behaving as if it hasn't been licensed and installed (Reputation Management is provided by ImunifyAV+):

[Screenshot, 2022-03-18: the WHM plugin showing ImunifyAV+ as not licensed or installed]

Rebooting log messages no longer appearing

* 08-19:17:38 (1056) [INFO] ******************************************************************************************
* 08-19:17:38 (1057) [INFO] *
* 08-19:17:38 (1058) [INFO] * Rebooting into stage 3 of 5
* 08-19:17:38 (1059) [INFO] *
* 08-19:17:38 (1060) [INFO] ******************************************************************************************
* 08-19:17:38 (2848) [INFO] Running: /usr/sbin/reboot now

This log message no longer appears when running elevate, so the user's terminal session ends at reboot without telling them why that happened.

/elevate-cpanel --check should abort earlier when run on a C8-like server

When detecting a major version >= 8, we should abort earlier and make the error fatal rather than a warning.
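
A minimal sketch of the suggested early abort, assuming /etc/redhat-release as the detection source (elevate's real distro detection may differ):

#!/usr/bin/perl
# Minimal sketch: refuse to continue a --check on a distro that is already
# at major version 8 or later.
use strict;
use warnings;

sub abort_if_already_elevated {
    open my $fh, '<', '/etc/redhat-release' or return;
    my $release = <$fh> // '';
    if ( $release =~ /release\s+(\d+)/ && $1 >= 8 ) {
        die "Fatal: this system is already at major version $1; nothing to elevate.\n";
    }
    return;
}

abort_if_already_elevated();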

╰─> cat /etc/redhat-release
AlmaLinux release 8.6 (Sky Tiger)
╰─> /scripts/elevate-cpanel --check
info [elevate-cpanel] Successfully verified signature for cpanel (key types: release, development).
* 17-17:14:05 (2465) [WARN] *** Elevation Blocker detected: ***
This installation of cPanel (11.105.9999.77) does not appear to be up to date. Please upgrade cPanel to a most recent version.

* 17-17:14:05 (2465) [WARN] *** Elevation Blocker detected: ***
This script is only designed to upgrade CentOS 7 to AlmaLinux 8

* 17-17:14:05 (2465) [WARN] *** Elevation Blocker detected: ***
You need to run CentOS 7.9 and later to upgrade AlmaLinux 8. You are currently using AlmaLinux v8.6.0

* 17-17:14:07 (2971) [ERROR] 509 package(s) installed from unsupported YUM repo 'baseos' from /etc/yum.repos.d/almalinux.repo
* 17-17:14:07 (2971) [ERROR] 331 package(s) installed from unsupported YUM repo 'appstream' from /etc/yum.repos.d/almalinux.repo
* 17-17:14:07 (2981) [WARN] Unsupported YUM repo enabled 'extras' without packages installed from /etc/yum.repos.d/almalinux.repo
* 17-17:14:07 (2981) [WARN] Unsupported YUM repo enabled 'epel-modular' without packages installed from /etc/yum.repos.d/epel-modular.repo
* 17-17:14:07 (2465) [WARN] *** Elevation Blocker detected: ***
One or more enabled YUM repo are currently unsupported.
You should disable these repositories and remove packages installed from them
before continuing the update.

Consider reporting this limitation to https://github.com/cpanel/elevate/issues


* 17-17:14:14 (3042) [ERROR] Could not read directory '/var/lib/yum': No such file or directory
* 17-17:14:14 (2465) [WARN] *** Elevation Blocker detected: ***
yum is not stable

* 17-17:14:15 (2879) [INFO] Checking if your system is up to date:
* 17-17:14:15 (3300) [INFO] Running: /usr/bin/yum clean all
* 17-17:14:15 (3301) [INFO]
* 17-17:14:15 (3311) [INFO] 113 files removed
* 17-17:14:15 (3321) [INFO]
* 17-17:14:15 (3300) [INFO] Running: /usr/bin/yum check-update
* 17-17:14:15 (3301) [INFO]

EA4 blocker isn't working.

After the patch on the 5th, the EA4 blocker is no longer blocking elevate at all. If I check my current EA4 config, I get a list of all the incompatible packages that are currently installed:

[root@10-2-64-21 ~]# /usr/local/bin/ea_current_to_profile --target-os=AlmaLinux_8 | head -20
The following packages are not available on AlmaLinux_8 and have been removed from the profile
    ea-php54
    ea-php54-libc-client
    ea-php54-pear
    ea-php54-php-bcmath
    ea-php54-php-calendar
    ea-php54-php-cli
    ea-php54-php-common
    ea-php54-php-curl
    ea-php54-php-devel
    ea-php54-php-fpm
    ea-php54-php-ftp
    ea-php54-php-gd
    ea-php54-php-iconv
    ea-php54-php-imap
    ea-php54-php-litespeed
    ea-php54-php-mbstring
    ea-php54-php-mysqlnd
    ea-php54-php-pdo
    ea-php54-php-posix

But when I try to run elevate, I do not get any warnings about these packages and elevate is not blocked:

* 06-15:14:49 (2753) [INFO] Checking EasyApache profile compatibility with Almalinux 8.
* 06-15:14:49 (1414) [INFO] Running: /usr/local/bin/ea_current_to_profile --target-os=AlmaLinux_8
* 06-15:14:50 (1435) [INFO] Backed up EA4 profile to /etc/cpanel/ea4/profiles/custom/current_state_at_2022-04-06_15:14:50_modified_for_AlmaLinux_8.json
* 06-15:14:50 (1172) [INFO] ******************************************************************************************
* 06-15:14:50 (1173) [INFO] *
* 06-15:14:50 (1174) [INFO] * /!\ Warning: You are about to convert your cPanel & WHM CentOS 7 to Almalinux 8 server.
* 06-15:14:50 (1175) [INFO] *
* 06-15:14:50 (1176) [INFO] ******************************************************************************************

Terminal output still sometimes reporting elevate is canceled.

The issue reported in #46 and addressed in #67 is still happening:

* 29-18:40:42 (1130) [INFO] ******************************************************************************************
* 29-18:40:42 (1131) [INFO] *
* 29-18:40:42 (1132) [INFO] * Starting stage 1 of 5: Installing elevate-cpanel.service service
* 29-18:40:42 (1133) [INFO] *
* 29-18:40:42 (1134) [INFO] ******************************************************************************************
* 29-18:40:42 (2981) [INFO] Installing service elevate-cpanel.service which will upgrade the server to AlmaLinux 8
* 29-18:40:42 (3099) [INFO] Running: /usr/bin/systemctl daemon-reload


* 29-18:40:43 (3099) [INFO] Running: /usr/bin/systemctl enable elevate-cpanel.service

Created symlink from /etc/systemd/system/multi-user.target.wants/elevate-cpanel.service to /etc/systemd/system/elevate-cpanel.service.

* 29-18:40:43 (3014) [INFO] Starting service elevate-cpanel.service
* 29-18:40:43 (280) [INFO] # Monitoring existing upgrade (stage=2) process via: tail -f /var/log/elevate-cpanel.log
* 29-18:40:43 (3099) [INFO] Running: /usr/bin/systemctl start elevate-cpanel.service

Job for elevate-cpanel.service canceled.
Connection to 10.2.70.15 closed by remote host.
Connection to 10.2.70.15 closed.

What is also noticeably different is that it no longer reports that it is rebooting into stage 3. It just starts the reboot, sometimes after reporting that the elevate-cpanel.service job is canceled.

ea conversion

We need to pass some arguments: the ea_current_to_profile invocation should look like this: ea_current_to_profile --target-os=AlmaLinux_8

The nixstatsagent service can't start post elevate

Following change e29ea2b, the nixstatsagent is seemingly installed after elevate. However, it isn't able to start:

# systemctl status nixstatsagent
● nixstatsagent.service - Nixstatsagent
   Loaded: loaded (/etc/systemd/system/nixstatsagent.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2022-03-08 17:30:17 UTC; 3s ago
  Process: 4750 ExecStart=/usr/local/bin/nixstatsagent (code=exited, status=203/EXEC)
 Main PID: 4750 (code=exited, status=203/EXEC)

Mar 08 17:30:17 10-2-66-6.cprapid.com systemd[1]: Started Nixstatsagent.
Mar 08 17:30:17 10-2-66-6.cprapid.com systemd[1]: nixstatsagent.service: Main process exited, code=exited, status=203/EXEC
Mar 08 17:30:17 10-2-66-6.cprapid.com systemd[1]: nixstatsagent.service: Failed with result 'exit-code'.

BLOCKER: interfaces named eth1..9 should block with a message

Unsupported network configuration error from leapp

We can expect this to be an issue for customers using multiple interfaces; see https://access.redhat.com/solutions/4067471 for more information.

* 02-19:29:03 (449) [ERROR] The elevation process failed during stage 3.

You can continue the process after fixing the errors by running:

    /usr/local/cpanel/scripts/elevate-cpanel --continue

You can check the error log by running:

    /usr/local/cpanel/scripts/elevate-cpanel

Last Error:

The 'leapp upgrade' process failed.

Please investigate, resolve then re-run the following command to continue the update:

    /scripts/elevate-cpanel --continue


You can read the full leapp report at: /var/log/leapp/leapp-report.txt


* 02-19:29:03 (350) [FATAL] The 'leapp upgrade' process failed.

Please investigate, resolve then re-run the following command to continue the update:

    /scripts/elevate-cpanel --continue


You can read the full leapp report at: /var/log/leapp/leapp-report.txt
The 'leapp upgrade' process failed.

Please investigate, resolve then re-run the following command to continue the update:

    /scripts/elevate-cpanel --continue


You can read the full leapp report at: /var/log/leapp/leapp-report.txt
Risk Factor: high (inhibitor)
Title: Unsupported network configuration
Summary: Detected multiple physical network interfaces where one or more use kernel naming (e.g. eth0). Upgrade process can not continue because stability of names can not be guaranteed. Please read the article at https://access.redhat.com/solutions/4067471 for more information.
Remediation: [hint] Rename all ethX network interfaces following the attached KB solution article.
Key: d3050d265759a79ce895e64f45e9c56e49b3a953
----------------------------------------
root@centos-s-2vcpu-2gb-sfo3-01 [~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 137.184.224.215  netmask 255.255.240.0  broadcast 137.184.239.255
        inet6 fe80::2414:5fff:fef6:80e6  prefixlen 64  scopeid 0x20<link>
        ether 26:14:5f:f6:80:e6  txqueuelen 1000  (Ethernet)
        RX packets 15732  bytes 63160762 (60.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11219  bytes 986161 (963.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.124.0.2  netmask 255.255.240.0  broadcast 10.124.15.255
        inet6 fe80::447d:8bff:fe3c:39b4  prefixlen 64  scopeid 0x20<link>
        ether 46:7d:8b:3c:39:b4  txqueuelen 1000  (Ethernet)
        RX packets 2  bytes 140 (140.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 13  bytes 838 (838.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2050  bytes 214419 (209.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2050  bytes 214419 (209.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
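
For illustration, a minimal Perl sketch of how such a blocker could detect kernel-named interfaces like the eth0/eth1 pair shown above; elevate's actual detection may differ:

#!/usr/bin/perl
# Minimal sketch: flag kernel-named (ethX) network interfaces, since leapp
# cannot guarantee those names remain stable across the upgrade.
use strict;
use warnings;

my @eth = grep { m{/eth\d+$} } glob '/sys/class/net/*';
if ( @eth > 1 ) {
    my @names = map { ( split '/', $_ )[-1] } @eth;
    die "Multiple kernel-named NICs detected (@names); rename them before elevating.\n";
}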

Support for digital ocean droplet agent

Consider adding support for the droplet-agent repo.

[email protected] [/usr/local/lsws/conf]# /scripts/elevate-cpanel --start
* 02-19:24:39 (2445) [ERROR] 1 package(s) installed from unsupported YUM repo 'droplet-agent' from /etc/yum.repos.d/droplet-agent.repo
* 02-19:24:39 (2445) [ERROR] 1 package(s) installed from unsupported YUM repo 'droplet-agent' from /etc/yum.repos.d/droplet-agent.repo
* 02-19:24:39 (2143) [ERROR] One or more enabled YUM repo are currently unsupported.
You should disable these repositories and remove packages installed from them
before continuing the update.

Consider reporting this limitation to https://github.com/cpanel/elevate/issues

[email protected] [/usr/local/lsws/conf]# yum list installed | grep droplet
droplet-agent.x86_64                           1.2.0-1               @droplet-agent

[email protected] [/usr/local/lsws/conf]# cat /etc/yum.repos.d/droplet-agent.repo
[droplet-agent]
name=DigitalOcean Droplet Agent
baseurl=https://repos-droplet.digitalocean.com/yum/droplet-agent/$basearch
repo_gpgcheck=0
gpgcheck=1
enabled=1
gpgkey=https://repos-droplet.digitalocean.com/gpg.key
sslverify=0
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
metadata_expire=300

Stage 4 sometimes reports a start_background_mysql_upgrade failure when the command returns successfully

The following failure is sometimes observed during stage 4:

=> Log opened from cPanel Update (upcp) - Slave (32322) at Mon Apr  4 19:48:27 2022
[2022-04-04 19:48:27 +0000]   Pre Maintenance completed successfully
[2022-04-04 19:48:27 +0000]   95% complete
[2022-04-04 19:48:27 +0000]   Running Standardized hooks
[2022-04-04 19:48:27 +0000]   100% complete
[2022-04-04 19:48:27 +0000]
[2022-04-04 19:48:27 +0000] 	cPanel update completed
[2022-04-04 19:48:27 +0000]   A log of this update is available at /var/cpanel/updatelogs/update.32322.12140.1649101515.log
[2022-04-04 19:48:27 +0000]   Removing upcp pidfile
[2022-04-04 19:48:27 +0000]
[2022-04-04 19:48:27 +0000] Completed all updates
=> Log closed Mon Apr  4 19:48:27 2022

* 04-19:48:28 (3115) [INFO] Running: /usr/bin/rm -f /usr/local/cpanel/cpsanitycheck.so


* 04-19:48:28 (3115) [INFO] Running: /usr/local/cpanel/cpkeyclt

Updating cPanel license...Done. Update succeeded.

* 04-19:48:29 (1621) [INFO] Removing leapp from excludes in /etc/yum.conf
* 04-19:48:29 (1506) [INFO] Restoring MySQL 8.0
* 04-19:48:39 (1521) [INFO] Restoring MySQL via upgrade_id mysql_upgrade.20220404-194839
* 04-19:48:39 (1522) [INFO] Waiting for MySQL installation
..........
..
* 04-19:49:41 (1552) [FATAL] Failed to restore MySQL 8.0: upgrade mysql_upgrade.20220404-194839 status 'failed'
* 04-19:49:41 (1553) [FATAL] ---
data:
  upgrade_id: mysql_upgrade.20220404-194839
metadata:
  command: start_background_mysql_upgrade
  reason: OK
  result: 1
  version: 1
* 04-19:49:42 (489) [ERROR] Sending notification: Fail to update to AlmaLinux 8
* 04-19:49:42 (490) [ERROR] The elevation process failed during stage 4.

You can continue the process after fixing the errors by running:

    /usr/local/cpanel/scripts/elevate-cpanel --continue

You can check the error log by running:

    /usr/local/cpanel/scripts/elevate-cpanel

Last Error:

Failed to restore MySQL at /usr/local/cpanel/scripts/elevate-cpanel line 1554.


* 04-19:49:42 (391) [FATAL] Failed to restore MySQL at /usr/local/cpanel/scripts/elevate-cpanel line 1554.
Failed to restore MySQL at /usr/local/cpanel/scripts/elevate-cpanel line 1554.

^C

The WHM API call appears to be successful, but the script is failing to see it as such. Restarting with --continue usually succeeds without issue.

GRUB unable to rebuild bootable entries during and following upgrade

I had an interesting situation: the migration mostly completed successfully, except for a problem with Python and another unknown issue that was partly resolved after a reboot, but I had to manually enter the GRUB boot arguments. I've documented the issue at the link below and wanted to report it here as well for your awareness and assistance:
https://almalinux.discourse.group/t/how-to-repair-rebuild-grub-following-a-cross-upgrade-from-centos-7-to-almalinux-8/1268

reinstall_mysql_packages Function Fails on DNSOnly System

The function reinstall_mysql_packages has failed on our DNSOnly system.

* 11-19:52:53 (1416) [INFO] Restoring MySQL 10.6
* 11-19:52:54 (448) [ERROR] Sending notification: Fail to update to AlmaLinux 8
* 11-19:52:54 (449) [ERROR] The elevation process failed during stage 4.

You can continue the process after fixing the errors by running:

    /usr/local/cpanel/scripts/elevate-cpanel --continue

You can check the error log by running:

    /usr/local/cpanel/scripts/elevate-cpanel

Last Error:

Cannot find upgrade_id from start_background_mysql_upgrade:
---
status: 0
statusmsg: "Permission Denied: start_background_mysql_upgrade"


Use of uninitialized value in subroutine entry at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/532/HTTP/Tiny.pm line 992.
* 11-19:52:55 (350) [FATAL] Cannot find upgrade_id from start_background_mysql_upgrade:
---
status: 0
statusmsg: "Permission Denied: start_background_mysql_upgrade"
Cannot find upgrade_id from start_background_mysql_upgrade:
---
status: 0
statusmsg: "Permission Denied: start_background_mysql_upgrade"

Attempting to run the WHMAPI call start_background_mysql_upgrade manually fails as well, same permission denied message:

# /usr/local/cpanel/bin/whmapi1 start_background_mysql_upgrade
---
status: 0
statusmsg: "Permission Denied: start_background_mysql_upgrade"

or:

# /usr/local/cpanel/bin/whmapi1 start_background_mysql_upgrade version=10.6
---
status: 0
statusmsg: "Permission Denied: start_background_mysql_upgrade"

I believe this may be due to this being a DNSOnly server, and this API call not working as it normally would on a standard cPanel & WHM system.

I was able to get around this by commenting out line 1565 and then running /usr/local/cpanel/scripts/elevate-cpanel --continue.

Add blocker to enforce last WHM version

Before attempting an elevation update, we need to enforce that the server is running the last available 102 or 104 release.
We should check TIERS.json.
Such a rule should not apply to development builds (we can consider a special hardcoded number for 9999 builds).
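
A minimal sketch of such a gate; the allowed majors are hardcoded here for illustration, whereas the real blocker would derive them from TIERS.json:

#!/usr/bin/perl
# Minimal sketch of the version gate; 102/104 and the 9999 development
# exemption come from the description above.
use strict;
use warnings;

sub whm_version_is_supported {
    my ($version) = @_;                     # e.g. "11.102.0.5"
    my ($major) = $version =~ /^11\.(\d+)/;
    return 0 unless defined $major;
    return 1 if $major == 9999;             # exempt development builds
    return ( grep { $_ == $major } ( 102, 104 ) ) ? 1 : 0;
}

print whm_version_is_supported('11.102.0.5') ? "ok\n" : "blocked\n";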

Implement machine-readable version of the --check flag

Story: As a cPanel developer, I want access to the results of elevate-cpanel --check without needing to do any ad-hoc parsing of human-readable log output, so that I can display the current ELevate blockers to the server administrator in the WHM interface.
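
For illustration, a minimal sketch of what a JSON rendering of the blocker list could look like; the output shape and helper name are hypothetical, not an agreed design:

#!/usr/bin/perl
# Minimal sketch of a machine-readable blocker report.
use strict;
use warnings;
use JSON::PP ();    # core module

sub emit_blockers_json {
    my (@blockers) = @_;    # each entry: { id => ..., msg => ... }
    print JSON::PP->new->canonical->pretty->encode(
        {
            count    => scalar @blockers,
            blockers => \@blockers,
        }
    );
}

emit_blockers_json(
    { id => 'unsupported_repo', msg => "Unsupported YUM repo 'droplet-agent' is enabled" },
);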

Send all output through log4perl?

Currently, most logging to file is done by piping the output of a command through tee(1), which is only invoked through the systemd service. This means that some output is not captured in the log file, such as the entirety of stage 1. It also makes logging of some events (e.g., use of --continue in #71) awkward and ad-hoc.

Since much of the internal diagnostic output is being routed through Log::Log4perl, it would be ideal to also route external program output through it and use a native appender to handle logging to file.

Issues:

  • ssystem needs to call something that lets it capture output (i.e., not system); see the sketch after this list.
  • There doesn't seem to be an easy way to colorize output to a file via log4perl: ScreenColoredLevels won't write to a file, and File can't do colors.
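
A minimal sketch of the capture side, assuming Log::Log4perl is already initialized elsewhere; the ssystem_logged() name is hypothetical:

# Run a command through a pipe open so each output line can be logged
# individually instead of being passed straight to the terminal by system().
use strict;
use warnings;
use Log::Log4perl qw(get_logger);

sub ssystem_logged {    # hypothetical replacement for ssystem()
    my (@cmd)  = @_;
    my $logger = get_logger();
    $logger->info("Running: @cmd");
    open my $fh, '-|', @cmd or do {
        $logger->error("Cannot run @cmd: $!");
        return 1;
    };
    while ( my $line = <$fh> ) {
        chomp $line;
        $logger->info($line);    # the appender decides file vs. screen
    }
    close $fh;                   # populates $? with the child's status
    return $? >> 8;
}

Note that this only captures stdout; preserving a separate stderr stream would need IPC::Open3 or similar.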

sprite_generator issue during installation

Noticed during stage 4.

We are probably restoring plugins too early. Also note that when this happens, MySQL has not been restored yet.

Can't load '/usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/x86_64-linux-64int/auto/GD/GD.so' for module GD: libpng15.so.15: cannot open shared object file: No such file or directory at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/532/XSLoader.pm line 93.
 at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/x86_64-linux-64int/GD.pm line 91.
Compilation failed in require at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/CSS/SpriteMaker.pm line 7.
BEGIN failed--compilation aborted at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/CSS/SpriteMaker.pm line 7.
Compilation failed in require at /usr/local/cpanel/bin/sprite_generator line 12.
BEGIN failed--compilation aborted at /usr/local/cpanel/bin/sprite_generator line 12.
Id: TQ:TaskQueue:57
* 25-17:22:27 (1532) [INFO] Restoring cPanel yum-based-plugins
* 25-17:22:27 (2673) [INFO] Running: /usr/bin/dnf -y reinstall cpanel-analytics cpanel-monitoring-agent cpanel-monitoring-cpanel-plugin cpanel-monitoring-whm-plugin

Last metadata expiration check: 0:01:38 ago on Fri 25 Feb 2022 05:20:50 PM UTC.
Dependencies resolved.
================================================================================
 Package                       Arch   Version              Repository      Size
================================================================================
Reinstalling:
 cpanel-analytics              noarch 1.4.10-1.4.1.cpanel  cpanel-plugins  75 k
 cpanel-monitoring-agent       noarch 1.0.0-34.2.cpanel    cpanel-plugins  17 k
 cpanel-monitoring-cpanel-plugin
                               noarch 1.0.2-22.24.1.cpanel cpanel-plugins 125 k
 cpanel-monitoring-whm-plugin  noarch 1.0.2-9.13.1.cpanel  cpanel-plugins  30 k

Transaction Summary
================================================================================

Total download size: 247 k
Installed size: 1.6 M
Downloading Packages:
(1/4): cpanel-monitoring-agent-1.0.0-34.2.cpane 443 kB/s |  17 kB     00:00
(2/4): cpanel-analytics-1.4.10-1.4.1.cpanel.noa 1.5 MB/s |  75 kB     00:00
(3/4): cpanel-monitoring-cpanel-plugin-1.0.2-22 2.3 MB/s | 125 kB     00:00
(4/4): cpanel-monitoring-whm-plugin-1.0.2-9.13. 926 kB/s |  30 kB     00:00
--------------------------------------------------------------------------------
Total                                           3.1 MB/s | 247 kB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Running scriptlet: cpanel-monitoring-agent-1.0.0-34.2.cpanel.noarch       1/8
  Reinstalling     : cpanel-monitoring-agent-1.0.0-34.2.cpanel.noarch       1/8
  Running scriptlet: cpanel-monitoring-agent-1.0.0-34.2.cpanel.noarch       1/8
  Running scriptlet: cpanel-monitoring-whm-plugin-1.0.2-9.13.1.cpanel.noa   2/8
  Reinstalling     : cpanel-monitoring-whm-plugin-1.0.2-9.13.1.cpanel.noa   2/8
  Running scriptlet: cpanel-monitoring-whm-plugin-1.0.2-9.13.1.cpanel.noa   2/8
  Running scriptlet: cpanel-monitoring-cpanel-plugin-1.0.2-22.24.1.cpanel   3/8
Plugin uninstalled ok
Plugin uninstalled ok

  Reinstalling     : cpanel-monitoring-cpanel-plugin-1.0.2-22.24.1.cpanel   3/8
  Running scriptlet: cpanel-monitoring-cpanel-plugin-1.0.2-22.24.1.cpanel   3/8
Plugin installed ok
Plugin installed ok
Can't load '/usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/x86_64-linux-64int/auto/GD/GD.so' for module GD: libpng15.so.15: cannot open shared object file: No such file or directory at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/532/XSLoader.pm line 93.
 at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/x86_64-linux-64int/GD.pm line 91.
Compilation failed in require at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/CSS/SpriteMaker.pm line 7.
BEGIN failed--compilation aborted at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/CSS/SpriteMaker.pm line 7.
Compilation failed in require at /usr/local/cpanel/bin/sprite_generator line 12.
BEGIN failed--compilation aborted at /usr/local/cpanel/bin/sprite_generator line 12.
Id: TQ:TaskQueue:57

  Running scriptlet: cpanel-analytics-1.4.10-1.4.1.cpanel.noarch            4/8
  Reinstalling     : cpanel-analytics-1.4.10-1.4.1.cpanel.noarch            4/8
  Running scriptlet: cpanel-analytics-1.4.10-1.4.1.cpanel.noarch            4/8
  Running scriptlet: cpanel-monitoring-cpanel-plugin-1.0.2-22.24.1.cpanel   5/8
  Cleanup          : cpanel-monitoring-cpanel-plugin-1.0.2-22.24.1.cpanel   5/8
  Running scriptlet: cpanel-monitoring-cpanel-plugin-1.0.2-22.24.1.cpanel   5/8
  Running scriptlet: cpanel-monitoring-whm-plugin-1.0.2-9.13.1.cpanel.noa   6/8
  Cleanup          : cpanel-monitoring-whm-plugin-1.0.2-9.13.1.cpanel.noa   6/8
  Running scriptlet: cpanel-monitoring-whm-plugin-1.0.2-9.13.1.cpanel.noa   6/8
  Running scriptlet: cpanel-monitoring-agent-1.0.0-34.2.cpanel.noarch       7/8
  Cleanup          : cpanel-monitoring-agent-1.0.0-34.2.cpanel.noarch       7/8
  Running scriptlet: cpanel-monitoring-agent-1.0.0-34.2.cpanel.noarch       7/8
  Running scriptlet: cpanel-analytics-1.4.10-1.4.1.cpanel.noarch            8/8
  Cleanup          : cpanel-analytics-1.4.10-1.4.1.cpanel.noarch            8/8
  Running scriptlet: cpanel-analytics-1.4.10-1.4.1.cpanel.noarch            8/8
  Verifying        : cpanel-analytics-1.4.10-1.4.1.cpanel.noarch            1/8
  Verifying        : cpanel-analytics-1.4.10-1.4.1.cpanel.noarch            2/8
  Verifying        : cpanel-monitoring-agent-1.0.0-34.2.cpanel.noarch       3/8
  Verifying        : cpanel-monitoring-agent-1.0.0-34.2.cpanel.noarch       4/8
  Verifying        : cpanel-monitoring-cpanel-plugin-1.0.2-22.24.1.cpanel   5/8
  Verifying        : cpanel-monitoring-cpanel-plugin-1.0.2-22.24.1.cpanel   6/8
  Verifying        : cpanel-monitoring-whm-plugin-1.0.2-9.13.1.cpanel.noa   7/8
  Verifying        : cpanel-monitoring-whm-plugin-1.0.2-9.13.1.cpanel.noa   8/8

Reinstalled:
  cpanel-analytics-1.4.10-1.4.1.cpanel.noarch
  cpanel-monitoring-agent-1.0.0-34.2.cpanel.noarch
  cpanel-monitoring-cpanel-plugin-1.0.2-22.24.1.cpanel.noarch
  cpanel-monitoring-whm-plugin-1.0.2-9.13.1.cpanel.noarch

Complete!

* 25-17:22:32 (1374) [INFO] Restoring MySQL 10.5

Corrective action needed for cPanel IMAP: Dovecot fails to start with "key too small" - mkcert uses too short a bit length

Following an upgrade from CentOS 7 to AlmaLinux 8, it appears that this bug is encountered with cPanel:
https://bugzilla.redhat.com/show_bug.cgi?id=1882939

If things are not corrected, users will see something like the following:

Jun 19 10:12:04 linode dovecot[xxx]: imap-login: Error: Failed to initialize SSL server context: Can't load DH parameters (ssl_dh setting): error:1408518A:SSL routines:ssl3_ctx_ctrl:dh key too small: user=<>, rip=xxx.xxx.xxx.xxx, lip=xxx.xxx.xxx.xxx, session=<xxxxxxxxx>

Reinstall Nixstats Monitoring Daemon

Systems with the Nixstats monitoring agent installed will need to reinstall it during the elevation process (likely towards the end). Presently, when you attempt to start nixstatsagent, you will see output similar to:

Feb 10 11:05:21 web3 systemd[1]: Started Nixstatsagent.
Feb 10 11:05:21 web3 systemd[1926350]: nixstatsagent.service: Failed to execute command: No such file or directory
Feb 10 11:05:21 web3 systemd[1926350]: nixstatsagent.service: Failed at step EXEC spawning /usr/bin/nixstatsagent: No such file or directory
Feb 10 11:05:21 web3 systemd[1]: nixstatsagent.service: Main process exited, code=exited, status=203/EXEC
Feb 10 11:05:21 web3 systemd[1]: nixstatsagent.service: Failed with result 'exit-code'

Initially I thought the "No such file or directory" message referred to /usr/bin/nixstatsagent; however, that file exists. What is actually being referred to is Python: the agent is trying to access Python at /usr/bin/python, which no longer exists. The Python binaries are now provided with version numbers (e.g., /usr/bin/python2 or /usr/bin/python3).

I have spoken with Vincent at Nixstats and he confirms the monitoring agent does work fine on AlmaLinux 8.x but that it needs to be reinstalled.

Uninstall - https://help.nixstats.com/en/article/uninstall-the-monitoring-agent-1iygnn3/
Install - https://help.nixstats.com/en/article/how-do-i-monitor-a-server-ql7087/

We're going to have to fetch the user's token from the /etc/nixstats-token.ini file and use it to install the monitoring agent. We may not want to run:

rm -f /etc/nixstats.ini
rm -f /etc/nixstats-token.ini

which is noted in the uninstall doc, since we will need those files (at least the token one).
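
A minimal sketch of pulling the token back out before reinstalling, assuming the file stores it as a simple key=value pair (the actual format of /etc/nixstats-token.ini has not been verified here):

# Minimal sketch: read the Nixstats token so it can be reused at reinstall.
use strict;
use warnings;

sub read_nixstats_token {
    my $path = '/etc/nixstats-token.ini';
    open my $fh, '<', $path or return;
    while ( my $line = <$fh> ) {
        return $1 if $line =~ /^\s*token\s*=\s*(\S+)/i;    # assumed key name
    }
    return;
}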

Determine whether to leave "{stdout}" and "{stderr}" in the log

The recent changes to the output log mean that "{stdout}" (and occasionally "{stderr}") now appears on just about every line:

* 12-17:24:58 (3306) [INFO] {stdout} [2022-04-12 17:24:58 +0000]       Committing cpanel.
* 12-17:24:58 (3306) [INFO] {stdout} [2022-04-12 17:24:58 +0000]   All directories created and updated
* 12-17:24:58 (3306) [INFO] {stdout} [2022-04-12 17:24:58 +0000]   Commiting all downloaded files for binaries/linux-c8-x86_64, cpanel
* 12-17:24:58 (3306) [INFO] {stdout} [2022-04-12 17:24:58 +0000]   Checking permissions of all files we manage
* 12-17:24:58 (3306) [INFO] {stdout} [2022-04-12 17:24:58 +0000]       Updating cpanel license for new binaries. This call may fail and that is ok.
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]       Committing cPanel themes.
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]   All directories created and updated
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]   Commiting all downloaded files for paper_lantern
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]   Checking permissions of all files we manage
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]   All directories created and updated
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]   Commiting all downloaded files for jupiter
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]   Checking permissions of all files we manage
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]       Updating / Removing packages.
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]   No packages need to be uninstalled
* 12-17:24:59 (3306) [INFO] {stdout} [2022-04-12 17:24:59 +0000]       Restoring service monitoring.

Given the change, this behavior isn't unexpected, but it alters how the log appears and deviates from similar logging elsewhere in cPanel.

The ea-nginx RPM doesn't seem to be reinstalling properly

Post elevate, the nginx service is unable to start. When I try to restart it in WHM >> Nginx Manager, I get this error message:

Error: [2022-02-21 22:05:58 +0000] info [restartsrv_nginx] systemd failed to start the service “nginx” (The “/usr/bin/systemctl restart nginx.service --no-ask-password” command (process 5308) reported error number 1 when it ended.): Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.
Waiting for “nginx” to start ………failed.
Cpanel::Exception::Services::StartError
Service Status: undefined status from Cpanel::ServiceManager::Services::Nginx
Service Error: (XID jbk5ck) The “nginx” service failed to start.

Startup Log:
Feb 21 22:05:57 10-2-70-129.cprapid.com nginx[5309]: nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
Feb 21 22:05:57 10-2-70-129.cprapid.com nginx[5309]: nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
Feb 21 22:05:58 10-2-70-129.cprapid.com nginx[5309]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
Feb 21 22:05:58 10-2-70-129.cprapid.com nginx[5309]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
Feb 21 22:05:58 10-2-70-129.cprapid.com nginx[5309]: nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
Feb 21 22:05:58 10-2-70-129.cprapid.com nginx[5309]: nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
Feb 21 22:05:58 10-2-70-129.cprapid.com nginx[5309]: nginx: [emerg] still could not bind()
Feb 21 22:05:58 10-2-70-129.cprapid.com systemd[1]: nginx.service: Control process exited, code=exited status=1
Feb 21 22:05:58 10-2-70-129.cprapid.com systemd[1]: nginx.service: Failed with result 'exit-code'.
Feb 21 22:05:58 10-2-70-129.cprapid.com systemd[1]: Failed to start nginx - high performance web server.

Log Messages:
Feb 21 22:05:58 10-2-70-129 nginx[5309]: nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)
Feb 21 22:05:58 10-2-70-129 nginx[5309]: nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)
Feb 21 22:05:58 10-2-70-129 nginx[5309]: nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
Feb 21 22:05:58 10-2-70-129 nginx[5309]: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
Feb 21 22:05:57 10-2-70-129 nginx[5309]: nginx: [emerg] bind() to [::]:443 failed (98: Address already in use)

nginx has failed. Contact your system administrator if the service does not automagically recover.

If I uninstall and reinstall ea-nginx then this issue goes away.

Normalize logging levels and behaviors across the program

Now that colorized logging has been restored by #96, it is becoming apparent that better decisions need to be made about which logging levels certain messages should use. The primary example is that most external programs are having their output printed at the INFO level, which is colored green. Before recent changes, ssystem used system directly, so Log4perl was bypassed, and such text always rendered with the default terminal color; only the header stating what program was being run was colored. Now, large portions of the output are green, leading to fatigue when reading the output.

Additionally, die statements are often used both for fatal error reporting and for exception handling. However, now that Log4perl handles all log-worthy output, many (but not all) of these statements should be replaced with LOGDIE, so that Log4perl does the appropriate thing with them.
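
A minimal sketch of the substitution, using Log4perl's :easy exports for brevity:

#!/usr/bin/perl
use strict;
use warnings;
use Log::Log4perl qw(:easy);
Log::Log4perl->easy_init($INFO);

# Before: die() bypasses Log4perl, so the message may never reach the
# configured appenders (e.g., the log file).
# die "Failed to restore MySQL";

# After: LOGDIE logs the message at FATAL level through every configured
# appender, then dies.
LOGDIE "Failed to restore MySQL";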

check_valid_server_hostname --notify error

Noticed this error during an elevation process when using a fresh server with hostname elevateitornot-3.openstack.build

[2022-03-02 00:38:22 +0000]    - Processing command `/usr/local/cpanel/scripts/check_valid_server_hostname --notify`
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname] ERROR: WHM has detected a manual hostname change.
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname]
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname] The system will attempt to synchronize the current hostname “elevateitornot-3.openstack.build” to the system configuration. In the future, update your hostname in WHM’s (http://elevateitornot-3.openstack.build:2087/scripts2/changehostname) interface (Home » Networking Setup » Change Hostname).
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname] Cpanel::Exception::InvalidParameters/(XID sfunaf) The following parameters don't belong as an argument to notification_class(); you may have meant to pass these in constructor_args instead: priority
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname]  at /usr/local/cpanel/Cpanel/Notify.pm line 49.
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname]        Cpanel::Notify::notification_class("class", "Check::ValidServerHostname", "application", "server_hostname_validator", "status", "hostname_host_mismatch", "interval", 604800, ...) called at /usr/local/cpanel/scripts/check_valid_server_hostname line 181
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname]        scripts::check_valid_server_hostname::send_failure_notification("scripts::check_valid_server_hostname", HASH(0x2b39040), HASH(0x2b1f708)) called at /usr/local/cpanel/scripts/check_valid_server_hostname line 118
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname]        scripts::check_valid_server_hostname::is_manual_hostname("scripts::check_valid_server_hostname", "elevateitornot-3.openstack.build", HASH(0x2b39040)) called at /usr/local/cpanel/scripts/check_valid_server_hostname line 44
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname]        scripts::check_valid_server_hostname::script("scripts::check_valid_server_hostname", ARRAY(0x257a300)) called at /usr/local/cpanel/scripts/check_valid_server_hostname line 190
[2022-03-02 00:38:23 +0000] E    [/usr/local/cpanel/scripts/check_valid_server_hostname] The “/usr/local/cpanel/scripts/check_valid_server_hostname --notify” command (process 53635) reported error number 255 when it ended.

Elevate fails at stage 4 with MariaDB 10.5 and 10.6

MariaDB 10.3 and MySQL 8 don't seem to impact elevate, but MariaDB 10.5 and 10.6 both seem to fail. I am seeing the following errors in /var/log/elevate-cpanel.log:

[2022-02-08 17:09:46 +0000] E The install encountered a fatal error: Invalid version: 5.42.32.70.72 at /usr/local/cpanel/Cpanel/MysqlUtils/Version.pm line 466.

Can't load '/usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/x86_64-linux-64int/auto/GD/GD.so' for module GD: libpng15.so.15: cannot open shared object file: No such file or directory at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/532/XSLoader.pm line 93.
 at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/x86_64-linux-64int/GD.pm line 91.
Compilation failed in require at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/CSS/SpriteMaker.pm line 7.
BEGIN failed--compilation aborted at /usr/local/cpanel/3rdparty/perl/532/lib/perl5/cpanel_lib/CSS/SpriteMaker.pm line 7.
Compilation failed in require at /usr/local/cpanel/bin/sprite_generator line 12.
BEGIN failed--compilation aborted at /usr/local/cpanel/bin/sprite_generator line 12.
