
Ansible Role to Automate CIS v1.1.0 Ubuntu Linux 18.04 LTS, 20.04 LTS Remediation

Home Page: https://alivx.github.io/CIS-Ubuntu-20.04-Ansible/

License: GNU General Public License v3.0


Introduction

Ansible CIS Ubuntu 20.04 LTS Hardening v1.1.0

CIS-hardened Ubuntu: cyber attack and malware prevention for mission-critical systems. CIS benchmarks lock down your systems by:

  1. removing non-secure programs,
  2. disabling unused filesystems,
  3. disabling unnecessary ports and services,
  4. auditing privileged operations,
  5. restricting administrative privileges.

CIS benchmark recommendations are adopted for virtual machines in public and private clouds, and are also used to secure on-premises deployments. For some industries, hardening a system against a publicly known standard is a criterion auditors look for. CIS benchmarks are often the system-hardening baseline recommended by auditors for industries requiring PCI DSS and HIPAA compliance, such as banking, telecommunications, and healthcare. If you are attempting to obtain compliance with an industry-accepted security standard, such as PCI DSS, APRA, or ISO 27001, you need to demonstrate that you have applied documented hardening standards to all systems within the scope of assessment.

The Ubuntu CIS benchmarks are organised into two profiles, 'Level 1' and 'Level 2', each defined for both server and workstation environments.

A Level 1 profile is intended to be a practical and prudent way to secure a system without excessive performance impact. It covers, for example:

  • Disabling unneeded filesystems,
  • Restricting user permissions on files and directories,
  • Disabling unneeded services,
  • Configuring network firewalls.

A Level 2 profile is used where security is considered paramount; it may have a negative impact on system performance. It adds, for example:

  • Creating separate partitions,
  • Auditing privileged operations.

The Ubuntu CIS hardening tool lets you select the desired hardening profile (Level 1 or Level 2) and the work environment (server or workstation) for a system. Example:

ansible-playbook -i inventory cis-ubuntu-20.yaml --tags="level_1_server"

You can list all tags by running the command below:

ansible-playbook -i host run.yaml --list-tags
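For reference, a minimal inventory in YAML form might look like the following (the hostname, address, and user are placeholders, not taken from this role):

```yaml
# inventory.yml -- hypothetical inventory; replace hosts with your own
all:
  hosts:
    ubuntu-server-01:
      ansible_host: 192.168.2.10   # target machine to harden
      ansible_user: ubuntu         # SSH user with sudo access
```

Pass it to the commands above with -i inventory.yml.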

All roles are based on:

CIS Ubuntu Linux 20.04 LTS Benchmark
v1.1.0 - 07-21-2020

Check the Example dir.


Requirements

Before running this playbook, you should carefully read through the tasks to make sure these changes will not break your systems.

You can download the free CIS Benchmark book from this URL: Free Benchmark

To start working with this role, you just need to install Ansible. See Installing Ansible.


Role Variables

Review all default configuration before running this playbook; there are many role variables defined in defaults/main.yml.

  • If you are considering applying this role to any servers, you should have a basic familiarity with the CIS Benchmark and an appreciation for the impact that it may have on a system.
  • Read and change configurable default values.

Examples of config that should be immediately considered for exclusion:

5.1.8 Ensure cron is restricted to authorized users and 5.2.17 Ensure SSH access is limited, which by default effectively limit access to the host (including via ssh).

For example:

  • CIS-Ubuntu-20.04-Ansible/defaults/main.yml
#Section 5
#5.1.8 Ensure cron is restricted to authorized users
allowed_hosts: "ALL: 0.0.0.0/0.0.0.0, 192.168.2.0/255.255.255.0"
# 5.2.17 Ensure SSH access is limited
allowed_users: ali saleh baker root # Put "None" or a space-separated list of users
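Rather than editing defaults/main.yml in place, the same variables can be overridden from the playbook; a sketch (the user names and subnet are illustrative):

```yaml
# Override role defaults at the play level instead of editing the role
- hosts: host1
  become: yes
  roles:
    - role: "CIS-Ubuntu-20.04-Ansible"
      vars:
        # Variable names as defined in defaults/main.yml above
        allowed_users: alice bob root
        allowed_hosts: "ALL: 192.168.2.0/255.255.255.0"
```

Play-level vars take precedence over role defaults, so the role's files stay untouched.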

If you need to change the file templates, you can find them under files/templates/*.


Dependencies

  • Ansible version > 2.9

Example Playbook

Below is an example playbook:

---
- hosts: host1
  become: yes
  remote_user: root
  gather_facts: no
  roles:
    - role: "CIS-Ubuntu-20.04-Ansible"

Run all

If you want to run all tags, use the command below:

ansible-playbook -i [inventoryfile] [playbook].yaml

Run a specific section

ansible-playbook -i host run.yaml -t section2

Run multiple sections

ansible-playbook -i host run.yaml -t section2 -t 6.1.1
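Tag selection works because each task in the role carries section and rule-number tags. A sketch of the general task shape (the task body is illustrative, not copied from the role):

```yaml
# Illustrative task: section tags target a whole section (-t section4),
# rule-number tags target a single control (-t 4.1.1.1)
- name: 4.1.1.1 Ensure auditd is installed
  apt:
    name: auditd
    state: present
  tags:
    - section4
    - "4.1.1.1"   # quoted so YAML parses it as a string, not a number
```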
  • Note: When running an individual task, be sure of the dependencies between tasks. For example, if you run tag 4.1.1.2 Ensure auditd service is enabled before running 4.1.1.1 Ensure auditd is installed, you will get an error at run time.

  • Points marked with a tilde are not implemented yet; I'm currently working on them.

  • Make sure to select one time service. I use ntp, but you can use another service such as systemd-timesyncd or chrony; set it under defaults/main.yaml.
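Assuming the time-service choice is exposed as a variable in defaults/main.yaml (the exact key name may differ; check the file), the selection might look like:

```yaml
# Hypothetical variable name -- verify against defaults/main.yaml
time_synchronization_service: ntp   # alternatives: systemd-timesyncd, chrony
```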

Testing

11/1/2020 Tested on AWS EC2 Ubuntu 20.04 LTS [Pass]
11/1/2020 Tested on a local Ubuntu 20.04 LTS server [Pass]

  • Before running, make sure to update the user lists under defaults/main.yaml in list_of_os_users and allowed_users.
  • Make sure to set the right subnet under defaults/main.yaml in allowed_hosts.

Table of Roles:

1 Initial Setup

  • 1.1 Filesystem Configuration
  • 1.1.1 Disable unused filesystems
  • 1.1.1.1 Ensure mounting of cramfs filesystems is disabled (Automated)
  • 1.1.1.2 Ensure mounting of freevxfs filesystems is disabled (Automated)
  • 1.1.1.3 Ensure mounting of jffs2 filesystems is disabled (Automated)
  • 1.1.1.4 Ensure mounting of hfs filesystems is disabled (Automated)
  • 1.1.1.5 Ensure mounting of hfsplus filesystems is disabled (Automated)
  • 1.1.1.6 Ensure mounting of udf filesystems is disabled (Automated)
  • 1.1.1.7 Ensure mounting of FAT filesystems is limited (Manual)
  • 1.1.2 Ensure /tmp is configured (Automated)
  • 1.1.3 Ensure nodev option set on /tmp partition (Automated)
  • 1.1.4 Ensure nosuid option set on /tmp partition (Automated)
  • 1.1.5 Ensure noexec option set on /tmp partition (Automated)
  • 1.1.6 Ensure /dev/shm is configured (Automated)
  • 1.1.7 Ensure nodev option set on /dev/shm partition (Automated)
  • 1.1.8 Ensure nosuid option set on /dev/shm partition (Automated)
  • 1.1.9 Ensure noexec option set on /dev/shm partition (Automated)
  • 1.1.10 Ensure separate partition exists for /var (Automated)
  • 1.1.11 Ensure separate partition exists for /var/tmp (Automated)
  • 1.1.12 Ensure nodev option set on /var/tmp partition (Automated)
  • 1.1.13 Ensure nosuid option set on /var/tmp partition (Automated)
  • 1.1.14 Ensure noexec option set on /var/tmp partition (Automated)
  • 1.1.15 Ensure separate partition exists for /var/log (Automated)
  • 1.1.16 Ensure separate partition exists for /var/log/audit (Automated)
  • 1.1.17 Ensure separate partition exists for /home (Automated)
  • 1.1.18 Ensure nodev option set on /home partition (Automated)
  • 1.1.19 Ensure nodev option set on removable media partitions (Manual)
  • 1.1.20 Ensure nosuid option set on removable media partitions (Manual)
  • 1.1.21 Ensure noexec option set on removable media partitions (Manual)
  • 1.1.22 Ensure sticky bit is set on all world-writable directories (Automated)
  • 1.1.23 Disable Automounting (Automated)
  • 1.1.24 Disable USB Storage (Automated)

1.2 Configure Software Updates

  • 1.2.1 Ensure package manager repositories are configured (Manual)
  • 1.2.2 Ensure GPG keys are configured (Manual)

1.3 Filesystem Integrity Checking

  • 1.3.1 Ensure AIDE is installed (Automated)
  • 1.3.2 Ensure filesystem integrity is regularly checked (Automated)

1.4 Secure Boot Settings

  • 1.4.1 Ensure bootloader password is set (Automated)
  • 1.4.2 Ensure permissions on bootloader config are configured (Automated)
  • 1.4.3 Ensure authentication required for single user mode (Automated)

1.5 Additional Process Hardening

  • 1.5.1 Ensure XD/NX support is enabled (Automated)
  • 1.5.2 Ensure address space layout randomization (ASLR) is enabled (Automated)
  • 1.5.3 Ensure prelink is disabled (Automated)
  • 1.5.4 Ensure core dumps are restricted (Automated)

1.6 Mandatory Access Control

  • 1.6.1 Configure AppArmor
  • 1.6.1.1 Ensure AppArmor is installed (Automated)
  • 1.6.1.2 Ensure AppArmor is enabled in the bootloader configuration (Automated)
  • 1.6.1.3 Ensure all AppArmor Profiles are in enforce or complain mode (Automated)
  • 1.6.1.4 Ensure all AppArmor Profiles are enforcing (Automated)

1.7 Warning Banners

  • 1.7.1.1 Ensure message of the day is configured properly (Automated)
  • 1.7.1.2 Ensure local login warning banner is configured properly (Automated)
  • 1.7.1.3 Ensure remote login warning banner is configured properly (Automated)
  • 1.7.1.4 Ensure permissions on /etc/motd are configured (Automated)
  • 1.7.1.5 Ensure permissions on /etc/issue are configured (Automated)
  • 1.7.1.6 Ensure permissions on /etc/issue.net are configured (Automated)

1.8 GNOME Display Manager

  • 1.8.1 Ensure GNOME Display Manager is removed (Manual)
  • 1.8.2 Ensure GDM login banner is configured (Manual)
  • 1.8.3 Ensure disable-user-list is enabled (Manual)
  • 1.8.4 Ensure XDCMP is not enabled (Manual)

1.9 Ensure updates, patches, and additional security software are installed (Automated)

2 Services

  • 2.1 inetd Services
  • 2.1.1 Ensure xinetd is not installed (Automated)
  • 2.1.2 Ensure openbsd-inetd is not installed (Automated)
  • 2.2 Special Purpose Services
  • 2.2.1 Time Synchronization
  • 2.2.1.1 Ensure time synchronization is in use (Automated)
  • 2.2.1.2 Ensure systemd-timesyncd is configured (Manual)
  • 2.2.1.3 Ensure chrony is configured (Automated)
  • 2.2.1.4 Ensure ntp is configured (Automated)
  • 2.2.2 Ensure X Window System is not installed (Automated)
  • 2.2.3 Ensure Avahi Server is not installed (Automated)
  • 2.2.4 Ensure CUPS is not installed (Automated)
  • 2.2.5 Ensure DHCP Server is not installed (Automated)
  • 2.2.6 Ensure LDAP server is not installed (Automated)
  • 2.2.7 Ensure NFS is not installed (Automated)
  • 2.2.8 Ensure DNS Server is not installed (Automated)
  • 2.2.9 Ensure FTP Server is not installed (Automated)
  • 2.2.10 Ensure HTTP server is not installed (Automated)
  • 2.2.11 Ensure IMAP and POP3 server are not installed (Automated)
  • 2.2.12 Ensure Samba is not installed (Automated)
  • 2.2.13 Ensure HTTP Proxy Server is not installed (Automated)
  • 2.2.14 Ensure SNMP Server is not installed (Automated)
  • 2.2.15 Ensure mail transfer agent is configured for local-only mode (Automated)
  • 2.2.16 Ensure rsync service is not installed (Automated)
  • 2.2.17 Ensure NIS Server is not installed (Automated)

2.3 Service Clients

  • 2.3.1 Ensure NIS Client is not installed (Automated)
  • 2.3.2 Ensure rsh client is not installed (Automated)
  • 2.3.3 Ensure talk client is not installed (Automated)
  • 2.3.4 Ensure telnet client is not installed (Automated)
  • 2.3.5 Ensure LDAP client is not installed (Automated)
  • 2.3.6 Ensure RPC is not installed (Automated)
  • 2.4 Ensure nonessential services are removed or masked (Manual)

3 Network Configuration

  • 3.1 Disable unused network protocols and devices
  • 3.1.1 Disable IPv6 (Manual)
  • 3.1.2 Ensure wireless interfaces are disabled (Automated)

3.2 Network Parameters (Host-Only)

  • 3.2.1 Ensure packet redirect sending is disabled (Automated)
  • 3.2.2 Ensure IP forwarding is disabled (Automated)

3.3 Network Parameters (Host and Router)

  • 3.3.1 Ensure source-routed packets are not accepted (Automated)
  • 3.3.2 Ensure ICMP redirects are not accepted (Automated)
  • 3.3.3 Ensure secure ICMP redirects are not accepted (Automated)
  • 3.3.4 Ensure suspicious packets are logged (Automated)
  • 3.3.5 Ensure broadcast ICMP requests are ignored (Automated)
  • 3.3.6 Ensure bogus ICMP responses are ignored (Automated)
  • 3.3.7 Ensure Reverse Path Filtering is enabled (Automated)
  • 3.3.8 Ensure TCP SYN Cookies is enabled (Automated)
  • 3.3.9 Ensure IPv6 router advertisements are not accepted (Automated)

3.4 Uncommon Network Protocols

  • 3.4.1 Ensure DCCP is disabled (Automated)
  • 3.4.2 Ensure SCTP is disabled (Automated)
  • 3.4.3 Ensure RDS is disabled (Automated)
  • 3.4.4 Ensure TIPC is disabled (Automated)

3.5 Firewall Configuration

  • 3.5.1 Configure UncomplicatedFirewall
  • 3.5.1.1 Ensure Uncomplicated Firewall is installed (Automated)
  • 3.5.1.2 Ensure iptables-persistent is not installed (Automated)
  • 3.5.1.3 Ensure ufw service is enabled (Automated)
  • 3.5.1.4 Ensure loopback traffic is configured (Automated)
  • 3.5.1.5 Ensure outbound connections are configured (Manual)
  • 3.5.1.6 Ensure firewall rules exist for all open ports (Manual)
  • 3.5.1.7 Ensure default deny firewall policy (Automated)
  • 3.5.2 Configure nftables
  • 3.5.2.1 Ensure nftables is installed (Automated)
  • 3.5.2.2 Ensure Uncomplicated Firewall is not installed or disabled (Automated)
  • 3.5.2.3 Ensure iptables are flushed (Manual)
  • 3.5.2.4 Ensure a table exists (Automated)
  • 3.5.2.5 Ensure base chains exist (Automated)
  • 3.5.2.6 Ensure loopback traffic is configured (Automated)
  • 3.5.2.7 Ensure outbound and established connections are configured (Manual)
  • 3.5.2.8 Ensure default deny firewall policy (Automated)
  • 3.5.2.9 Ensure nftables service is enabled (Automated)
  • 3.5.2.10 Ensure nftables rules are permanent (Automated)
  • 3.5.3 Configure iptables
  • 3.5.3.1.1 Ensure iptables packages are installed (Automated)
  • 3.5.3.1.2 Ensure nftables is not installed (Automated)
  • 3.5.3.1.3 Ensure Uncomplicated Firewall is not installed or disabled (Automated)
  • 3.5.3.2.1 Ensure default deny firewall policy (Automated)
  • 3.5.3.2.2 Ensure loopback traffic is configured (Automated)
  • 3.5.3.2.3 Ensure outbound and established connections are configured (Manual)
  • 3.5.3.2.4 Ensure firewall rules exist for all open ports (Automated)
  • 3.5.3.3.1 Ensure IPv6 default deny firewall policy (Automated)
  • 3.5.3.3.2 Ensure IPv6 loopback traffic is configured (Automated)
  • 3.5.3.3.3 Ensure IPv6 outbound and established connections are configured (Manual)
  • 3.5.3.3.4 Ensure IPv6 firewall rules exist for all open ports (Manual)

4 Logging and Auditing

  • 4.1 Configure System Accounting (auditd)
  • 4.1.1 Ensure auditing is enabled
  • 4.1.1.1 Ensure auditd is installed (Automated)
  • 4.1.1.2 Ensure auditd service is enabled (Automated)
  • 4.1.1.3 Ensure auditing for processes that start prior to auditd is enabled (Automated)
  • 4.1.1.4 Ensure audit_backlog_limit is sufficient (Automated)
  • 4.1.2 Configure Data Retention
  • 4.1.2.1 Ensure audit log storage size is configured (Automated)
  • 4.1.2.2 Ensure audit logs are not automatically deleted (Automated)
  • 4.1.2.3 Ensure system is disabled when audit logs are full (Automated)
  • 4.1.3 Ensure events that modify date and time information are collected (Automated)
  • 4.1.4 Ensure events that modify user/group information are collected (Automated)
  • 4.1.5 Ensure events that modify the system's network environment are collected (Automated)
  • 4.1.6 Ensure events that modify the system's Mandatory Access Controls are collected (Automated)
  • 4.1.7 Ensure login and logout events are collected (Automated)
  • 4.1.8 Ensure session initiation information is collected (Automated)
  • 4.1.9 Ensure discretionary access control permission modification events are collected (Automated)
  • 4.1.10 Ensure unsuccessful unauthorized file access attempts are collected (Automated)
  • 4.1.11 Ensure use of privileged commands is collected (Automated)
  • 4.1.12 Ensure successful file system mounts are collected (Automated)
  • 4.1.13 Ensure file deletion events by users are collected (Automated)
  • 4.1.14 Ensure changes to system administration scope (sudoers) is collected (Automated)
  • 4.1.15 Ensure system administrator command executions (sudo) are collected (Automated)
  • 4.1.16 Ensure kernel module loading and unloading is collected (Automated)
  • 4.1.17 Ensure the audit configuration is immutable (Automated)

4.2 Configure Logging

  • 4.2.1 Configure rsyslog
  • 4.2.1.1 Ensure rsyslog is installed (Automated)
  • 4.2.1.2 Ensure rsyslog Service is enabled (Automated)
  • 4.2.1.3 Ensure logging is configured (Manual)
  • 4.2.1.4 Ensure rsyslog default file permissions configured (Automated)
  • 4.2.1.5 Ensure rsyslog is configured to send logs to a remote log host (Automated)
  • 4.2.1.6 Ensure remote rsyslog messages are only accepted on designated log hosts (Manual)
  • 4.2.2 Configure journald
  • 4.2.2.1 Ensure journald is configured to send logs to rsyslog (Automated)
  • 4.2.2.2 Ensure journald is configured to compress large log files (Automated)
  • 4.2.2.3 Ensure journald is configured to write logfiles to persistent disk (Automated)
  • 4.2.3 Ensure permissions on all logfiles are configured (Automated)
  • 4.3 Ensure logrotate is configured (Manual)
  • 4.4 Ensure logrotate assigns appropriate permissions (Automated)

5 Access, Authentication and Authorization

  • 5.1 Configure time-based job schedulers
  • 5.1.1 Ensure cron daemon is enabled and running (Automated)
  • 5.1.2 Ensure permissions on /etc/crontab are configured (Automated)
  • 5.1.3 Ensure permissions on /etc/cron.hourly are configured (Automated)
  • 5.1.4 Ensure permissions on /etc/cron.daily are configured (Automated)
  • 5.1.5 Ensure permissions on /etc/cron.weekly are configured (Automated)
  • 5.1.6 Ensure permissions on /etc/cron.monthly are configured (Automated)
  • 5.1.7 Ensure permissions on /etc/cron.d are configured (Automated)
  • 5.1.8 Ensure cron is restricted to authorized users (Automated)
  • 5.1.9 Ensure at is restricted to authorized users (Automated)

5.2 Configure SSH Server

  • 5.2.1 Ensure permissions on /etc/ssh/sshd_config are configured (Automated)
  • 5.2.2 Ensure permissions on SSH private host key files are configured (Automated)
  • 5.2.3 Ensure permissions on SSH public host key files are configured (Automated)
  • 5.2.4 Ensure SSH LogLevel is appropriate (Automated)
  • 5.2.5 Ensure SSH X11 forwarding is disabled (Automated)
  • 5.2.6 Ensure SSH MaxAuthTries is set to 4 or less (Automated)
  • 5.2.7 Ensure SSH IgnoreRhosts is enabled (Automated)
  • 5.2.8 Ensure SSH HostbasedAuthentication is disabled (Automated)
  • 5.2.9 Ensure SSH root login is disabled (Automated)
  • 5.2.10 Ensure SSH PermitEmptyPasswords is disabled (Automated)
  • 5.2.11 Ensure SSH PermitUserEnvironment is disabled (Automated)
  • 5.2.12 Ensure only strong Ciphers are used (Automated)
  • 5.2.13 Ensure only strong MAC algorithms are used (Automated)
  • 5.2.14 Ensure only strong Key Exchange algorithms are used (Automated)
  • 5.2.15 Ensure SSH Idle Timeout Interval is configured (Automated)
  • 5.2.16 Ensure SSH LoginGraceTime is set to one minute or less (Automated)
  • 5.2.17 Ensure SSH access is limited (Automated)
  • 5.2.18 Ensure SSH warning banner is configured (Automated)
  • 5.2.19 Ensure SSH PAM is enabled (Automated)
  • 5.2.20 Ensure SSH AllowTcpForwarding is disabled (Automated)
  • 5.2.21 Ensure SSH MaxStartups is configured (Automated)
  • 5.2.22 Ensure SSH MaxSessions is limited (Automated)

5.3 Configure PAM

  • 5.3.1 Ensure password creation requirements are configured (Automated)
  • 5.3.2 Ensure lockout for failed password attempts is configured (Automated)
  • 5.3.3 Ensure password reuse is limited (Automated)
  • 5.3.4 Ensure password hashing algorithm is SHA-512 (Automated)

5.4 User Accounts and Environment

  • 5.4.1 Set Shadow Password Suite Parameters
  • 5.4.1.1 Ensure password expiration is 365 days or less (Automated)
  • 5.4.1.2 Ensure minimum days between password changes is configured (Automated)
  • 5.4.1.3 Ensure password expiration warning days is 7 or more (Automated)
  • 5.4.1.4 Ensure inactive password lock is 30 days or less (Automated)
  • 5.4.1.5 Ensure all users last password change date is in the past (Automated)
  • 5.4.2 Ensure system accounts are secured (Automated)
  • 5.4.3 Ensure default group for the root account is GID 0 (Automated)
  • 5.4.4 Ensure default user umask is 027 or more restrictive (Automated)
  • 5.4.5 Ensure default user shell timeout is 900 seconds or less (Automated)
  • 5.5 Ensure root login is restricted to system console (Manual)
  • 5.6 Ensure access to the su command is restricted (Automated)

6 System Maintenance

  • 6.1 System File Permissions
  • 6.1.1 Audit system file permissions (Manual)
  • 6.1.2 Ensure permissions on /etc/passwd are configured (Automated)
  • 6.1.3 Ensure permissions on /etc/gshadow- are configured (Automated)
  • 6.1.4 Ensure permissions on /etc/shadow are configured (Automated)
  • 6.1.5 Ensure permissions on /etc/group are configured (Automated)
  • 6.1.6 Ensure permissions on /etc/passwd- are configured (Automated)
  • 6.1.7 Ensure permissions on /etc/shadow- are configured (Automated)
  • 6.1.8 Ensure permissions on /etc/group- are configured (Automated)
  • 6.1.9 Ensure permissions on /etc/gshadow are configured (Automated)
  • 6.1.10 Ensure no world writable files exist (Automated)
  • 6.1.11 Ensure no unowned files or directories exist (Automated)
  • 6.1.12 Ensure no ungrouped files or directories exist (Automated)
  • 6.1.13 Audit SUID executables (Manual)
  • 6.1.14 Audit SGID executables (Manual)

6.2 User and Group Settings

  • 6.2.1 Ensure password fields are not empty (Automated)
  • 6.2.2 Ensure root is the only UID 0 account (Automated)
  • 6.2.3 Ensure root PATH Integrity (Automated)
  • 6.2.4 Ensure all users' home directories exist (Automated)
  • 6.2.5 Ensure users' home directories permissions are 750 or more restrictive (Automated)
  • 6.2.6 Ensure users own their home directories (Automated)
  • 6.2.7 Ensure users' dot files are not group or world writable (Automated)
  • 6.2.8 Ensure no users have .forward files (Automated)
  • 6.2.9 Ensure no users have .netrc files (Automated)
  • 6.2.10 Ensure users' .netrc Files are not group or world accessible (Automated)
  • 6.2.11 Ensure no users have .rhosts files (Automated)
  • 6.2.12 Ensure all groups in /etc/passwd exist in /etc/group (Automated)
  • 6.2.13 Ensure no duplicate UIDs exist (Automated)
  • 6.2.14 Ensure no duplicate GIDs exist (Automated)
  • 6.2.15 Ensure no duplicate user names exist (Automated)
  • 6.2.16 Ensure no duplicate group names exist (Automated)
  • 6.2.17 Ensure shadow group is empty (Automated)

Troubleshooting

  • If you want to run the playbook on the same machine, make sure to add this to the play:
- hosts: 127.0.0.1
  connection: local
  • If you face issues executing, try running the playbook from another path, such as /srv/.
  • For an error like stderr: chage: user 'ubuntu' does not exist in /etc/passwd, make sure to update the config under CIS-Ubuntu-20.04-Ansible/defaults/main.yml
TASK [CIS-Ubuntu-20.04-Ansible : 1.4.1 Ensure AIDE is installed] ***********************************************************************************************************************************************************************************************************fatal: [192.168.80.129]: FAILED! => {"cache_update_time": 1611229159, "cache_updated": false, "changed": false, "msg": "'/usr/bin/apt-get -y -o \"Dpkg::Options::=--force-confdef\" -o \"Dpkg::Options::=--force-confold\"      install 'nullmailer' 'aide-common' 'aide' -o APT::Install-Recommends=no' failed: E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 5194 (unattended-upgr)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n", "rc": 100, "stderr": "E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 5194 (unattended-upgr)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n", "stderr_lines": ["E: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 5194 (unattended-upgr)", "E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?"], "stdout": "", "stdout_lines": []}
  • For the above error, make sure there is no apt process running in the background, or wait until apt finishes.
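One way to avoid this lock contention is to stop the background upgrader for the duration of the run via a pre_task. This is a sketch, assuming the standard Ubuntu service name unattended-upgrades:

```yaml
# Free the dpkg/apt lock held by unattended-upgrades before hardening
- hosts: host1
  become: yes
  pre_tasks:
    - name: Stop unattended-upgrades to release the dpkg lock
      service:
        name: unattended-upgrades
        state: stopped
  roles:
    - role: "CIS-Ubuntu-20.04-Ansible"
```

Remember to start the service again afterwards if your site policy relies on automatic updates.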
TASK [CIS-Ubuntu-20.04-Ansible : 5.4.1.1 Ensure password expiration is 365 days or less | chage] ***************************************************************************************************************************************************************************failed: [192.168.80.129] (item=ubuntu) => {"ansible_loop_var": "item", "changed": true, "cmd": ["chage", "--maxdays", "300", "ubuntu"], "delta": "0:00:00.005478", "end": "2021-01-21 12:49:45.463615", "item": "ubuntu", "msg": "non-zero return code", "rc": 1, "start": "2021-01-21 12:49:45.458137", "stderr": "chage: user 'ubuntu' does not exist in /etc/passwd", "stderr_lines": ["chage: user 'ubuntu' does not exist in /etc/passwd"], "stdout": "", "stdout_lines": []}
  • Make sure you set the right user under defaults/main.yaml

TASK [CIS-Ubuntu-20.04-Ansible : Creating users without admin access] ***************************************************************************************************************
fatal: [golden]: FAILED! => {"msg": "crypt.crypt not supported on Mac OS X/Darwin, install passlib python module"}

Install passlib: pip install passlib


License

GNU GENERAL PUBLIC LICENSE

Author Information

The role was originally developed by Ali Saleh Baker.

When contributing to this repository, please first discuss the change you wish to make via a GitHub issue, email, or via other channels with me :)

Contributors

alivx, brandonharrisonhpe, daveshepherd, estenrye, gnought, iaunn, jmcshane, jvleminc, matteopolak, mgsotelo, robinlennox, rusox89, sebastian-rg, victor-bolivar


Issues

Missing DIY tag in #1.2.1 Ensure package manager repositories are configured

Hi,

I was checking out this repo and noticed that the - diy tag appears to be missing under section 1.2.1, whilst the description you've typed clearly states that the specifics of patch update procedures are left to the organization.

Take a look here:

Just thought you wanted to know.

Thanks for the playbook and I hope you have a great day.

Implementing the missing points

Start implementing the following points:

  • 1.1.10 Ensure separate partition exists for /var (Automated)
  • 1.1.11 Ensure separate partition exists for /var/tmp (Automated)
  • 1.1.12 Ensure nodev option set on /var/tmp partition (Automated)
  • 1.1.13 Ensure nosuid option set on /var/tmp partition (Automated)
  • 1.1.14 Ensure noexec option set on /var/tmp partition (Automated)
  • 1.1.15 Ensure separate partition exists for /var/log (Automated)
  • 1.1.16 Ensure separate partition exists for /var/log/audit (Automated)
  • 1.1.17 Ensure separate partition exists for /home (Automated)
  • 1.1.18 Ensure nodev option set on /home partition (Automated)
  • 1.1.19 Ensure nodev option set on removable media partitions (Manual)
  • 1.1.20 Ensure nosuid option set on removable media partitions (Manual)
  • 1.1.21 Ensure noexec option set on removable media partitions (Manual)
  • 1.2.1 Ensure package manager repositories are configured (Manual)
  • 1.2.2 Ensure GPG keys are configured (Manual)
  • 1.5.1 Ensure bootloader password is set (Automated)
  • 1.5.3 Ensure authentication required for single user mode (Automated)
  • 2.2.1.2 Ensure systemd-timesyncd is configured (Manual)
  • 2.2.1.3 Ensure chrony is configured (Automated)
  • 2.4 Ensure nonessential services are removed or masked (Manual)
  • 3.5.2 Configure nftables
  • 3.5.2.1 Ensure nftables is installed (Automated)
  • 3.5.2.2 Ensure Uncomplicated Firewall is not installed or disabled (Automated)
  • 3.5.2.3 Ensure iptables are flushed (Manual)
  • 3.5.2.4 Ensure a table exists (Automated)
  • 3.5.2.5 Ensure base chains exist (Automated)
  • 3.5.2.6 Ensure loopback traffic is configured (Automated)
  • 3.5.2.7 Ensure outbound and established connections are configured (Manual)
  • 3.5.2.8 Ensure default deny firewall policy (Automated)
  • 3.5.2.9 Ensure nftables service is enabled (Automated)
  • 3.5.2.10 Ensure nftables rules are permanent (Automated)
  • 3.5.3 Configure iptables
  • 3.5.3.1.1 Ensure iptables packages are installed (Automated)
  • 3.5.3.1.2 Ensure nftables is not installed (Automated)
  • 3.5.3.1.3 Ensure Uncomplicated Firewall is not installed or disabled (Automated)
  • 3.5.3.2.1 Ensure default deny firewall policy (Automated)
  • 3.5.3.2.2 Ensure loopback traffic is configured (Automated)
  • 3.5.3.2.3 Ensure outbound and established connections are configured (Manual)
  • 3.5.3.2.4 Ensure firewall rules exist for all open ports (Automated)
  • 3.5.3.3.1 Ensure IPv6 default deny firewall policy (Automated)
  • 3.5.3.3.2 Ensure IPv6 loopback traffic is configured (Automated)
  • 3.5.3.3.3 Ensure IPv6 outbound and established connections are configured (Manual)
  • 3.5.3.3.4 Ensure IPv6 firewall rules exist for all open ports (Manual)
  • 6.1.1 Audit system file permissions (Manual)
  • 6.1.13 Audit SUID executables (Manual)
  • 6.1.14 Audit SGID executables (Manual)
  • 6.2.2 Ensure root is the only UID 0 account (Automated)
  • 6.2.3 Ensure root PATH Integrity (Automated)
  • 6.2.8 Ensure no users have .forward files (Automated)
  • 6.2.9 Ensure no users have .netrc files (Automated)
  • 6.2.10 Ensure users' .netrc Files are not group or world accessible (Automated)
  • 6.2.11 Ensure no users have .rhosts files (Automated)
  • 6.2.12 Ensure all groups in /etc/passwd exist in /etc/group (Automated)
  • 6.2.13 Ensure no duplicate UIDs exist (Automated)
  • 6.2.14 Ensure no duplicate GIDs exist (Automated)
  • 6.2.15 Ensure no duplicate user names exist (Automated)
  • 6.2.16 Ensure no duplicate group names exist (Automated)
  • 6.2.17 Ensure shadow group is empty (Automated)

question: skipping: no hosts matched

Describe the bug
It's not a bug, just a question, because I'm unfamiliar with the ansible command. I try to run it on a separate host and get this message:

root@server:/home/acc/UBUNTU22-CIS$ ansible-playbook site.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does
not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: server.fqdn.de

PLAY [server.fqdn.de] *********************************************************************************
skipping: no hosts matched

my site.yml looks like:

---
- hosts: server.fqdn.de
  become: true
  roles:
    - role: "{{ playbook_dir }}"


What should I do to run the playbook on a remote host? Thanks in advance :-)
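For reference, the warnings above mean no inventory was supplied: ansible-playbook falls back to the implicit localhost, which never matches the `server.fqdn.de` pattern. A minimal sketch of supplying one (hostname taken from the question; SSH connection details omitted):

```shell
# Write an inventory file listing the target host:
printf '[servers]\nserver.fqdn.de\n' > inventory.ini

# Point ansible-playbook at it so the play's "hosts: server.fqdn.de" matches:
#   ansible-playbook -i inventory.ini site.yml
cat inventory.ini
```

Alternatively, change `hosts: server.fqdn.de` to `hosts: all` and control targeting purely through the inventory.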

Small issues with numeric tags

TASK [CIS-Ubuntu-20.04-Ansible : 1.9 Ensure updates, patches, and additional security software are installed] ***********************************************
fatal: [node-1]: FAILED! => {"msg": "the field 'tags' should be a list of ((<type 'basestring'>,), <type 'int'>), but the item '1.9' is a <type 'float'>\n\nThe error appears to be in '.../CIS-Ubuntu-20.04-Ansible/tasks/section_1_Initial_Setup.yaml': line 873, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# 1.9 Ensure updates, patches, and additional security software are installed\n- name: 1.9 Ensure updates, patches, and additional security software are installed\n  ^ here\n"}
...ignoring

TASK [CIS-Ubuntu-20.04-Ansible : 1.10 Ensure GDM is removed or login is configured] *************************************************************************
fatal: [node-1]: FAILED! => {"msg": "the field 'tags' should be a list of ((<type 'basestring'>,), <type 'int'>), but the item '1.1' is a <type 'float'>\n\nThe error appears to be in '.../CIS-Ubuntu-20.04-Ansible/tasks/section_1_Initial_Setup.yaml': line 883, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# 1.10 Ensure GDM is removed or login is configured\n- name: 1.10 Ensure GDM is removed or login is configured\n  ^ here\n"}
...ignoring

Simple fix: in tasks/section_1_Initial_Setup.yaml, change the tags 1.9 and 1.10 to strings. :-)
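The root cause is YAML typing: a bare `1.9` parses as a float, while `"1.9"` stays a string. A hedged sketch of the quoted form (the surrounding tag names follow the role's convention shown elsewhere on this page):

```yaml
  tags:
    - section1
    - level_1_server
    - "1.9"   # quoted so YAML keeps the tag as a string, not the float 1.9
```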

Task 4.4 changes owner/group for log files to root/utmp

Task 4.4 states "Ensure logrotate assigns appropriate permissions"

In the CIS document the following example is given:

Edit /etc/logrotate.conf and update the create line to read 0640 or more restrictive,
following local site policy

Example:
create 0640 root utmp

Even though this is just an example, the "root utmp" is copied literally into the task, so new log files get the wrong group, which consequently means logs can no longer be written to these files.

# 4.4 Ensure logrotate assigns appropriate permissions
# It is important to ensure that log files have the correct permissions to ensure that sensitive data is archived and protected.
- name: 4.4 Ensure logrotate assigns appropriate permissions
  lineinfile:
    dest: /etc/logrotate.conf
    regexp: "^create"
    line: "create 0640 root utmp"

The correct way would be to change only the file permissions, keeping whatever owner/group is already set, something like:

    path: /etc/logrotate.conf
    regexp: '^create \S+ (.*)$'
    line: 'create 0640 \1'
    backrefs: yes

ansible 2.9

hi
ansible 2.9 does not support some of the options used in your playbook; please upgrade it.
Thank you for your efforts.

Logrotate package

Describe the bug
An error message appears when checking 4.4

To Reproduce
Steps to reproduce the behavior:

  1. Clone the repository in a subdirectory and create a playbook containing the role CIS-Ubuntu-20.04-Ansible.
  2. Run playbook with tag level_1_server
  3. See error

Expected behavior
The role expects logrotate to be present; is that expected?
If so, should the package be installed by the role?

Screenshots

TASK [CIS-Ubuntu-20.04-Ansible : 4.4 Ensure logrotate assigns appropriate permissions] ****************************************************************************************************************************
fatal: [185.52.192.201]: FAILED! => {"changed": false, "msg": "Destination /etc/logrotate.conf does not exist !", "rc": 257}

RUNNING HANDLER [CIS-Ubuntu-20.04-Ansible : journald restart] *****************************************************************************************************************************************************

RUNNING HANDLER [CIS-Ubuntu-20.04-Ansible : rsyslog restart] ******************************************************************************************************************************************************

PLAY RECAP ********************************************************************************************************************************************************************************************************
185.52.192.201             : ok=153  changed=74   unreachable=0    failed=1    skipped=29   rescued=0    ignored=3   

Desktop (please complete the following information):

  • OS: ubuntu 20
  • python version: python 3.8.8
  • ansible version: core 2.11.6

Additional info:

Adding the package installation passed this step successfully
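A sketch of the workaround described above (task name and placement are assumptions; it would need to run before task 4.4):

```yaml
- name: 4.4 Ensure logrotate is installed before editing its configuration
  apt:
    name: logrotate
    state: present
```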

Duplicate setting of 'driftfile' param in chrony.conf

This doesn't look like it's anything important as it doesn't seem to break anything, however the following line is found twice inside the template for chrony "files/templates/chrony.conf.j2" :

driftfile {{ chrony_driftfile }}

My understanding is that this is only needed once?

Feature request: allow configuration of weaker protocols for ssh/scp use

In some cases, hardening of ssh/scp should still allow the explicit use of weaker protocols, for instance when other servers that connect to the hardened server do not support certain strong algorithms and, as a consequence, can't connect anymore.

This can be implemented by allowing the MAC and KexAlgorithms to be defined/overwritten from the configuration.

For instance:

SSH_MACs: "[email protected],[email protected],hmac-sha2-512,hmac-sha2-256,hmac-sha1-96"

ssh_key_algorithms: "curve25519-sha256,[email protected],diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1"

# 5.2.13 Ensure only strong MAC algorithms are used
# MD5 and 96-bit MAC algorithms are considered weak and have been shown to increase exploitability in SSH downgrade attacks. Weak algorithms continue to have a great deal of attention as a weak spot that can be exploited with expanded computing power. An attacker that breaks the algorithm could take advantage of a MiTM position to decrypt the SSH tunnel and capture credentials and information
- name: 5.2.13 Ensure only strong MAC algorithms are used
  lineinfile:
    state: present
    dest: /etc/ssh/sshd_config
    regexp: "^MACs"
    line: "MACs {{ ssh_MACs }}"     <------------
  tags:
    - section5
    - level_1_server
    - level_1_workstation
    - 5.2.13

# 5.2.14 Ensure only strong Key Exchange algorithms are used
# Key exchange methods that are considered weak should be removed. A key exchange method may be weak because too few bits are used, or the hashing algorithm is considered too weak. Using weak algorithms could expose connections to man-in-the-middle attacks
- name: 5.2.14 Ensure only strong Key Exchange algorithms are used
  lineinfile:
    state: present
    dest: /etc/ssh/sshd_config
    regexp: "^KexAlgorithms"
    line: "KexAlgorithms {{ ssh_key_algorithms }}"   <------------
  tags:
    - section5
    - level_1_server
    - level_1_workstation
    - 5.2.14

If you agree with this, @alivx , I will create the PR.

5.4.1.1 - failure on fresh install

Failure on standard execution during task:
5.4.1.1 Ensure password expiration is 365 days or less | chage

stderr: chage: user 'ubuntu' does not exist in /etc/passwd

Build:

  • OS: [Ubuntu 20.04 LTS]
  • python version [Python 3.8.5]
  • ansible version [2.9.6]

Terminal error message below:
image
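One hedged way to avoid failing on images that lack the hard-coded `ubuntu` account would be to only run chage for accounts that actually exist (the variable name `password_expiry_users` is illustrative, not from the role):

```yaml
- name: 5.4.1.1 Gather local accounts
  getent:
    database: passwd

- name: 5.4.1.1 Ensure password expiration is 365 days or less | chage
  command: chage --maxdays 365 "{{ item }}"
  loop: "{{ password_expiry_users | default(['ubuntu']) }}"
  when: item in ansible_facts.getent_passwd
```

Note that whether the fact is exposed as `ansible_facts.getent_passwd` or bare `getent_passwd` depends on the Ansible version in use.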

scripts module failed

Describe the bug
Permission error when trying to run any tasks with the script module

To Reproduce
Steps to reproduce the behavior:

  1. Ubuntu 22.04
  2. Run playbook
  3. See error

Expected behavior
Permissions error when trying to run the scripts

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: [Ubuntu 22.04]
  • python version [e.g. python 3.6.9]
  • ansible version [e.g. 2.9.4]

Additional context
Adding the following to the script tasks fixes the issue:

    args:
      executable: /bin/sh
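Put together, a script task with the explicit interpreter would look roughly like this (the script path is illustrative, not from the role):

```yaml
- name: Run a remediation check script with an explicit interpreter
  script: files/some_check.sh
  args:
    executable: /bin/sh
```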

inventory_dir not defined, seemingly only on re-run.

Describe the bug
Re-ran this against an ubuntu 20.04 instance in AWS and got:

fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'inventory_dir' is undefined\n\nThe error appears to be in '/tmp/CIS-Ubuntu-20.04-Ansible/tasks/section_1_Initial_Setup.yaml': line 873, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# has to be defined in the variable "custom_motd_file_path".\n- name: 1.8.1.1 Ensure message of the day is configured properly\n ^ here\n"}

To Reproduce
Steps to reproduce the behavior:

  1. Run playbook with all tasks.
  2. Make some changes in mail.yaml.
  3. Re-Run playbook, got error.

Expected behavior
Expected it to complete without error.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):
Ansible 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]

Additional context
None.

[3.3.4] - conditional check fails when ufw service is not installed

When trying to apply control 3.3.4 in the conditional to check whether 'ufw' is present before restarting the service, the condition will never be able to be assessed as the error is in stderr rather than in stdout. It was constantly failing:

- name: 3.3.4 Ensure suspicious packets are logged | restart ufw after changes in /etc/ufw/sysctl.conf
  service:
    name: ufw
    state: restarted
  when:
    - UFWEnable
    - "'not found' not in ufw_check.stdout"

... thus the last line above should be replaced with the following line, after which it happily skips the control and completes without issues:
- "'not found' not in ufw_check.stderr"

... and some more proof:

image

ssh_tunnel

Hi
after applying these hardening ssh tunnel does not work / for example i want to establish 10.10.10.10:8080 to linuxIP:8080 but does not work.

NX/XD support is not properly tested

Describe the bug
name: 1.5.1 Ensure XD/NX support is enabled
NX/XD support is currently tested with a simple grep on dmesg. The grep for NX will also match unrelated strings (such as QNX4). The NX/XD error comes up on some systems and not on others with identical settings, depending only on how long the system has been running (the ring buffer with the messages is sometimes erased after as little as 2 days).

I found that a more reliable way of doing it is querying journalctl -k --boot | grep -F "NX (Execute Disable) protection" and then tail -1 and then testing for "disabled", but even that is not guaranteed: once the logs rotate out, it's gone.

One can test whether the administrator has disabled noexec32 via the kernel parameters (https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt), and one can test /proc/cpuinfo for whether NX is supported by the CPU. I am not sure which approach to take to make sure it is enabled in the running kernel (by default it should be, unless you're running ancient hardware).

It also doesn't take ARM64 (e.g. Raspberry Pi) into account; XD is not a string that will match on current kernels.
dmesg | grep NX with the expected response being "NX (Execute Disable) protection: active" is now standard for both AMD and Intel platforms according to the CIS Benchmark for Debian.

To Reproduce
Run the playbook on a host recently booted and one where dmesg has been cleared

Expected behavior
Test should reflect actual settings
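Two ring-buffer-independent signals one could combine are sketched below. This is not the role's current check, and kernel parameter names vary by version, so treat it as a starting point only:

```shell
# 1) Does the CPU advertise the NX flag? (x86 only; the flag is absent on ARM64)
grep -m1 -ow nx /proc/cpuinfo || echo "no nx flag in /proc/cpuinfo"

# 2) Was NX explicitly disabled on the kernel command line?
grep -o 'noexec=off' /proc/cmdline || echo "noexec not disabled on cmdline"
```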

Task-name heading from README.md and index.html file is not correctly maintained with respect to task yaml files

Describe the bug
The task headings list maintained in the README.md and index.html files is not correct for some tasks.
This leads to incorrect mapping of task tags when applying hardening.
This bug report targets text changes only.

To Reproduce
Steps to reproduce the behavior:

  1. Check the Table of Roles and check the respective Task tag from ./tasks/ directory.
    Example:
    image

Expected behavior
The strings should be aligned in all places. This will make sure that the user chooses the correct tag-id.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):
Not Applicable

Additional context
Not Applicable

Error observed during remediation at step 1.7.1 Ensure message of the day is configured properly

Hi @alivx ,

I am facing an issue during remediation where I get the below error at step 1.7.1 Ensure message of the day is configured properly. I am not able to identify where I need to configure inventory_dir and custom_motd_file_path.

> 1.7.4 Ensure permissions on /etc/motd are configured] *************************************************************
fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: ['{{ custom_motd_file_path }}', 'files/templates/motd.j2']: {{ inventory_dir }}/custom_templates/motd_custom.txt: 'inventory_dir' is undefined\n\nThe error appears to be in '/home/anupama/CIS-Ubuntu-20.04-Ansible/tasks/section_1_Initial_Setup.yaml': line 960, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n- block:\n  - name: \"1.7.1 Ensure message of the day is configured properly\\n\n    ^ here\nThis one looks easy to fix. It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote. For instance:\n\n    when: \"ok\" in result.stdout\n\nCould be written as:\n\n   when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n   when: \"'ok' in result.stdout\"\n"}

The folder structure is as advised on github

-run.yaml
-CIS-Ubuntu-20.04-Ansible

configuration of run.yaml

- name: Harden Server
  hosts: localhost  
  become: yes
  remote_user: root
  gather_facts: yes  
  roles:
     - CIS-Ubuntu-20.04-Ansible  

Thank you!
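For context, `inventory_dir` is only defined when Ansible is started with an inventory file (e.g. `-i inventory`); with the implicit localhost used in run.yaml there is none. One hedged workaround is defining the variable yourself so the role never has to expand `{{ inventory_dir }}` (the path below is illustrative):

```yaml
# e.g. in run.yaml vars, group_vars/all.yml, or passed with -e on the command line
custom_motd_file_path: "{{ playbook_dir }}/custom_templates/motd_custom.txt"
```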

Issue in #5.2.14: superfluous space in sshd algorithms

In test 5.2.14:
Superfluous space in list of sshd algorithms in /etc/ssh/sshd_config breaks sshd: "diffie-hellman- group14-sha256"

    line: "KexAlgorithms curve25519-sha256,[email protected],diffie-hellman- group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256"

Tag 1.1.1.5 & 1.1.1.6 Issue

Describe the bug

  • 1.1.1.6 is tagged as level_1_server and level_1_workstation when it says in the CIS document that it's for level 2.
  • 1.1.1.6 is also tagged as 1.1.1.5. When running 1.1.1.5, 1.1.1.6 gets skipped. When running 1.1.1.6, there's no output.

To Reproduce

  1. Run ansible-playbook -i host run.yaml --tags="1.1.1.5"
  2. Run ansible-playbook -i host run.yaml --tags="1.1.1.6"

Expected behavior

Console output for running 1.1.1.5:

root@ip-xxx-xx-xx-xx:/srv# ansible-playbook -i host run.yaml --tags="1.1.1.5"
[DEPRECATION WARNING]: "include" is deprecated, use include_tasks/import_tasks instead. This feature will be removed in version 2.16.
 Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [127.0.0.1] *********************************************************************************************************************

TASK [CIS-Ubuntu-20.04-Ansible : 1.1.1.5 Ensure mounting of hfsplus filesystems is disabled] *****************************************
changed: [localhost]

TASK [CIS-Ubuntu-20.04-Ansible : 1.1.1.5 Ensure mounting of hfsplus filesystems is disabled | modprobe] ******************************
ok: [localhost]

TASK [CIS-Ubuntu-20.04-Ansible : 1.1.1.6 Ensure mounting of squashfs filesystems is disabled] ****************************************
skipping: [localhost]

TASK [CIS-Ubuntu-20.04-Ansible : 1.1.1.6 Ensure mounting of squashfs filesystems is disabled | modprobe] *****************************
skipping: [localhost]

PLAY RECAP ***************************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0 

Console output for running 1.1.1.6:

root@ip-xxx-xx-xx-xx:/srv# ansible-playbook -i host run.yaml --tags="1.1.1.6"
[DEPRECATION WARNING]: "include" is deprecated, use include_tasks/import_tasks instead. This feature will be removed in version 2.16.
 Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [127.0.0.1] *********************************************************************************************************************

PLAY RECAP ***************************************************************************************************************************

Desktop (please complete the following information):

  • OS: Ubuntu 20.04
  • Python 3.9.5
  • Ansible 2.12.1

Fix Python 2 Deprecation warnings in Travis CI build.

Describe the bug
Small change to Travis CI build file to use Python 3 in anticipation of pip 21.0 deprecating support for Python 2 in January 2021.

To Reproduce
Trigger a build in Travis CI

Expected behavior
Deprecation warning should not be visible.

Typo in defaults/main.yml

Got an error on section 5.2.21 that the ssh_max_startups variable was not set. I discovered that the variable defined in defaults/main.yml is defined as "ssh_max_Startups:". I changed the capital S to lower case and it resolved the error.

TASK [CIS-Ubuntu-20.04-Ansible : 5.2.21 Ensure SSH MaxStartups is configured to {{ ssh_max_startups }}] **********************************
fatal: [vm-cr-test-ubu-01]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'ssh_max_startups' is undefined\n\nThe error appears to be in '/home/crussell/workspace/CIS-Ubuntu-20.04-Ansible/tasks/section_5_Access_Authentication_and_Authorization.yaml': line 451, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n# To protect a system from denial of service due to a large number of pending authentication connection attempts, use the rate limiting function of MaxStartups to protect availability of sshd logins and prevent overwhelming the daemon.\n- name: \"5.2.21 Ensure SSH MaxStartups is configured to {{ ssh_max_startups }}\"\n  ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n    with_items:\n      - {{ foo }}\n\nShould be written as:\n\n    with_items:\n      - \"{{ foo }}\"\n"}
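Until the typo is fixed upstream, the variable can be supplied under the lower-case name the task expects. The 10:30:60 value below is an assumption following common MaxStartups guidance, not taken from the role:

```yaml
# defaults/main.yml (or an override) -- lower-case to match {{ ssh_max_startups }}
ssh_max_startups: "10:30:60"
```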

Fix ansible-lint Warnings

ansible-lint Warnings:
. [E303] mount used in place of mount module
. [E204] Lines should be no longer than 160 chars
. [E305] Use shell only when shell functionality is required
. [E303] systemctl used in place of systemd module
. [E305] Use shell only when shell functionality is required
. [E301] Commands should not change things if nothing needs doing
. [E306] Shells that use pipes should set the pipefail option
. [E502] All tasks should be named
. [E502] All tasks should be named
. [E502] All tasks should be named
. [E204] Lines should be no longer than 160 chars
. [E601] Don't compare to literal True/False
. [E301] Commands should not change things if nothing needs doing
. [E502] All tasks should be named
. [E502] All tasks should be named
. [E102] No Jinja2 in when
. [E102] No Jinja2 in when
. [E301] Commands should not change things if nothing needs doing
. [E602] Don't compare to empty string
. [E102] No Jinja2 in when
. [E102] No Jinja2 in when
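As an example of the kind of change these warnings call for, E303 ("systemctl used in place of systemd module") is typically resolved like this (task and service names are illustrative):

```yaml
# Before (triggers E303):
# - command: systemctl enable rsyslog

# After: use the systemd module, which is idempotent and lint-clean
- name: Ensure rsyslog is enabled and running
  systemd:
    name: rsyslog
    enabled: yes
    state: started
```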

tag-id mentioned in the README.md and index.html are not in sync with the tag-ids assigned in the Roles yaml tasks.

Describe the bug
The tag-ids listed against the tasks in the Table of Roles are not in sync with the actual tag-ids used in the YAML declarations of the tasks.

This leads to incorrect hardening roles getting applied on the machine.

For example the tag-id 2.2.4 from README.md says it is for Ensure CUPS is not installed (Automated).
But if you check the code base then this tag-id is assigned for ./tasks/section_2_Services.yaml:449:- name: 2.2.4 Ensure telnet client is not installed

To Reproduce
Steps to reproduce the behavior:
grep the tag-id 2.2.4 in the code repository itself.

Expected behavior
The tag-ids should be uniquely assigned to each role task and kept aligned when mentioned in the README.md and index.html.

Screenshots
If applicable, add screenshots to help explain your problem.
image

Desktop (please complete the following information):
Not Applicable

Additional context
None

Travis CI Build fails to publish to ansible galaxy

Describe the bug
The Travis CI build uses the name of the repository to define the name of the role to upload.
The repository name contains characters that are not valid for an ansible role name.

To Reproduce
Steps to reproduce the behavior:

  1. Fork the repository.
  2. attempt to configure a Travis CI build.

Expected behavior
Forking the repository should result in a new repo with a working Travis CI build.

Workaround
Renaming the forked repository addresses naming incompatibility but results in failing tests as test.yml is hard coded to the repository name.

Missing Sudo Password Error

Describe the bug
When this role is run including all of the tasks, after it configures sudo in section 5, it can no longer perform any tasks, resulting in the error "Missing sudo password". Also, if I comment those tasks out or restructure the order of operations (move that part of section 5 to the bottom and run section 6 before 5), I cannot ssh into the machine.

To Reproduce
Steps to reproduce the behavior:

  1. Call role and run all steps.

Expected behavior
Error: "Missing sudo password"

Desktop (please complete the following information):

  • OS: [e.g. ubuntu 22.04 (My Desktop) but running role on Ubuntu 20.04 server.]
  • python version [e.g. python 3.10.4]
  • ansible version [e.g. 2.10.8]

Additional context
Add any other context about the problem here.

Update about section to 20.04

Describe the bug
Easy one: about section lists 18.04 while the rest of the project has updated to 20.04.

To Reproduce
Steps to reproduce the behavior:

  1. Go to https://github.com/alivx/CIS-Ubuntu-20.04-Ansible
  2. Top right "about" text.
  3. Screen Shot 2021-05-14 at 12 30 14 PM

Expected behavior
Update text to 20.04.

Screenshots
Screen Shot 2021-05-14 at 12 30 14 PM

Desktop (please complete the following information):

  • all / web browser.

Additional context
n/a

parameter error on task 1.7.1.2

TASK [CIS-Ubuntu-20.04-Ansible : 1.7.1.2 Ensure AppArmor is enabled in the bootloader configuration] ************************************************************************************************************
fatal: [192.168.x.x]: FAILED! => {"changed": false, "msg": "Unsupported parameters for (replace) module: follow Supported parameters include: after, attributes, backup, before, encoding, group, mode, owner, path, regexp, replace, selevel, serole, setype, seuser, unsafe_writes, validate"}

Error while running the command

ERROR! the role 'CIS-Ubuntu-20.04-Ansible' was not found in /opt/hardening/roles:/root/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles:/opt/hardening

The error appears to be in '/opt/hardening/run.yaml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

roles:
- { role: "CIS-Ubuntu-20.04-Ansible" }
^ here
This one looks easy to fix. It seems that there is a value started
with a quote, and the YAML parser is expecting to see the line ended
with the same kind of quote. For instance:

when: "ok" in result.stdout

Could be written as:

when: '"ok" in result.stdout'

Or equivalently:

when: "'ok' in result.stdout"

Not An Issue - Education

I am a rookie with Ansible. I have installed it and have cloned your repository.

I created a role using ansible-galaxy init and copied all of the content into that directory.

Now I am getting an error:
root@build-server:/etc/ansible/roles/CIS-Ubuntu-20.04-Ansible# ansible-playbook -i inventory run.yaml --list-tags
ERROR! Syntax Error while loading YAML.
did not find expected '-' indicator

The error appears to be in '/etc/ansible/roles/CIS-Ubuntu-20.04-Ansible/defaults/main.yml': line 12, column 1, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

Section 1 settings

disable_cramfs: yes
^ here

Thank you for your help
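For what it's worth, the pasted error points at a bare `Section 1 settings` line in defaults/main.yml: in YAML a comment must start with `#`, so the marker was likely lost while copying. A hedged guess at the intended content:

```yaml
# Section 1 settings

disable_cramfs: yes
```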
