Leapp is a tool provided by Red Hat for performing in-place upgrades of Red Hat Enterprise Linux (RHEL) systems. It lets you upgrade from one major version of RHEL to the next without reinstalling the entire system. This means you can move from, say, RHEL 7 to RHEL 8, or RHEL 8 to RHEL 9, while keeping your system configuration, custom repositories, and third-party applications intact.
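The Leapp workflow can be sketched as a short shell sequence. This is a dry-run outline, not a finished runbook: the `run` helper below only prints each command so the sequence can be read safely; replace its body with `"$@"` to actually execute on a registered RHEL 7 host. The `leapp preupgrade` and `leapp upgrade` subcommands are Leapp's real entry points, but exact package and repository names vary by release.

```shell
#!/bin/sh
# Outline of a manual RHEL 7 -> 8 in-place upgrade with Leapp.
# "run" only echoes each step so this outline is safe to execute as-is.
run() { echo "+ $*"; }

run yum -y update                          # start from a fully patched system
run yum -y install leapp                   # Leapp ships in the RHEL 7 Extras repo
run leapp preupgrade                       # writes /var/log/leapp/leapp-report.txt
run less /var/log/leapp/leapp-report.txt   # review and fix any inhibitors it reports
run leapp upgrade                          # downloads RHEL 8 content and stages the upgrade
run reboot                                 # the actual upgrade runs during this boot
```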
Author: techhadoop
Ansible playbook to run a shell script on your remote hosts:

```bash
#!/bin/bash
# example_script.sh
echo "Hello, this is a sample script."
echo "Current date and time: $(date)"
```

```yaml
---
- name: Run Shell Script on Remote Hosts
  hosts: all
  become: yes
  tasks:
    - name: Copy Shell Script to Remote Hosts
      copy:
        src: /path/to/example_script.sh
        dest: /tmp/example_script.sh
        mode: '0755'

    - name: Run Shell Script
      command: /tmp/example_script.sh

    - name: Remove Shell Script from Remote Hosts
      file:
        path: /tmp/example_script.sh
        state: absent
```
Explanation:
- Copy Shell Script: The copy module transfers the shell script from the local machine to the remote hosts. The mode parameter ensures the script is executable.
- Run Shell Script: The command module executes the shell script on the remote hosts.
- Remove Shell Script: The file module deletes the shell script from the remote hosts after execution, to clean up.
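As a side note, this copy-run-clean pattern can often be collapsed into a single task with Ansible's built-in script module, which transfers a local script to the remote host, runs it, and cleans it up automatically. A minimal sketch (the playbook name and script path are placeholders):

```yaml
- name: Run Shell Script via the script module
  hosts: all
  become: yes
  tasks:
    - name: Transfer and run the script in one step
      ansible.builtin.script: /path/to/example_script.sh
```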
Running the Playbook:
To run the playbook, save it (for example as run_script.yml) and invoke it with ansible-playbook against your inventory (both filenames here are placeholders):

ansible-playbook -i inventory run_script.yml
initramfs
What is initramfs?
initramfs stands for initial RAM filesystem. It plays a crucial role in the Linux boot process by providing a temporary root filesystem that is loaded into memory. This temporary root filesystem contains the necessary drivers, tools, and scripts needed to mount the real root filesystem and continue the boot process.
In simpler terms, it acts as a bridge between the bootloader and the main operating system, ensuring that the system has everything it needs to boot successfully.
Key Concepts of initramfs:
| Feature | Description |
|---|---|
| Temporary Filesystem | It’s loaded into memory as a temporary root filesystem. |
| Kernel Modules | Contains drivers (kernel modules) required to access disks, filesystems, and other hardware. |
| Scripts | Contains initialization scripts to prepare the system for booting the real root filesystem. |
| Critical Files | Includes essential tools like mount, udev, bash, and libraries. |
Key Functions of initramfs:
- Kernel Initialization: During the boot process, the Linux kernel loads the initramfs into memory.
- Loading Drivers: initramfs includes essential drivers needed to access hardware components, such as storage devices and filesystems.
- Mounting Root Filesystem: The primary function of initramfs is to mount the real root filesystem from a storage device (e.g., hard drive, SSD).
- Transitioning to Real Root: Once the real root filesystem is mounted, initramfs transitions control to the system's main init process, allowing the boot process to continue.
How initramfs Works:
- Bootloader Stage: The bootloader (e.g., GRUB) loads the Linux kernel and initramfs into memory.
- Kernel Stage: The kernel initializes and mounts the initramfs as the root filesystem.
- Init Stage: The init script or program within initramfs runs, performing tasks such as loading additional drivers, mounting filesystems, and locating the real root filesystem.
- Switch Root: The initramfs mounts the real root filesystem and switches control to it, allowing the system to boot normally.
Customizing initramfs:
You can customize the initramfs by including specific drivers, tools, and scripts. This is useful for scenarios where the default initramfs does not include the necessary components for your system.
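For example, on dracut-based systems (RHEL, CentOS, Fedora) you can drop a snippet into /etc/dracut.conf.d/ to force extra kernel modules or files into the image, then rebuild with dracut -f. The module and file names below are placeholders:

```bash
# /etc/dracut.conf.d/custom.conf
add_drivers+=" nvme dm_crypt "           # extra kernel modules to embed
install_items+=" /usr/local/bin/mytool " # extra files to copy in (placeholder path)
```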
Tools for Managing initramfs:
- mkinitramfs: A tool to create initramfs images.
- update-initramfs: A tool to update existing initramfs images.
Difference Between initramfs and initrd
| Feature | initramfs | initrd |
|---|---|---|
| Format | cpio archive (compressed) | Disk image (block device) |
| Mounting | Extracted directly into RAM as a rootfs | Mounted as a loop device |
| Flexibility | More flexible and faster | Less flexible, older technology |
Location of initramfs
On most Linux distributions, the initramfs file is located in the /boot directory:
```bash
ls /boot/initramfs-*.img
```
How to Rebuild initramfs
If you’ve made changes to the kernel, /etc/fstab, or storage configuration (e.g., LUKS, LVM), you may need to rebuild the initramfs.
Rebuild initramfs on RHEL/CentOS:

```bash
sudo dracut -f
```

Rebuild initramfs on Ubuntu/Debian:

```bash
sudo update-initramfs -u
```
Common Issues Related to initramfs
| Issue | Cause | Solution |
|---|---|---|
| Dropped into initramfs shell | Kernel can’t find the root filesystem | Check /etc/fstab, rebuild initramfs, or fix missing drivers. |
| Boot failure after kernel update | Missing or corrupt initramfs | Rebuild initramfs. |
| Filesystem not mounting | Incorrect or missing drivers in initramfs | Ensure necessary drivers are included and rebuild. |
By understanding how initramfs works, you can better appreciate its role in the Linux boot process and customize it to suit your needs.
Upgrade RHEL 7 to RHEL 8 – ansible playbook
```yaml
---
- name: Upgrade RHEL 7 to RHEL 8
  hosts: all
  become: true
  vars:
    leapp_repo: "rhel-7-server-extras-rpms"
  tasks:
    - name: Ensure system is up to date
      yum:
        name: '*'
        state: latest

    - name: Install the Leapp utility
      yum:
        name: leapp
        state: present
        enablerepo: "{{ leapp_repo }}"

    - name: Run the pre-upgrade check with Leapp
      shell: leapp preupgrade
      register: leapp_preupgrade
      ignore_errors: true

    - name: Check for pre-upgrade issues
      debug:
        msg: "Leapp pre-upgrade check output: {{ leapp_preupgrade.stdout }}"

    - name: Fix any issues identified by Leapp (manual step)
      pause:
        prompt: "Please review the pre-upgrade report at /var/log/leapp/leapp-report.txt and fix any blocking issues. Press Enter to continue."

    - name: Proceed with the upgrade
      shell: leapp upgrade
      ignore_errors: true

    - name: Reboot the server to complete the upgrade
      reboot:
        reboot_timeout: 1200

    - name: Verify the OS version after reboot
      shell: cat /etc/redhat-release
      register: os_version

    - name: Display the new OS version
      debug:
        msg: "The server has been successfully upgraded to: {{ os_version.stdout }}"
```
How to use LUKS data disk encryption in MapR
MapR (now part of HPE Ezmeral) supports encryption at various levels, but using LUKS (Linux Unified Key Setup) encryption for data disks is a system-level operation that must be done outside of MapR’s native encryption features. Here’s a step-by-step guide to set up LUKS disk encryption on a MapR node and ensure MapR can access the encrypted disk after unlocking it.
Steps to Set Up LUKS Disk Encryption for MapR Data Disks
1. Identify the Disk to Encrypt
Find the disk you want to encrypt using the lsblk or fdisk command:
```bash
lsblk
fdisk -l
```
For example, if the disk to be encrypted is /dev/sdb, use that in the following steps.
2. Install the Necessary Packages
Ensure you have the required tools to set up LUKS encryption:
```bash
sudo apt-get install cryptsetup   # For Ubuntu/Debian
sudo yum install cryptsetup       # For CentOS/RHEL
```
3. Set Up LUKS Encryption on the Disk
Run the following command to initialize the disk with LUKS encryption:
```bash
sudo cryptsetup luksFormat /dev/sdb
```
You’ll be prompted to confirm the operation and set a passphrase.
⚠️ Warning: This will erase all existing data on the disk.
4. Open and Map the Encrypted Disk
Unlock the encrypted disk and map it to a device:

```bash
sudo cryptsetup open /dev/sdb mapr_data_disk
```

You can verify that the encrypted device is available:

```bash
lsblk
```
5. Format the Encrypted Disk
Format the newly mapped device with a filesystem that MapR supports (typically ext4 or xfs):
```bash
sudo mkfs.ext4 /dev/mapper/mapr_data_disk
```
6. Mount the Encrypted Disk
Create a mount point and mount the encrypted disk:
```bash
sudo mkdir -p /opt/mapr/data
sudo mount /dev/mapper/mapr_data_disk /opt/mapr/data
```
7. Make the Mount Persistent
Edit the /etc/crypttab file to automatically unlock the disk at boot:
```bash
echo "mapr_data_disk /dev/sdb none luks" | sudo tee -a /etc/crypttab
```
Then, add an entry to /etc/fstab to mount the disk automatically after it is unlocked:
```bash
echo "/dev/mapper/mapr_data_disk /opt/mapr/data ext4 defaults 0 0" | sudo tee -a /etc/fstab
```
8. Ensure MapR Can Access the Disk
Make sure the MapR user has the necessary permissions to access the encrypted disk:
```bash
sudo chown -R mapr:mapr /opt/mapr/data
```
9. Test the Setup
Reboot the system to ensure the encrypted disk is unlocked and mounted correctly:
```bash
sudo reboot
```
After the system reboots, verify that the disk is unlocked and mounted:
```bash
lsblk
df -h
```
10. Verify MapR Storage Pools
After the encrypted disk is mounted, add it to the MapR storage pool:
```bash
maprcli disk add -server <server_name> -disks /dev/mapper/mapr_data_disk
```
Additional Considerations
- Passphrase Management: Consider integrating with a key management system (KMS) to avoid manual passphrase entry.
- Performance Impact: Encryption may introduce some performance overhead, so test accordingly.
- Backup Configuration Files: Ensure you back up /etc/crypttab and /etc/fstab for disaster recovery.
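On the passphrase-management point, a common low-tech alternative to interactive unlock is a root-only keyfile enrolled as a second LUKS key slot and referenced from /etc/crypttab. A sketch follows; the keyfile path is an example, and the luksAddKey/crypttab lines are shown as comments because they require root and a real device:

```shell
#!/bin/sh
# Generate a random keyfile that only root can read (path is an example).
set -e
KEYFILE=${KEYFILE:-/tmp/mapr_luks.key}   # in production, use e.g. /root/mapr_luks.key
dd if=/dev/urandom of="$KEYFILE" bs=64 count=1 2>/dev/null
chmod 600 "$KEYFILE"

# Enroll the keyfile as an extra key slot and reference it at boot:
#   sudo cryptsetup luksAddKey /dev/sdb "$KEYFILE"
#   echo "mapr_data_disk /dev/sdb $KEYFILE luks" | sudo tee -a /etc/crypttab
ls -l "$KEYFILE"
```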
Kong – add custom plugins (ansible playbook)
```yaml
---
- name: Deploy and enable a custom plugin in Kong
  hosts: kong_servers
  become: yes
  vars:
    plugin_name: "my_custom_plugin"
    plugin_source_path: "/path/to/local/plugin"              # Local path to the plugin code
    kong_plugin_dir: "/usr/local/share/lua/5.1/kong/plugins" # Default Kong plugin directory
  tasks:
    - name: Ensure Kong plugin directory exists
      file:
        path: "{{ kong_plugin_dir }}/{{ plugin_name }}"
        state: directory
        mode: '0755'

    - name: Copy plugin files to Kong plugin directory
      copy:
        src: "{{ plugin_source_path }}/"
        dest: "{{ kong_plugin_dir }}/{{ plugin_name }}/"
        mode: '0644'

    - name: Verify plugin files were copied
      shell: ls -la "{{ kong_plugin_dir }}/{{ plugin_name }}"
      register: verify_plugin_copy

    - debug:
        var: verify_plugin_copy.stdout

    - name: Update Kong configuration to include the custom plugin
      lineinfile:
        path: /etc/kong/kong.conf
        regexp: "^plugins ="
        line: "plugins = bundled,{{ plugin_name }}"
        state: present
      notify: restart kong

    # "kong check" validates a kong.conf file ("kong config parse" is for
    # declarative config files, not kong.conf).
    - name: Verify the Kong configuration is still valid
      shell: kong check /etc/kong/kong.conf
      register: config_check

    - debug:
        var: config_check.stdout

  handlers:
    - name: restart kong
      service:
        name: kong
        state: restarted
```
LUKS – disks encrypt
```bash
#!/bin/bash
# Variables
DISKS=("/dev/sdb" "/dev/sdc")            # List of disks to encrypt
KEYFILE="/etc/luks/keyfile"              # Keyfile path
MOUNT_POINTS=("/mnt/disk1" "/mnt/disk2") # Corresponding mount points

# Check for root privileges
if [ "$(id -u)" -ne 0 ]; then
    echo "This script must be run as root. Exiting."
    exit 1
fi

# Create the keyfile if it doesn't exist
if [ ! -f "$KEYFILE" ]; then
    echo "Creating LUKS keyfile..."
    mkdir -p "$(dirname "$KEYFILE")"
    dd if=/dev/urandom of="$KEYFILE" bs=4096 count=1
    chmod 600 "$KEYFILE"
fi

# Function to encrypt and set up a disk
encrypt_disk() {
    local DISK=$1
    local MAPPER_NAME=$2
    local MOUNT_POINT=$3

    echo "Processing $DISK..."

    # Check if the disk is already encrypted
    if cryptsetup isLuks "$DISK"; then
        echo "$DISK is already encrypted. Skipping."
        return
    fi

    # Format the disk with LUKS encryption
    echo "Encrypting $DISK..."
    cryptsetup luksFormat "$DISK" "$KEYFILE"
    if [ $? -ne 0 ]; then
        echo "Failed to encrypt $DISK. Exiting."
        exit 1
    fi

    # Open the encrypted disk
    echo "Opening $DISK..."
    cryptsetup luksOpen "$DISK" "$MAPPER_NAME" --key-file "$KEYFILE"

    # Create a filesystem on the encrypted disk
    echo "Creating filesystem on /dev/mapper/$MAPPER_NAME..."
    mkfs.ext4 "/dev/mapper/$MAPPER_NAME"

    # Create the mount point if it doesn't exist
    mkdir -p "$MOUNT_POINT"

    # Add entry to /etc/fstab for automatic mounting
    echo "Adding $DISK to /etc/fstab..."
    UUID=$(blkid -s UUID -o value "/dev/mapper/$MAPPER_NAME")
    echo "UUID=$UUID $MOUNT_POINT ext4 defaults 0 2" >> /etc/fstab

    # Mount the disk
    echo "Mounting $MOUNT_POINT..."
    mount "$MOUNT_POINT"
}

# Loop through disks and encrypt each one
for i in "${!DISKS[@]}"; do
    DISK="${DISKS[$i]}"
    MAPPER_NAME="luks_disk_$i"
    MOUNT_POINT="${MOUNT_POINTS[$i]}"
    encrypt_disk "$DISK" "$MAPPER_NAME" "$MOUNT_POINT"
done

echo "All disks have been encrypted and mounted."
```
EKS – subnet size
To determine the appropriate subnet class for an Amazon EKS (Elastic Kubernetes Service) cluster with 5 nodes, it’s important to account for both the nodes and the additional IP addresses needed for pods and other resources. Here’s a recommended approach:
Calculation and Considerations:
- EKS Node IP Addresses:
  - Each node will need its own IP address.
  - For 5 nodes, that’s 5 IP addresses.
- Pod IP Addresses:
  - By default, the Amazon VPC CNI plugin assigns one IP address per pod from the node’s subnet.
  - The number of pods per node depends on your instance type and the configuration of your Kubernetes cluster.
  - For example, if you expect each node to host around 20 pods, you’ll need approximately 100 IP addresses for pods.
- Additional Resources:
  - Include IP addresses for other resources like load balancers, services, etc.
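With the default VPC CNI, the per-node pod limit can be estimated from the instance's ENI capacity: max pods = ENIs × (IPv4 addresses per ENI − 1) + 2, where each ENI keeps one IP for itself and the +2 covers host-networking pods. A quick check of the formula (the m5.large figures, 3 ENIs × 10 IPv4 addresses, come from the AWS instance specs):

```shell
#!/bin/sh
# Estimate max pods per node for the default AWS VPC CNI.
max_pods() { echo $(( $1 * ($2 - 1) + 2 )); }

max_pods 3 10   # m5.large: 3 ENIs x 10 IPv4 addresses -> 29
```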
Subnet Size Recommendation:
A /24 subnet provides 254 usable IP addresses, which is typically sufficient for a small EKS cluster with 5 nodes.
Example Calculation:
- Nodes: 5 IP addresses
- Pods: 100 IP addresses (assuming 20 pods per node)
- Additional Resources: 10 IP addresses (for services, load balancers, etc.)
Total IP Addresses Needed: 5 (nodes) + 100 (pods) + 10 (resources) = 115 IP addresses.
Recommended Subnet Size:
A /24 subnet should be sufficient for this setup:
- CIDR Notation: 192.168.0.0/24
- Total IP Addresses: 256
- Usable IP Addresses: 254
Example Configuration:
- Subnet 1: 192.168.0.0/24
Reasons to Choose a Bigger Subnet (e.g., /22 or /20):
- Future Scalability: If you anticipate significant growth in the number of nodes or pods, a larger subnet will provide ample IP addresses for future expansion without the need to reconfigure your network.
- Flexibility: More IP addresses give you flexibility to add additional resources such as load balancers, services, or new applications.
- Avoiding Exhaustion: Ensuring you have a large pool of IP addresses can prevent issues related to IP address exhaustion, which can disrupt your cluster’s operations.
Example Subnet Sizes:
- /22 Subnet:
  - Total IP Addresses: 1,024
  - Usable IP Addresses: 1,022
- /20 Subnet:
  - Total IP Addresses: 4,096
  - Usable IP Addresses: 4,094
When to Consider Smaller Subnets (e.g., /24):
- Small Deployments: If your EKS cluster is small and you do not expect significant growth, a /24 subnet might be sufficient.
- Cost Efficiency: Smaller subnets can sometimes be more cost-effective in environments where IP address scarcity is not a concern.
For an EKS cluster with 5 nodes, I would recommend going with a /22 subnet. This gives you a healthy margin of IP addresses for your nodes, pods, and additional resources while providing room for future growth.
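The usable-address counts quoted above follow from 2^(32 − prefix) − 2, since the network and broadcast addresses are reserved (note that inside a VPC, AWS actually reserves five addresses per subnet, so real headroom is slightly smaller):

```shell
#!/bin/sh
# Usable IPv4 addresses for a given prefix length (minus network + broadcast).
usable() { echo $(( (1 << (32 - $1)) - 2 )); }

usable 24   # -> 254
usable 22   # -> 1022
usable 20   # -> 4094
```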
Kong – add admin user
To add an admin to Kong Manager via the Admin API, you’ll need to follow these steps:
```bash
# Disable RBAC
curl -X PUT http://localhost:8001/rbac/enable --data '{"enabled": false}'

# Create a new admin user
curl -X POST http://localhost:8001/admins \
  --header "Content-Type: application/json" \
  --data '{
    "username": "newadmin",
    "email": "newadmin@example.com",
    "password": "yourpassword"
  }'

# Assign roles to the new admin
curl -X POST http://localhost:8001/admins/newadmin/roles \
  --header "Content-Type: application/json" \
  --data '{
    "roles": ["super-admin"]
  }'

# Enable RBAC
curl -X PUT http://localhost:8001/rbac/enable --data '{"enabled": true}'
```
Encrypt multiple disks with LUKS
```yaml
---
- name: Encrypt multiple disks with LUKS
  hosts: all
  become: yes
  vars:
    luks_disks:                     # List of disks to encrypt
      - /dev/sdb
      - /dev/sdc
    luks_password: secret_password  # Replace or use a vault/encrypted variable
    mount_points:                   # List of mount points corresponding to the disks
      - /mnt/disk1
      - /mnt/disk2
  tasks:
    - name: Ensure required packages are installed
      ansible.builtin.yum:
        name:
          - cryptsetup
        state: present

    # The shell module is required for these tasks: the command module does not
    # support pipes. "--key-file=-" makes cryptsetup read the passphrase from stdin.
    - name: Create LUKS encryption on disks
      ansible.builtin.shell:
        cmd: "printf '%s' '{{ luks_password }}' | cryptsetup luksFormat {{ item }} --key-file=- -q"
      loop: "{{ luks_disks }}"

    - name: Open LUKS-encrypted disks
      ansible.builtin.shell:
        cmd: "printf '%s' '{{ luks_password }}' | cryptsetup luksOpen {{ item }} luks_{{ item | regex_replace('/dev/', '') }} --key-file=-"
      loop: "{{ luks_disks }}"

    - name: Format the LUKS-encrypted devices with ext4 filesystem
      ansible.builtin.command:
        cmd: "mkfs.ext4 /dev/mapper/luks_{{ item | regex_replace('/dev/', '') }}"
      loop: "{{ luks_disks }}"

    - name: Create mount points
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
      loop: "{{ mount_points }}"

    - name: Mount the LUKS devices to mount points
      ansible.builtin.mount:
        path: "{{ item.1 }}"
        src: "/dev/mapper/luks_{{ item.0 | regex_replace('/dev/', '') }}"
        fstype: ext4
        state: mounted
      loop: "{{ luks_disks | zip(mount_points) | list }}"

    - name: Add entries to /etc/crypttab
      ansible.builtin.lineinfile:
        path: /etc/crypttab
        line: "luks_{{ item | regex_replace('/dev/', '') }} {{ item }} none luks"
        create: yes
      loop: "{{ luks_disks }}"

    - name: Add entries to /etc/fstab
      ansible.builtin.lineinfile:
        path: /etc/fstab
        line: "/dev/mapper/luks_{{ item.0 | regex_replace('/dev/', '') }} {{ item.1 }} ext4 defaults 0 0"
        create: yes
      loop: "{{ luks_disks | zip(mount_points) | list }}"
```
## Output

Example output from the LUKS disk encryption script above:

```text
Processing /dev/sdc...
Encrypting /dev/sdc...

WARNING!
========
This will overwrite data on /dev/sdc irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Opening /dev/sdc...
Device luks_disk_0 already exists.
Creating filesystem on /dev/mapper/luks_disk_0...
mke2fs 1.46.5 (30-Dec-2021)
/dev/mapper/luks_disk_0 is mounted; will not make a filesystem here!
Adding /dev/sdc to /etc/fstab...
Mounting /mnt/disk2...
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.
Processing /dev/sdd...
Encrypting /dev/sdd...

WARNING!
========
This will overwrite data on /dev/sdd irrevocably.

Are you sure? (Type 'yes' in capital letters): YES
Opening /dev/sdd...
Creating filesystem on /dev/mapper/luks_disk_1...
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 2617344 4k blocks and 655360 inodes
Filesystem UUID: d0bb5504-abf9-4e00-8670-59d8fa92b883
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Adding /dev/sdd to /etc/fstab...
Mounting /mnt/disk3...
mount: (hint) your fstab has been modified, but systemd still uses
       the old version; use 'systemctl daemon-reload' to reload.
All disks have been encrypted and mounted.
```