Azure VM stress testing: CPU, Memory, and I/O

If you need to put load on an Azure VM for testing (like checking if your Azure Monitor Alerts or Autoscale settings are working), using a Perl script is a classic, lightweight way to do it.

Below are three scripts tailored for different types of “stress”: CPU, Memory, and I/O.


1. CPU Load Script

This script performs high-intensity mathematical calculations in a loop to pin the processor.

Perl

#!/usr/bin/perl
# CPU Stress Test
use strict;
use warnings;
print "Starting CPU Load... Press Ctrl+C to stop.\n";
# This will run on one core.
# To stress multiple cores, run this script multiple times in the background.
while (1) {
    my $x = sqrt(rand(1000000)) * sin(rand(1000000));
}
  • Pro Tip: If your VM has 4 cores and you want to hit 100% total CPU, run this 4 times: perl cpu_load.pl & perl cpu_load.pl & perl cpu_load.pl & perl cpu_load.pl &
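
If you would rather not count cores by hand, a small wrapper can launch one instance per core (a sketch; it assumes the script above is saved as cpu_load.pl):

Bash

#!/bin/bash
# Launch one copy of the Perl CPU burner per core; nproc reports the core count.
CORES=$(nproc)
for i in $(seq 1 "$CORES"); do
    perl cpu_load.pl &
done
echo "Started $CORES workers. Stop them all with: pkill -f cpu_load.pl"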

2. Memory (RAM) Load Script

This script creates a massive string and keeps appending to it until it has consumed the requested amount of RAM. Warning: be careful with this; if it exhausts available RAM, the Linux OOM (Out of Memory) killer will start terminating processes, and the VM can become unresponsive.

Perl

#!/usr/bin/perl
# Memory Stress Test
use strict;
use warnings;
print "How many MB of RAM should I consume? ";
my $mb_to_hit = <STDIN>;
chomp($mb_to_hit);
my $data = "";
my $chunk = "A" x (1024 * 1024); # 1MB string chunk
print "Allocating memory...\n";
for (1..$mb_to_hit) {
    $data .= $chunk;
    print "Currently holding approx $_ MB\n" if $_ % 100 == 0;
}
print "Memory allocated. Press Enter to release memory and exit.";
<STDIN>;
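
While the script runs, you can watch the allocation from a second terminal with standard tools (nothing Azure-specific here):

Bash

# Refresh free-memory statistics every 2 seconds.
watch -n 2 free -m

# If the OOM killer did intervene, its trace appears in the kernel log:
dmesg | grep -i "out of memory"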

3. I/O (Disk) Load Script

This script continuously writes and deletes a file to stress the Virtual Machine’s disk IOPS (Input/Output Operations Per Second).

Perl

#!/usr/bin/perl
# Disk I/O Stress Test
use strict;
use warnings;
my $filename = "test_load_file.tmp";
print "Starting Disk I/O load... Press Ctrl+C to stop.\n";
while (1) {
    open(my $fh, '>', $filename) or die "Could not open file: $!";
    print $fh "This is a stress test line\n" x 10000;
    close $fh;
    unlink($filename); # Deletes the file immediately to repeat the write
}
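
To confirm the script is actually generating IOPS, watch the disk from another terminal (iostat ships in the sysstat package; look at the w/s and %util columns):

Bash

# Extended per-device statistics, refreshed every 2 seconds.
iostat -x 2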

💡 The “Cloud Native” Alternative: stress-ng

While Perl scripts are great, most Azure engineers use a tool called stress-ng. It is purpose-built for this job and gives you much more granular control over exactly how many cores or how much RAM you hit.

To install and run (Ubuntu/Debian):

Bash

sudo apt update && sudo apt install stress-ng -y
# Stress 2 CPUs for 60 seconds
stress-ng --cpu 2 --timeout 60s
# Stress 1GB of RAM
stress-ng --vm 1 --vm-bytes 1G --timeout 60s
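# Stress disk I/O (1 worker continually writing/reading temp files, 1GB per worker),
# mirroring script #3 above
stress-ng --hdd 1 --hdd-bytes 1G --timeout 60s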

🛑 Important Reminder

When putting load on a VM, keep a separate window open with the command top or htop (if installed) to monitor the resource usage in real-time. If you are testing Azure Autoscale, remember that it usually takes 5–10 minutes for the Azure portal to reflect the spike and trigger the scaling action!

Kong HA

Setting up Kong Gateway with high availability (HA) on-premise on bare metal servers involves several steps. Below is a comprehensive guide to achieve this setup:

Prerequisites

  1. Bare Metal Servers: Ensure you have multiple physical servers available.
  2. Network Configuration: Ensure all servers are on the same network and can communicate with each other.
  3. Data Store: Kong Gateway requires a shared data store like PostgreSQL or Cassandra. Ensure you have a highly available setup for your data store.
  4. Load Balancer: A hardware or software load balancer to distribute traffic across Kong Gateway nodes.

Step-by-Step Guide

1. Install PostgreSQL for the Shared Data Store

  1. Install PostgreSQL:

sudo apt-get update

sudo apt-get install -y postgresql postgresql-contrib

  2. Configure PostgreSQL for High Availability:
    • Set up streaming replication between the primary and one or more standby instances (a sketch follows this list).
    • Ensure that the primary and standby instances are configured correctly.
  3. Create a Kong Database:

sudo -u postgres psql

CREATE DATABASE kong;

CREATE USER kong WITH PASSWORD 'yourpassword';

GRANT ALL PRIVILEGES ON DATABASE kong TO kong;

\q
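
For step 2 above, here is a minimal streaming-replication sketch (assumptions: PostgreSQL 12+ with Debian-style paths, primary at 192.168.1.10, and a replication user named replicator; adapt paths and addresses to your environment):

# On the primary: enable WAL shipping and allow replication connections.
echo "wal_level = replica" | sudo tee -a /etc/postgresql/12/main/postgresql.conf
echo "host replication replicator 192.168.1.0/24 md5" | sudo tee -a /etc/postgresql/12/main/pg_hba.conf
sudo systemctl restart postgresql

# On the standby: clone the primary; -R writes the connection settings and
# creates standby.signal so the instance starts as a standby.
sudo -u postgres pg_basebackup -h 192.168.1.10 -U replicator \
    -D /var/lib/postgresql/12/main -P -R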

2. Install Kong Gateway on Each Server

  1. Install Kong Gateway:

sudo apt-get update

sudo apt-get install -y apt-transport-https

curl -s https://packages.konghq.com/keys/kong.key | sudo apt-key add -

echo "deb https://packages.konghq.com/debian/ $(lsb_release -sc) main" | sudo tee -a /etc/apt/sources.list

sudo apt-get update

sudo apt-get install -y kong

  2. Configure Kong Gateway:
    • Create a kong.conf file on each server with the following configuration:

database = postgres

pg_host = <primary_postgresql_host>

pg_port = 5432

pg_user = kong

pg_password = yourpassword

pg_database = kong

  3. Start Kong Gateway (run the migrations bootstrap once, from a single node; the remaining nodes only need kong start):

kong migrations bootstrap

kong start

3. Configure Load Balancer

  1. Set Up a Load Balancer:
    • Configure your load balancer to distribute traffic across the Kong Gateway nodes.
    • Ensure the load balancer is set up for high availability (e.g., using a failover IP or DNS).
  2. Configure Health Checks:
    • Configure health checks on the load balancer to monitor the health of each Kong Gateway node.
    • Ensure that traffic is only sent to healthy nodes.
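
As a concrete example, here is a minimal HAProxy sketch for this layer (hypothetical node IPs; Kong's proxy listens on port 8000 by default; the plain check directive performs a TCP health check, and you can switch to HTTP checks against Kong's Status API if you enable it):

sudo tee -a /etc/haproxy/haproxy.cfg <<'EOF'
frontend kong_frontend
    bind *:80
    default_backend kong_nodes

backend kong_nodes
    balance roundrobin
    server kong1 192.168.1.21:8000 check
    server kong2 192.168.1.22:8000 check
EOF
sudo systemctl reload haproxy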

4. Set Up Failover Mechanism

  1. Database Failover:
    • Ensure your PostgreSQL setup has a failover mechanism in place (e.g., using Patroni or pgpool-II).
  2. Kong Gateway Failover:
    • Ensure that the load balancer can detect when a Kong Gateway node is down and redirect traffic to other nodes.

5. Implement Monitoring and Alerts

  1. Set Up Monitoring:
    • Use tools like Prometheus and Grafana to monitor the health and performance of your Kong Gateway nodes and PostgreSQL database.
  2. Set Up Alerts:
    • Configure alerts to notify you of any issues with the Kong Gateway nodes or the PostgreSQL database.

Example Configuration Files

PostgreSQL Configuration (pg_hba.conf):

# TYPE  DATABASE        USER            ADDRESS                 METHOD

host    kong            kong            192.168.1.0/24          md5

Kong Gateway Configuration (kong.conf):

database = postgres

pg_host = 192.168.1.10

pg_port = 5432

pg_user = kong

pg_password = yourpassword

pg_database = kong

Summary

By following these steps, you can set up a highly available Kong Gateway on bare metal servers. This setup ensures that your API gateway remains reliable and performs well under various conditions. Make sure to thoroughly test your setup to ensure that failover and load balancing work as expected.

Setting Up a YUM Repository (RHEL/CentOS)


1. Install and Configure a Web Server

Most repositories are served via HTTP. In this example, we’ll use Apache (httpd):

  1. Install Apache:

sudo yum install httpd -y

  2. Enable and start Apache:

sudo systemctl enable httpd
sudo systemctl start httpd

  3. Verify that Apache is running:
    Open your browser and navigate to http://<your-server-IP>/ to see the Apache welcome page.

2. Create the Repository Directory

  1. Make a directory to hold your repository files:

sudo mkdir -p /var/www/html/myrepo

  2. Copy your RPM packages into this directory:

sudo cp /path/to/your/rpms/*.rpm /var/www/html/myrepo/

3. Generate Repository Metadata

  1. Install the createrepo tool:

sudo yum install createrepo -y

  2. Run createrepo in your repository directory:

sudo createrepo /var/www/html/myrepo

    This command creates a repodata directory with all the necessary metadata for the repository.

4. Set Proper Permissions

  1. Ensure Apache can read the files:

sudo chmod -R 755 /var/www/html/myrepo

  2. If SELinux is enabled, adjust the file context:

sudo chcon -R -t httpd_sys_content_t /var/www/html/myrepo

5. (Optional) Configure the Firewall

Make sure HTTP traffic is allowed:

  • For firewalld:

sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload
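
Finally, on each client machine, drop a .repo file pointing at the new repository (a sketch; substitute your server's IP or hostname, and enable gpgcheck if you sign your packages):

sudo tee /etc/yum.repos.d/myrepo.repo <<'EOF'
[myrepo]
name=My Local Repository
baseurl=http://<your-server-IP>/myrepo
enabled=1
gpgcheck=0
EOF
sudo yum clean all
sudo yum repolist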

LEAPP upgrade – multiple kernel issue

Newest installed kernel not in use

If the newest installed kernel is not in use even after rebooting, your system is booting into an older kernel version. This usually happens because GRUB's default boot entry still points at an older kernel, or because the GRUB configuration was not regenerated after the new kernel was installed.

Here’s how you can resolve the issue step-by-step:


1. Verify the Currently Running Kernel

  • Check the kernel currently in use:

uname -r

  • Example output:

3.10.0-1127.el7.x86_64

  • Compare this with the newest installed kernel version (see step 2).

2. List All Installed Kernels

  • List all installed kernel packages:

rpm -qa | grep kernel | grep -v firmware

  • Example output:

kernel-3.10.0-1127.el7.x86_64

kernel-3.10.0-1160.el7.x86_64

  • Identify the newest kernel (e.g., 3.10.0-1160.el7.x86_64).

3. Check GRUB Default Boot Entry

  • Display the current GRUB default boot entry:

grub2-editenv list

  • You should see something like this:

saved_entry=0

  • If the saved_entry is pointing to an older kernel, the system is booting into it by default.

4. Update GRUB to Use the Newest Kernel

Option A: Use the First Kernel (Default GRUB Behavior)

  • Edit the GRUB configuration:

sudo vi /etc/default/grub

  • Set the GRUB_DEFAULT value to 0 (the first kernel in the GRUB list):

GRUB_DEFAULT=0

  • Save and exit the file.
  • Regenerate the GRUB configuration:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

Option B: Manually Specify the Newest Kernel

  • Run this command to view all available kernel entries:

awk -F\' '$1=="menuentry " {print i++ " : " $2}' /boot/grub2/grub.cfg

  • Example output:

0 : CentOS Linux (3.10.0-1160.el7.x86_64)

1 : CentOS Linux (3.10.0-1127.el7.x86_64)

2 : CentOS Linux (0-rescue-…)

  • Set the default boot entry to the number of the newest kernel (e.g., 0):

sudo grub2-set-default 0

  • Verify the setting:

grub2-editenv list


5. Reboot the System

  • Reboot your system to load the correct kernel:

sudo reboot

  • After reboot, confirm that the new kernel is in use:

uname -r


6. Remove Old Kernels (Optional)

  • To prevent confusion in the future, you can remove unused older kernels:

sudo package-cleanup --oldkernels --count=1

  • This retains only the most recent kernel.

7. Troubleshooting

  • Manually Select the Kernel: If the system still boots into the wrong kernel, you can manually select the desired kernel during the GRUB menu at boot time. To enable the GRUB menu:
    1. Edit /etc/default/grub:

sudo vi /etc/default/grub

  • Set GRUB_TIMEOUT to a non-zero value (e.g., 5):

GRUB_TIMEOUT=5

  • Save and regenerate the GRUB configuration:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg

  • Kernel Missing After Update: Ensure the new kernel is properly installed. Reinstall it if necessary:

sudo yum install kernel

Ansible – host list

The error “host list declined parsing host file as it did not pass its verify_file() method” occurs when Ansible cannot properly parse your inventory file. This is typically due to an invalid format or incorrect file structure. Here’s how to troubleshoot and fix it:


Step 1: Verify Inventory File Format

Correct INI Format Example

Ensure your inventory file uses the proper INI-style syntax:

[webservers]

web1 ansible_host=192.168.1.10 ansible_user=root

web2 ansible_host=192.168.1.11 ansible_user=root

[dbservers]

db1 ansible_host=192.168.1.20 ansible_user=root

Correct YAML Format Example

If using a YAML-based inventory file, ensure it follows the correct structure:

all:

  hosts:

    web1:

      ansible_host: 192.168.1.10

      ansible_user: root

    web2:

      ansible_host: 192.168.1.11

      ansible_user: root

  children:

    dbservers:

      hosts:

        db1:

          ansible_host: 192.168.1.20

          ansible_user: root


Step 2: Check File Extension

  • For INI-style inventory, use .ini or no extension (e.g., inventory).
  • For YAML-style inventory, use .yaml or .yml.

Step 3: Test the Inventory File

Use the ansible-inventory command to validate the inventory file:

ansible-inventory -i inventory --list

  • Replace inventory with the path to your inventory file.
  • If there’s an error, it will provide details about what’s wrong.

Step 4: Use Explicit Inventory Path

When running an Ansible command or playbook, explicitly specify the inventory file:

ansible all -i /path/to/inventory -m ping


Step 5: Check Syntax Errors

  1. Ensure there are no trailing spaces or unexpected characters in your inventory file.
  2. Use a linter for YAML files if you suspect a YAML formatting issue:

yamllint /path/to/inventory.yaml


Step 6: Set Permissions

Ensure the inventory file has the correct permissions so that Ansible can read it:

chmod 644 /path/to/inventory

Ansible playbook to run a shell script on your remote hosts:

#!/bin/bash
# example_script.sh

echo "Hello, this is a sample script."
echo "Current date and time: $(date)"

The playbook:

---
- name: Run Shell Script on Remote Hosts
  hosts: all
  become: yes
  tasks:
    - name: Copy Shell Script to Remote Hosts
      copy:
        src: /path/to/example_script.sh
        dest: /tmp/example_script.sh
        mode: '0755'

    - name: Run Shell Script
      command: /tmp/example_script.sh

    - name: Remove Shell Script from Remote Hosts
      file:
        path: /tmp/example_script.sh
        state: absent

Explanation:
Copy Shell Script: The copy module is used to transfer the shell script from the local machine to the remote hosts. The mode parameter ensures that the script is executable.

Run Shell Script: The command module is used to execute the shell script on the remote hosts.

Remove Shell Script: The file module is used to delete the shell script from the remote hosts after execution to clean up.

Running the Playbook:
To run the playbook, use the following command:
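
ansible-playbook -i /path/to/inventory run_script.yml

(run_script.yml is an assumed filename for the playbook above; substitute whatever you saved it as.)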

initramfs

What is initramfs?

initramfs stands for initial RAM filesystem. It plays a crucial role in the Linux boot process by providing a temporary root filesystem that is loaded into memory. This temporary root filesystem contains the necessary drivers, tools, and scripts needed to mount the real root filesystem and continue the boot process.

In simpler terms, it acts as a bridge between the bootloader and the main operating system, ensuring that the system has everything it needs to boot successfully.

Key Concepts of initramfs:

  • Temporary Filesystem: loaded into memory as a temporary root filesystem.
  • Kernel Modules: contains the drivers (kernel modules) required to access disks, filesystems, and other hardware.
  • Scripts: contains initialization scripts that prepare the system for booting the real root filesystem.
  • Critical Files: includes essential tools like mount, udev, bash, and supporting libraries.

Key Functions of initramfs:

  1. Kernel Initialization: During the boot process, the Linux kernel loads the initramfs into memory.
  2. Loading Drivers: initramfs includes essential drivers needed to access hardware components, such as storage devices and filesystems.
  3. Mounting Root Filesystem: The primary function of initramfs is to mount the real root filesystem from a storage device (e.g., hard drive, SSD).
  4. Transitioning to Real Root: Once the real root filesystem is mounted, the initramfs transitions control to the system’s main init process, allowing the boot process to continue.

How initramfs Works:

  1. Bootloader Stage: The bootloader (e.g., GRUB) loads the Linux kernel and initramfs into memory.
  2. Kernel Stage: The kernel initializes and mounts the initramfs as the root filesystem.
  3. Init Stage: The init script or program within initramfs runs, performing tasks such as loading additional drivers, mounting filesystems, and locating the real root filesystem.
  4. Switch Root: The initramfs mounts the real root filesystem and switches control to it, allowing the system to boot normally.

Customizing initramfs:

You can customize the initramfs by including specific drivers, tools, and scripts. This is useful for scenarios where the default initramfs does not include the necessary components for your system.

Tools for Managing initramfs:

  • mkinitramfs / update-initramfs: create and update initramfs images on Debian/Ubuntu systems.
  • dracut: the equivalent tool on RHEL/CentOS/Fedora systems (used in the rebuild commands below).

Difference Between initramfs and initrd

  • Format: initramfs is a compressed cpio archive; initrd is a disk image (block device).
  • Mounting: initramfs is extracted directly into RAM as the root filesystem; initrd is mounted as a loop device.
  • Flexibility: initramfs is more flexible and faster; initrd is the older, less flexible technology.

Location of initramfs

On most Linux distributions, the initramfs file is located in the /boot directory:

ls /boot/initramfs-*.img
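
You can also peek inside an image to confirm which modules and tools it carries (dracut-based distros such as RHEL ship lsinitrd; Debian/Ubuntu provide lsinitramfs instead):

lsinitrd /boot/initramfs-$(uname -r).img | head -n 20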

How to Rebuild initramfs

If you’ve made changes to the kernel, /etc/fstab, or storage configuration (e.g., LUKS, LVM), you may need to rebuild the initramfs.

Rebuild initramfs on RHEL/CentOS:

sudo dracut -f

Rebuild initramfs on Ubuntu/Debian:

sudo update-initramfs -u

Common Issues Related to initramfs

  • Dropped into an initramfs shell: the kernel can't find the root filesystem. Check /etc/fstab, rebuild the initramfs, or fix missing drivers.
  • Boot failure after a kernel update: the initramfs is missing or corrupt. Rebuild the initramfs.
  • Filesystem not mounting: drivers are incorrect or missing in the initramfs. Ensure the necessary drivers are included and rebuild.

By understanding how initramfs works, you can better appreciate its role in the Linux boot process and customize it to suit your needs.

Upgrade RHEL 7 to RHEL 8 – ansible playbook

---
- name: Upgrade RHEL 7 to RHEL 8
  hosts: all
  become: true
  vars:
    leapp_repo: "rhel-7-server-extras-rpms"

  tasks:
    - name: Ensure system is up to date
      yum:
        name: '*'
        state: latest

    - name: Install the Leapp utility
      yum:
        name: leapp
        state: present
        enablerepo: "{{ leapp_repo }}"

    - name: Run the pre-upgrade check with Leapp
      shell: leapp preupgrade
      register: leapp_preupgrade
      ignore_errors: true

    - name: Check for pre-upgrade issues
      debug:
        msg: "Leapp pre-upgrade check output: {{ leapp_preupgrade.stdout }}"

    - name: Fix any issues identified by Leapp (manual step)
      pause:
        prompt: "Please review the pre-upgrade report at /var/log/leapp/leapp-report.txt and fix any blocking issues. Press Enter to continue."

    - name: Proceed with the upgrade
      shell: leapp upgrade
      ignore_errors: true

    - name: Reboot the server to complete the upgrade
      reboot:
        reboot_timeout: 1200

    - name: Verify the OS version after reboot
      shell: cat /etc/redhat-release
      register: os_version

    - name: Display the new OS version
      debug:
        msg: "The server has been successfully upgraded to: {{ os_version.stdout }}"
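
To run the playbook (upgrade_rhel7_to_8.yml is an assumed filename; --limit lets you trial the upgrade on a single host before touching the whole fleet):

ansible-playbook -i /path/to/inventory upgrade_rhel7_to_8.yml --limit test_host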

How to use LUKS data disk encryption in MapR


MapR (now part of HPE Ezmeral) supports encryption at various levels, but using LUKS (Linux Unified Key Setup) encryption for data disks is a system-level operation that must be done outside of MapR’s native encryption features. Here’s a step-by-step guide to set up LUKS disk encryption on a MapR node and ensure MapR can access the encrypted disk after unlocking it.


Steps to Set Up LUKS Disk Encryption for MapR Data Disks

1. Identify the Disk to Encrypt

Find the disk you want to encrypt using the lsblk or fdisk command:

Bash

lsblk

fdisk -l

For example, if the disk to be encrypted is /dev/sdb, use that in the following steps.


2. Install the Necessary Packages

Ensure you have the required tools to set up LUKS encryption:

sudo apt-get install cryptsetup  # For Ubuntu/Debian

sudo yum install cryptsetup      # For CentOS/RHEL


3. Set Up LUKS Encryption on the Disk

Run the following command to initialize the disk with LUKS encryption:

sudo cryptsetup luksFormat /dev/sdb

You’ll be prompted to confirm the operation and set a passphrase.

⚠️ Warning: This will erase all existing data on the disk.


4. Open and Map the Encrypted Disk

Unlock the encrypted disk and map it to a device:

sudo cryptsetup open /dev/sdb mapr_data_disk

You can verify that the encrypted device is available:

lsblk


5. Format the Encrypted Disk

Format the newly mapped device with a filesystem that MapR supports (typically ext4 or xfs):

sudo mkfs.ext4 /dev/mapper/mapr_data_disk


6. Mount the Encrypted Disk

Create a mount point and mount the encrypted disk:

sudo mkdir -p /opt/mapr/data

sudo mount /dev/mapper/mapr_data_disk /opt/mapr/data


7. Make the Mount Persistent

Edit the /etc/crypttab file to automatically unlock the disk at boot:

echo "mapr_data_disk /dev/sdb none luks" | sudo tee -a /etc/crypttab

Then, add an entry to /etc/fstab to mount the disk automatically after it is unlocked:

echo "/dev/mapper/mapr_data_disk /opt/mapr/data ext4 defaults 0 0" | sudo tee -a /etc/fstab


8. Ensure MapR Can Access the Disk

Make sure the MapR user has the necessary permissions to access the encrypted disk:

sudo chown -R mapr:mapr /opt/mapr/data


9. Test the Setup

Reboot the system to ensure the encrypted disk is unlocked and mounted correctly:

sudo reboot

After the system reboots, verify that the disk is unlocked and mounted:

lsblk

df -h


10. Verify MapR Storage Pools

After the encrypted disk is mounted, add it to the MapR storage pool:

maprcli disk add -host <server_name> -disks /dev/mapper/mapr_data_disk


Additional Considerations

  • Passphrase Management: Consider integrating with a key management system (KMS) to avoid manual passphrase entry; a key-file sketch follows this list.
  • Performance Impact: Encryption may introduce some performance overhead, so test accordingly.
  • Backup Configuration Files: Ensure you back up /etc/crypttab and /etc/fstab for disaster recovery.
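
Expanding on the passphrase-management point, here is a key-file sketch that avoids interactive prompts at boot (the key file path is illustrative; protect it carefully, since anyone who can read it can unlock the disk):

sudo dd if=/dev/urandom of=/root/mapr_disk.key bs=512 count=4
sudo chmod 0400 /root/mapr_disk.key
sudo cryptsetup luksAddKey /dev/sdb /root/mapr_disk.key

Then point the /etc/crypttab entry from step 7 at the key file instead of "none":

mapr_data_disk /dev/sdb /root/mapr_disk.key luks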

Kong – add custom plugins (ansible playbook)

---
- name: Deploy and enable a custom plugin in Kong
  hosts: kong_servers
  become: yes
  vars:
    plugin_name: "my_custom_plugin"
    plugin_source_path: "/path/to/local/plugin" # Local path to the plugin code
    kong_plugin_dir: "/usr/local/share/lua/5.1/kong/plugins" # Default Kong plugin directory
  tasks:

    - name: Ensure Kong plugin directory exists
      file:
        path: "{{ kong_plugin_dir }}/{{ plugin_name }}"
        state: directory
        mode: '0755'

    - name: Copy plugin files to Kong plugin directory
      copy:
        src: "{{ plugin_source_path }}/"
        dest: "{{ kong_plugin_dir }}/{{ plugin_name }}/"
        mode: '0644'

    - name: Verify plugin files were copied
      shell: ls -la "{{ kong_plugin_dir }}/{{ plugin_name }}"
      register: verify_plugin_copy
    - debug:
        var: verify_plugin_copy.stdout

    - name: Update Kong configuration to include the custom plugin
      lineinfile:
        path: "/etc/kong/kong.conf"
        regexp: "^plugins ="
        line: "plugins = bundled,{{ plugin_name }}"
        state: present
      notify: restart kong

    - name: Validate the Kong configuration
      shell: kong check /etc/kong/kong.conf
      register: config_check
    - debug:
        var: config_check.stdout

  handlers:
    - name: restart kong
      service:
        name: kong
        state: restarted
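
Once the playbook has run and Kong has restarted cleanly, the plugin still needs to be attached to a service or route through the Admin API (my-service is a hypothetical service name; the Admin API listens on port 8001 by default):

curl -X POST http://localhost:8001/services/my-service/plugins \
    --data "name=my_custom_plugin"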