---
- name: Deploy and enable a custom plugin in Kong
  hosts: kong_servers
  become: yes
  vars:
    plugin_name: "my_custom_plugin"
    plugin_source_path: "/path/to/local/plugin"               # Local path to the plugin code
    kong_plugin_dir: "/usr/local/share/lua/5.1/kong/plugins"  # Default Kong plugin directory
  tasks:
    - name: Ensure Kong plugin directory exists
      file:
        path: "{{ kong_plugin_dir }}/{{ plugin_name }}"
        state: directory
        mode: '0755'

    - name: Copy plugin files to Kong plugin directory
      copy:
        src: "{{ plugin_source_path }}/"
        dest: "{{ kong_plugin_dir }}/{{ plugin_name }}/"
        mode: '0644'

    - name: Verify plugin files were copied
      command: ls -la "{{ kong_plugin_dir }}/{{ plugin_name }}"
      register: verify_plugin_copy
      changed_when: false

    - debug:
        var: verify_plugin_copy.stdout

    - name: Update Kong configuration to include the custom plugin
      lineinfile:
        path: /etc/kong/kong.conf
        regexp: "^plugins ="
        line: "plugins = bundled,{{ plugin_name }}"
        state: present
      notify: restart kong

    - name: Verify the plugin is enabled
      # 'kong check' validates a kong.conf file ('kong config parse' expects declarative config)
      command: kong check /etc/kong/kong.conf
      register: config_check
      changed_when: false

    - debug:
        var: config_check.stdout

  handlers:
    - name: restart kong
      service:
        name: kong
        state: restarted
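The lineinfile step above has a direct shell analogue; this is a minimal sketch using sed against a throwaway kong.conf copy (the paths and plugin name are placeholders, not a live Kong install):

```shell
# Rewrite the "plugins =" line the way the lineinfile task does.
# Works on a temp copy so no real /etc/kong/kong.conf is touched.
PLUGIN_NAME="my_custom_plugin"
CONF="$(mktemp)"
printf 'database = postgres\nplugins = bundled\n' > "$CONF"

# Replace any existing "plugins =" line, mirroring lineinfile's regexp/line pair.
sed -i "s/^plugins =.*/plugins = bundled,${PLUGIN_NAME}/" "$CONF"

grep '^plugins' "$CONF"
rm -f "$CONF"
```

The grep prints `plugins = bundled,my_custom_plugin`, confirming the regexp/line pair behaves as intended before pointing it at the real file.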
Month: December 2024
LUKS – disk encryption
#!/bin/bash
# Variables
DISKS=("/dev/sdb" "/dev/sdc")            # List of disks to encrypt
KEYFILE="/etc/luks/keyfile"              # Keyfile path
MOUNT_POINTS=("/mnt/disk1" "/mnt/disk2") # Corresponding mount points

# Check for root privileges
if [ "$(id -u)" -ne 0 ]; then
    echo "This script must be run as root. Exiting."
    exit 1
fi

# Create the keyfile if it doesn't exist
if [ ! -f "$KEYFILE" ]; then
    echo "Creating LUKS keyfile..."
    mkdir -p "$(dirname "$KEYFILE")"
    dd if=/dev/urandom of="$KEYFILE" bs=4096 count=1
    chmod 600 "$KEYFILE"
fi

# Function to encrypt and set up a disk
encrypt_disk() {
    local DISK=$1
    local MAPPER_NAME=$2
    local MOUNT_POINT=$3

    echo "Processing $DISK..."

    # Skip disks that are already encrypted
    if cryptsetup isLuks "$DISK"; then
        echo "$DISK is already encrypted. Skipping."
        return
    fi

    # Format the disk with LUKS encryption
    echo "Encrypting $DISK..."
    if ! cryptsetup luksFormat "$DISK" "$KEYFILE"; then
        echo "Failed to encrypt $DISK. Exiting."
        exit 1
    fi

    # Open the encrypted disk
    echo "Opening $DISK..."
    cryptsetup luksOpen "$DISK" "$MAPPER_NAME" --key-file "$KEYFILE"

    # Create a filesystem on the encrypted disk
    echo "Creating filesystem on /dev/mapper/$MAPPER_NAME..."
    mkfs.ext4 "/dev/mapper/$MAPPER_NAME"

    # Create the mount point if it doesn't exist
    mkdir -p "$MOUNT_POINT"

    # Record the mapping in /etc/crypttab so the device is unlocked at boot;
    # without this, the fstab entry below cannot mount after a reboot
    echo "$MAPPER_NAME $DISK $KEYFILE luks" >> /etc/crypttab

    # Add entry to /etc/fstab for automatic mounting
    echo "Adding $DISK to /etc/fstab..."
    UUID=$(blkid -s UUID -o value "/dev/mapper/$MAPPER_NAME")
    echo "UUID=$UUID $MOUNT_POINT ext4 defaults 0 2" >> /etc/fstab

    # Mount the disk
    echo "Mounting $MOUNT_POINT..."
    mount "$MOUNT_POINT"
}

# Loop through disks and encrypt each one
for i in "${!DISKS[@]}"; do
    DISK="${DISKS[$i]}"
    MAPPER_NAME="luks_disk_$i"
    MOUNT_POINT="${MOUNT_POINTS[$i]}"
    encrypt_disk "$DISK" "$MAPPER_NAME" "$MOUNT_POINT"
done

echo "All disks have been encrypted and mounted."
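The persistence entries for one disk can be previewed in isolation before anything is appended for real; the UUID below is a placeholder standing in for blkid output, and the crypttab line is the boot-time unlock entry that pairs with the fstab mount:

```shell
# Build the /etc/crypttab and /etc/fstab lines for one disk without
# touching any real device. All values here are illustrative.
DISK="/dev/sdb"
KEYFILE="/etc/luks/keyfile"
MAPPER_NAME="luks_disk_0"
MOUNT_POINT="/mnt/disk1"
UUID="d0bb5504-abf9-4e00-8670-59d8fa92b883"  # placeholder; the script uses blkid

CRYPTTAB_LINE="$MAPPER_NAME $DISK $KEYFILE luks"
FSTAB_LINE="UUID=$UUID $MOUNT_POINT ext4 defaults 0 2"

echo "$CRYPTTAB_LINE"
echo "$FSTAB_LINE"
```

Printing the lines first makes it easy to eyeball the fields (mapper name, source device, keyfile, mount options) before a run appends them to the live files.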
EKS – subnet size
To determine the appropriate subnet class for an Amazon EKS (Elastic Kubernetes Service) cluster with 5 nodes, it’s important to account for both the nodes and the additional IP addresses needed for pods and other resources. Here’s a recommended approach:
Calculation and Considerations:
- EKS Node IP Addresses:
- Each node will need its own IP address.
- For 5 nodes, that’s 5 IP addresses.
- Pod IP Addresses:
- By default, the Amazon VPC CNI plugin assigns one IP address per pod from the node’s subnet.
- The number of pods per node depends on your instance type and the configuration of your Kubernetes cluster.
- For example, if you expect each node to host around 20 pods, you’ll need approximately 100 IP addresses for pods.
- Additional Resources:
- Include IP addresses for other resources like load balancers, services, etc.
Subnet Size Recommendation:
A /24 subnet provides 256 addresses, of which AWS reserves 5 in every subnet (network, VPC router, DNS, one reserved for future use, and broadcast), leaving 251 usable. That is typically sufficient for a small EKS cluster with 5 nodes.
Example Calculation:
- Nodes: 5 IP addresses
- Pods: 100 IP addresses (assuming 20 pods per node)
- Additional Resources: 10 IP addresses (for services, load balancers, etc.)
Total IP Addresses Needed: 5 (nodes) + 100 (pods) + 10 (resources) = 115 IP addresses.
Recommended Subnet Size:
A /24 subnet should be sufficient for this setup:
- CIDR Notation: 192.168.0.0/24
- Total IP Addresses: 256
- Usable IP Addresses: 251 (AWS reserves 5 addresses in every subnet)
Example Configuration:
- Subnet 1: 192.168.0.0/24
Reasons to Choose a Bigger Subnet (e.g., /22 or /20):
- Future Scalability: If you anticipate significant growth in the number of nodes or pods, a larger subnet will provide ample IP addresses for future expansion without the need to reconfigure your network.
- Flexibility: More IP addresses give you flexibility to add additional resources such as load balancers, services, or new applications.
- Avoiding Exhaustion: Ensuring you have a large pool of IP addresses can prevent issues related to IP address exhaustion, which can disrupt your cluster’s operations.
Example Subnet Sizes:
- /22 Subnet:
- Total IP Addresses: 1,024
- Usable IP Addresses: 1,019 (after AWS's 5 reserved addresses)
- /20 Subnet:
- Total IP Addresses: 4,096
- Usable IP Addresses: 4,091 (after AWS's 5 reserved addresses)
When to Consider Smaller Subnets (e.g., /24):
- Small Deployments: If your EKS cluster is small and you do not expect significant growth, a /24 subnet might be sufficient.
- Conserving Address Space: Subnet size carries no direct AWS cost, but smaller subnets leave more of the VPC CIDR free for other workloads.
For an EKS cluster with 5 nodes, I would recommend going with a /22 subnet. This gives you a healthy margin of IP addresses for your nodes, pods, and additional resources while providing room for future growth.
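The arithmetic behind these recommendations is easy to script. Note that the 5-address reservation per subnet is AWS-specific (network, router, DNS, one reserved for future use, and broadcast):

```shell
# Compare the IPs this cluster needs against usable IPs per candidate prefix.
needed=$(( 5 + 5 * 20 + 10 ))   # nodes + pods (20 per node) + extras = 115
for prefix in 24 22 20; do
  total=$(( 2 ** (32 - prefix) ))
  usable=$(( total - 5 ))        # AWS reserves 5 addresses in every subnet
  echo "/$prefix: total=$total usable=$usable"
done
echo "needed=$needed"
```

A /24 covers the 115 needed addresses with roughly a 2x margin; /22 and /20 buy progressively more headroom for node or pod growth.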
Kong – add admin user
To add an admin to Kong Manager via the Admin API (a Kong Enterprise feature), use the /admins endpoint. Note that RBAC cannot be toggled through the API: enforcement is controlled by the enforce_rbac setting in kong.conf, and while it is on, each request below must also carry a valid RBAC token in the Kong-Admin-Token header.
# Create a new admin user
curl -X POST http://localhost:8001/admins \
  --header "Content-Type: application/json" \
  --header "Kong-Admin-Token: <your-rbac-token>" \
  --data '{
    "username": "newadmin",
    "email": "newadmin@example.com"
  }'
# Assign a role to the new admin
curl -X POST http://localhost:8001/admins/newadmin/roles \
  --header "Content-Type: application/json" \
  --header "Kong-Admin-Token: <your-rbac-token>" \
  --data '{
    "roles": "super-admin"
  }'
The new admin then sets a password through the Kong Manager registration/invite flow rather than in the creation request.
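For reference, the kong.conf side of this setup might look like the fragment below (Kong Enterprise setting names; verify them against your version's documentation before relying on them):

```ini
# Require RBAC tokens on the Admin API and protect Kong Manager logins
enforce_rbac = on
admin_gui_auth = basic-auth
```

Changing these values requires a Kong restart or reload to take effect.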
Encrypt multiple disks with LUKS
---
- name: Encrypt multiple disks with LUKS
  hosts: all
  become: yes
  vars:
    luks_disks:          # List of disks to encrypt
      - /dev/sdb
      - /dev/sdc
    luks_password: secret_password   # Replace with an Ansible Vault-encrypted variable
    mount_points:        # List of mount points corresponding to the disks
      - /mnt/disk1
      - /mnt/disk2
  tasks:
    - name: Ensure required packages are installed
      ansible.builtin.yum:
        name:
          - cryptsetup
        state: present

    - name: Create LUKS encryption on disks
      # A pipe needs the shell module; ansible.builtin.command does not run a shell.
      # "-" tells cryptsetup to read the passphrase from stdin.
      ansible.builtin.shell:
        cmd: "printf '%s' '{{ luks_password }}' | cryptsetup luksFormat -q {{ item }} -"
      loop: "{{ luks_disks }}"
      no_log: true

    - name: Open LUKS-encrypted disks
      ansible.builtin.shell:
        cmd: "printf '%s' '{{ luks_password }}' | cryptsetup luksOpen {{ item }} luks_{{ item | regex_replace('/dev/', '') }} --key-file=-"
      loop: "{{ luks_disks }}"
      no_log: true

    - name: Format the LUKS-encrypted devices with ext4 filesystem
      ansible.builtin.command:
        cmd: "mkfs.ext4 /dev/mapper/luks_{{ item | regex_replace('/dev/', '') }}"
      loop: "{{ luks_disks }}"

    - name: Create mount points
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
      loop: "{{ mount_points }}"

    - name: Mount the LUKS devices to mount points
      ansible.builtin.mount:
        path: "{{ item.1 }}"
        src: "/dev/mapper/luks_{{ item.0 | regex_replace('/dev/', '') }}"
        fstype: ext4
        state: mounted
      loop: "{{ luks_disks | zip(mount_points) | list }}"

    - name: Add entries to /etc/crypttab
      ansible.builtin.lineinfile:
        path: /etc/crypttab
        line: "luks_{{ item | regex_replace('/dev/', '') }} {{ item }} none luks"
        create: yes
      loop: "{{ luks_disks }}"

    - name: Add entries to /etc/fstab
      ansible.builtin.lineinfile:
        path: /etc/fstab
        line: "/dev/mapper/luks_{{ item.0 | regex_replace('/dev/', '') }} {{ item.1 }} ext4 defaults 0 0"
        create: yes
      loop: "{{ luks_disks | zip(mount_points) | list }}"
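The regex_replace('/dev/', '') filter used for the mapper names has a direct shell analogue, handy for checking which /dev/mapper names the playbook will create:

```shell
# Derive the mapper names the playbook generates for each disk.
for disk in /dev/sdb /dev/sdc; do
  echo "luks_${disk#/dev/}"   # ${var#prefix} strips the leading /dev/
done
```

For the two example disks this prints luks_sdb and luks_sdc, matching the src paths in the mount and fstab tasks.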
## output
Processing /dev/sdc...
Encrypting /dev/sdc...
WARNING!
========
This will overwrite data on /dev/sdc irrevocably.
Are you sure? (Type 'yes' in capital letters): YES
Opening /dev/sdc...
Device luks_disk_0 already exists.
Creating filesystem on /dev/mapper/luks_disk_0...
mke2fs 1.46.5 (30-Dec-2021)
/dev/mapper/luks_disk_0 is mounted; will not make a filesystem here!
Adding /dev/sdc to /etc/fstab...
Mounting /mnt/disk2...
mount: (hint) your fstab has been modified, but systemd still uses
the old version; use 'systemctl daemon-reload' to reload.
Processing /dev/sdd...
Encrypting /dev/sdd...
WARNING!
========
This will overwrite data on /dev/sdd irrevocably.
Are you sure? (Type 'yes' in capital letters): YES
Opening /dev/sdd...
Creating filesystem on /dev/mapper/luks_disk_1...
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 2617344 4k blocks and 655360 inodes
Filesystem UUID: d0bb5504-abf9-4e00-8670-59d8fa92b883
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
Adding /dev/sdd to /etc/fstab...
Mounting /mnt/disk3...
mount: (hint) your fstab has been modified, but systemd still uses
the old version; use 'systemctl daemon-reload' to reload.
All disks have been encrypted and mounted.
Ansible playbook to automate the installation of HPE Ezmeral Data Fabric
Below is an Ansible playbook to automate the installation of HPE Ezmeral Data Fabric (formerly MapR) on a Linux cluster. This playbook assumes you have at least a basic understanding of Ansible and have set up your inventory file for your Linux cluster.
Prerequisites
- Install Ansible on your control node.
- Ensure passwordless SSH access from the control node to all cluster nodes.
- Prepare an Ansible inventory file listing all the nodes in the cluster.
- Download the HPE Ezmeral installation packages and place them in a shared or accessible location.
Inventory File (inventory.yml)
Define your 12-node cluster in the Ansible inventory file:
all:
  hosts:
    node1:
      ansible_host: 192.168.1.101
    node2:
      ansible_host: 192.168.1.102
    node3:
      ansible_host: 192.168.1.103
    # Add all 12 nodes
  vars:
    ansible_user: your-ssh-user
    ansible_ssh_private_key_file: /path/to/your/private/key
    java_package: java-11-openjdk-devel
    ezmeral_packages:
      - mapr-core
      - mapr-fileserver
      - mapr-cldb
      - mapr-webserver
      - mapr-zookeeper
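Typing twelve near-identical host entries is error-prone; a short loop can generate them instead, assuming the addresses continue the 192.168.1.101 pattern shown above:

```shell
# Print inventory host entries for node1..node12 following the .101+ pattern.
for i in $(seq 1 12); do
  printf '    node%d:\n      ansible_host: 192.168.1.%d\n' "$i" $(( 100 + i ))
done
```

Redirect the output into the hosts: section of inventory.yml and adjust any nodes that break the pattern.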
Ansible Playbook (install_hpe_ezmeral.yml)
The playbook automates Java installation, package deployment, service configuration, and starting services.
---
- name: Install HPE Ezmeral Data Fabric on a 12-node cluster
  hosts: all
  become: yes
  vars:
    ezmeral_repo_url: "http://repo.mapr.com/releases/v7.0/redhat"
    ezmeral_repo_file: "/etc/yum.repos.d/mapr.repo"
  tasks:
    - name: Install dependencies
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - epel-release
        - "{{ java_package }}"
        - wget
        - lsof

    - name: Configure HPE Ezmeral repository
      copy:
        dest: "{{ ezmeral_repo_file }}"
        content: |
          [mapr_repo]
          name=MapR Repository
          baseurl={{ ezmeral_repo_url }}
          gpgcheck=0
          enabled=1

    - name: Install HPE Ezmeral packages
      yum:
        name: "{{ item }}"
        state: present
      loop: "{{ ezmeral_packages }}"

    - name: Configure ZooKeeper on specific nodes
      when: inventory_hostname in ['node1', 'node2', 'node3']
      block:
        - name: Create ZooKeeper data directory
          file:
            path: /var/mapr/zookeeper
            state: directory
            owner: mapr
            group: mapr
            mode: '0755'

        - name: Set up ZooKeeper myid file
          copy:
            dest: /var/mapr/zookeeper/myid
            # Each ZooKeeper node needs its own id (1-3), derived from its
            # position in the list; a plain loop would overwrite myid on every node
            content: "{{ ['node1', 'node2', 'node3'].index(inventory_hostname) + 1 }}"
            owner: mapr
            group: mapr
            mode: '0644'

    - name: Configure cluster-wide settings
      shell: |
        /opt/mapr/server/configure.sh -C node1,node2,node3 -Z node1,node2,node3

    - name: Start required services
      service:
        name: "{{ item }}"
        state: started
        enabled: yes
      loop:
        - mapr-zookeeper
        - mapr-cldb
        - mapr-fileserver
        - mapr-webserver
        - mapr-nodemanager

    - name: Verify installation
      shell: maprcli node list -columns svc
      register: verification_output
      changed_when: false

    - name: Output verification results
      debug:
        var: verification_output.stdout
Execution Steps
- Prepare the environment:
- Place the inventory.yml and install_hpe_ezmeral.yml files in the same directory.
- Run the playbook:
ansible-playbook -i inventory.yml install_hpe_ezmeral.yml
- Verify the installation:
- Access the HPE Ezmeral Data Fabric WebUI:
https://<CLDB-node-IP>:8443
- Run test commands to ensure the cluster is operational:
hadoop fs -mkdir /test
hadoop fs -ls /
This playbook sets up the cluster for basic functionality. You can expand it to include advanced configurations such as Kerberos integration, SSL/TLS setup, or custom disk partitioning.
Steps to install HPE Ezmeral 7.x on Linux cluster
Installing HPE Ezmeral Data Fabric (formerly MapR) version 7.x on a 12-node Linux cluster requires planning and configuration. Here are the detailed steps to install and configure the cluster:
Step 1: Prerequisites
- System Requirements:
- 64-bit Linux (RHEL/CentOS 7 or 8, or equivalent).
- Minimum hardware for each node:
- Memory: At least 16GB RAM.
- CPU: Quad-core or higher.
- Disk: Minimum of 500GB of storage.
- Network Configuration:
- Assign static IP addresses or hostnames to all 12 nodes.
- Configure DNS or update /etc/hosts with the IP and hostname mappings.
- Ensure nodes can communicate with each other via SSH.
- Users and Permissions:
- Create a dedicated user for HPE Ezmeral (e.g., mapr).
- Grant the user passwordless SSH access across all nodes.
- Firewall and SELinux:
- Disable or configure the firewall to allow required ports.
- Set SELinux to permissive mode:
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
- Java Installation:
- Install Java (OpenJDK 11 recommended):
sudo yum install java-11-openjdk -y
Step 2: Download HPE Ezmeral Data Fabric Software
- Obtain Software:
- Download the HPE Ezmeral 7.x installation packages from the official HPE Ezmeral website.
- Distribute Packages:
- Copy the packages to all 12 nodes using scp or a similar tool.
Step 3: Install Core Services
- Install the Core Packages:
- On each node, install the required packages:
sudo yum install mapr-core mapr-fileserver mapr-cldb mapr-webserver -y
- Install Additional Services:
- Based on your use case, install additional packages (e.g., mapr-zookeeper, mapr-nodemanager, etc.).
Step 4: Configure ZooKeeper
- Select ZooKeeper Nodes:
- Choose three nodes to run the ZooKeeper service (e.g., node1, node2, node3).
- Edit the ZooKeeper Configuration:
- Update the ZooKeeper configuration file (/opt/mapr/zookeeper/zookeeper-<version>/conf/zoo.cfg) on the ZooKeeper nodes:
tickTime=2000
dataDir=/var/mapr/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
- Initialize ZooKeeper:
- On each ZooKeeper node, create a myid file:
echo "1" > /var/mapr/zookeeper/myid # Replace with 2 or 3 for other nodes
- Start ZooKeeper:
sudo systemctl start mapr-zookeeper
Step 5: Configure the Cluster
- Initialize the Cluster:
- Run the cluster initialization command from one node:
/opt/mapr/server/configure.sh -C node1,node2,node3 -Z node1,node2,node3
- Replace node1,node2,node3 with the actual hostnames of the CLDB and ZooKeeper nodes.
- Verify Installation:
- Check the cluster status:
maprcli cluster info
- Add Nodes to the Cluster:
- On each additional node, configure it to join the cluster:
/opt/mapr/server/configure.sh -N <cluster_name> -C node1,node2,node3 -Z node1,node2,node3
Step 6: Start Core Services
- Start CLDB:
- Start the CLDB service on the designated nodes:
sudo systemctl start mapr-cldb
- Start FileServer and WebServer:
- Start the file server and web server services on all nodes:
sudo systemctl start mapr-fileserver
sudo systemctl start mapr-webserver
- Start Node Manager:
- If using YARN, start the Node Manager service on all nodes:
sudo systemctl start mapr-nodemanager
Step 7: Post-Installation Steps
- Access the Web Interface:
- Open a browser and go to the web interface of your cluster:
https://<CLDB-node-IP>:8443
- Log in using the mapr user credentials.
- Add Storage:
- Add storage disks to the cluster using the web interface or CLI:
maprcli disk list
maprcli disk add -all
- Secure the Cluster:
- Enable Kerberos or LDAP integration for authentication.
- Configure SSL/TLS for secure communication.
Step 8: Verify the Cluster
- Check Services:
- Verify that all services are running:
maprcli node list -columns svc
- Run a Test:
- Create a directory in the file system and test file creation:
hadoop fs -mkdir /test
hadoop fs -put /path/to/local/file /test
How to generate a ticket in MapR – HPE Ezmeral
To generate a MapR user ticket, you can use the maprlogin command. Here’s a step-by-step guide:
Steps to Generate a MapR User Ticket
- Open Terminal: Open your terminal window.
- Run the Command: Use the maprlogin password command to generate a user ticket. This command will prompt you for the user's password.
maprlogin password
For example:
maprlogin password
Password for user 'yourusername' at cluster 'your.cluster.com':
- Generate the Ticket: The command will generate a ticket file and store it in the /tmp directory by default. The ticket file will be named maprticket_<UID>.
Example
Let’s say you want to generate a ticket for the user juser on the cluster my.cluster.com:
maprlogin password
Password for user 'juser' at cluster 'my.cluster.com':
MapR credentials of user 'juser' for cluster 'my.cluster.com' are written to '/tmp/maprticket_1000'
Verify the Ticket
To verify the ticket, you can use the maprlogin print command:
maprlogin print
This command will display the ticket details, including the user, creation time, expiration time, and renewal information.
Renewing ticket
To renew a MapR user ticket, you can use the maprlogin command with the -renewal option. Here’s how you can do it:
Steps to Renew a MapR User Ticket
- Open Terminal: Open your terminal window.
- Generate a New Ticket: Use the maprlogin command with the -renewal option to renew the ticket. You'll need to specify the duration for the renewed ticket.
maprlogin password -renewal <duration>
Replace <duration> with the desired duration for the renewed ticket (e.g., 30:0:0 for 30 days).
Example
Let’s say you want to renew the ticket for 30 days:
maprlogin password -renewal 30:0:0
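Assuming the <duration> format is days:hours:minutes, as the 30:0:0 example suggests, a quick conversion makes the renewal window concrete:

```shell
# Convert a days:hours:minutes duration string into seconds.
duration="30:0:0"
IFS=: read -r d h m <<< "$duration"
echo $(( d * 86400 + h * 3600 + m * 60 ))   # 2592000 seconds = 30 days
```

The same arithmetic applies to lifetimes such as 24:0:0 (24 hours) used elsewhere in ticket configuration.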
Verify the Renewed Ticket
To verify that the ticket has been renewed, you can use the maprlogin print command:
maprlogin print
This command will display the ticket details, including the new expiration date.
Managing Tickets
Managing MapR tickets involves creating, renewing, and revoking user tickets that are required for authentication and authorization in a MapR cluster. Here are the key aspects of ticket management:
1. Generating a Ticket
- Create a User Ticket: Use the maprlogin command to generate a ticket:
maprlogin password
This will prompt you to enter the user’s password and generate a ticket file.
2. Viewing Ticket Information
- Check Ticket Details: Use the maprlogin print command to display the current ticket details:
maprlogin print
This shows the user, creation time, expiration time, and other details of the ticket.
3. Renewing a Ticket
- Renew the Ticket: If your ticket is about to expire, you can renew it using:
maprlogin password -renewal <duration>
Replace <duration> with the desired duration for the renewed ticket (e.g., 30:0:0 for 30 days).
4. Revoking a Ticket
- Revoke a Ticket: To revoke a ticket, you can use the maprcli command:
maprcli session delete -type service -user <username>
Replace <username> with the name of the user whose ticket you want to revoke.
5. Managing Ticket Expiration
- Set Ticket Lifetime: You can set the lifetime of a ticket using the maprcli config save command:
maprcli config save -values '{"ticket.lifetime":"24:0:0"}'
This sets the default ticket lifetime to 24 hours.
6. Checking Ticket Validity
- Validate Ticket: To check whether a ticket can still authenticate, use the maprlogin authtest command (or inspect the expiration with maprlogin print):
maprlogin authtest
This command provides information on the validity and expiration of the ticket.
Best Practices
- Regular Renewal: Ensure tickets are renewed regularly to avoid authentication issues.
- Monitor Expiration: Keep track of ticket expiration times and set reminders if necessary.
- Secure Storage: Store ticket files securely and restrict access to authorized users only.
- Use Service Tickets: For applications and services, use service tickets that have appropriate lifetimes and permissions.
By following these steps and best practices, you can effectively manage MapR user tickets and ensure smooth operation of your MapR cluster.
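The secure-storage recommendation above can be checked mechanically. This sketch uses a temporary file in place of a real /tmp/maprticket_<UID> ticket and relies on GNU stat:

```shell
# Verify a ticket file is readable only by its owner (mode 600).
TICKET="$(mktemp)"             # stands in for /tmp/maprticket_1000
chmod 600 "$TICKET"
perms="$(stat -c '%a' "$TICKET")"
if [ "$perms" = "600" ]; then
  echo "ticket permissions OK"
else
  echo "ticket is too permissive: $perms"
fi
rm -f "$TICKET"
```

A cron job running a check like this across user ticket files is a simple way to catch over-permissive tickets early.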