Steps to install an HPE Ezmeral Data Fabric (formerly MapR) 7.x cluster on Linux

Contents

1. Pre-Installation Requirements

2. Download and Configure HPE Ezmeral Repositories

3. Install Core Data Fabric Packages

4. Configure ZooKeeper and CLDB

5. Cluster Initialization

6. Verify Cluster Status

7. Additional Configuration (Optional)

8. Test the Cluster

9. Set Up Monitoring and Logging

Setting up an HPE Ezmeral Data Fabric (formerly MapR) 7.x cluster on Linux involves several steps, including environment preparation, software installation, and cluster configuration. Here’s a detailed guide to install and configure a basic Ezmeral Data Fabric 7.x cluster on Linux:


1. Pre-Installation Requirements

  • Operating System: Ensure your Linux distribution is compatible. HPE Ezmeral 7.x supports various versions of RHEL, CentOS, and Ubuntu. Check the official compatibility matrix for version specifics.
  • Hardware Requirements: Verify that your hardware meets the minimum requirements:
    • CPU: At least 4 cores per node (adjust based on workload).
    • Memory: Minimum of 8 GB RAM (16 GB recommended).
    • Storage: SSD or high-performance disks for data storage; adequate storage space for data and logs.
  • Network: Ensure all cluster nodes can communicate over the network. Set up DNS or /etc/hosts entries so nodes can resolve each other by hostname.
  • Permissions: You will need root or sudo privileges on each node.
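The requirements above can be spot-checked with a short script before installing anything; a minimal sketch, assuming standard Linux tools (nproc, awk, getent) and with the hostname list edited for your cluster:

```shell
#!/bin/sh
# Pre-flight check for the requirements listed above.
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "CPU cores: $cores (minimum 4 recommended)"
echo "Memory:    ${mem_gb} GB (minimum 8, 16 recommended)"

# Verify that each cluster node resolves by hostname
# (replace "localhost" with your node names).
for host in localhost; do
  if getent hosts "$host" >/dev/null; then
    echo "$host: resolves"
  else
    echo "$host: DOES NOT resolve - fix DNS or /etc/hosts"
  fi
done
```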

2. Download and Configure HPE Ezmeral Repositories

  • Add Repository and GPG Key: Set up the HPE Ezmeral Data Fabric repository on each node by adding the appropriate repository file and importing the GPG key.
    • For RHEL/CentOS:

sudo tee /etc/yum.repos.d/ezmeral-data-fabric.repo <<EOF

[maprtech]

name=MapR Technologies

baseurl=http://package.mapr.com/releases/v7.0.0/redhat/

enabled=1

gpgcheck=1

gpgkey=http://package.mapr.com/releases/pub/maprgpg.key

EOF

sudo rpm --import http://package.mapr.com/releases/pub/maprgpg.key

  • Update Package Manager:

 CentOS/RHEL: sudo yum update


3. Install Core Data Fabric Packages

  • Install Core Packages:
    • Install essential packages, including core components, CLDB, and webserver.

# For CentOS/RHEL

sudo yum install mapr-core mapr-cldb mapr-fileserver mapr-zookeeper mapr-webserver

  • Install Additional Services:
    • Based on your needs, install additional services such as the MapR NFS gateway, the Resource Manager, or the Node Manager for YARN.

sudo yum install mapr-nfs mapr-resourcemanager mapr-nodemanager


4. Configure ZooKeeper and CLDB

  • ZooKeeper Configuration:
    • Identify nodes to act as ZooKeeper servers (at least three are recommended for high availability).
    • Add each ZooKeeper node to /opt/mapr/zookeeper/zookeeper-3.x.x/conf/zoo.cfg:

server.1=<zk1_hostname>:2888:3888

server.2=<zk2_hostname>:2888:3888

server.3=<zk3_hostname>:2888:3888

  • Start ZooKeeper on each ZooKeeper node:

sudo systemctl start mapr-zookeeper
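Once started, each ZooKeeper server can be queried directly to confirm the quorum is forming; a sketch, assuming MapR's ZooKeeper listens on its default client port 5181 and that nc (netcat) is installed:

```shell
# Ask each ZooKeeper server for its status: one node should report
# "Mode: leader" and the others "Mode: follower" once the quorum is up.
for zk in <zk1_hostname> <zk2_hostname> <zk3_hostname>; do
  echo -n "$zk: "
  echo srvr | nc "$zk" 5181 | grep Mode
done
```

Newer ZooKeeper releases may require the srvr four-letter command to be whitelisted before it responds.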

  • CLDB Configuration:
    • Specify the nodes that will run the CLDB service.
    • Edit /opt/mapr/conf/cldb.conf and add the IPs or hostnames of the CLDB nodes:

cldb.zookeeper.servers=<zk1_hostname>:5181,<zk2_hostname>:5181,<zk3_hostname>:5181


5. Cluster Initialization

  • Set Up the MapR License:
    • Copy the HPE Ezmeral Data Fabric license file to /opt/mapr/conf/mapr.license on the CLDB node.
  • Run Cluster Installer:
    • Use the configure.sh script to initialize the cluster. Run this script on each node:

sudo /opt/mapr/server/configure.sh -C <cldb1_ip>:7222,<cldb2_ip>:7222 -Z <zk1_hostname>,<zk2_hostname>,<zk3_hostname>

  • The -C flag specifies the CLDB nodes, and -Z specifies the ZooKeeper nodes.
  • Start Warden Services:
    • On each node, start the mapr-warden service to initiate the core services:

sudo systemctl start mapr-warden


6. Verify Cluster Status

  • MapR Control System (MCS):
    • Access the MCS web UI to monitor the cluster. Open https://<cldb_node_ip>:8443 in a browser.
    • Log in with the default credentials and verify the health and status of the cluster components.
  • CLI Verification:
    • Run the following command on the CLDB node to check cluster status:

maprcli node list -columns hostname,ip

  • Check the status of services using:

maprcli service list


7. Additional Configuration (Optional)

  • NFS Gateway Setup:
    • Install and configure the MapR NFS gateway to expose cluster data as NFS shares.

sudo yum install mapr-nfs

sudo systemctl start mapr-nfs

  • High Availability (HA) Setup:
    • For high availability, consider adding redundant nodes for critical services (CLDB, ZooKeeper) and configuring failover settings.
  • Security Configuration:
    • Set up user roles and permissions using the maprcli command and configure Kerberos or TLS for secure authentication if needed.

8. Test the Cluster

  • Data Operations: Use the following commands to test basic operations:

# Create a new directory in the data fabric

hadoop fs -mkdir /test_directory

# Copy a file into the data fabric

hadoop fs -copyFromLocal localfile.txt /test_directory

# List files in the directory

hadoop fs -ls /test_directory

  • Service Health Check: Use the MCS or maprcli commands to ensure all services are running as expected.

9. Set Up Monitoring and Logging

  • MapR Monitoring:
    • Set up logging and monitoring for long-term maintenance. Configure mapr-metrics or integrate with external monitoring tools (e.g., Prometheus).
  • Backup and Recovery:
    • Enable volume snapshots and set up periodic backups for critical data.

Following these steps will give you a functional HPE Ezmeral Data Fabric 7.x cluster on Linux, ready for production workloads. Customize configurations based on your specific needs, especially around security, high availability, and resource allocation to get optimal performance from your environment.

Disk encryption

In HPE Ezmeral Data Fabric (formerly MapR), disk encryption (not just volume-level encryption) can provide added security by encrypting the entire storage disk at a low level, ensuring that data is protected as it is written to and read from physical storage. This approach is commonly implemented using Linux-based disk encryption tools on the underlying operating system, as HPE Ezmeral does not natively provide disk encryption functionality.

Steps to Set Up Disk Encryption for HPE Ezmeral Data Fabric on Linux

To encrypt disks at the OS level, use encryption tools like dm-crypt/LUKS (Linux Unified Key Setup), which is widely supported, integrates well with Linux, and offers flexibility for encrypting storage disks used by HPE Ezmeral Data Fabric.

1. Prerequisites

  • Linux system with root access where HPE Ezmeral Data Fabric is installed.
  • Unformatted disk(s) or partitions that you plan to use for HPE Ezmeral storage.
  • Backup any important data, as disk encryption setups typically require formatting the disk.

2. Install Required Packages

Ensure cryptsetup is installed, as it provides the tools necessary for LUKS encryption.

sudo apt-get install cryptsetup   # For Debian/Ubuntu systems

sudo yum install cryptsetup       # For CentOS/RHEL systems

3. Encrypt the Disk with LUKS

  1. Set Up LUKS Encryption on the Disk:
    • Choose the target disk (e.g., /dev/sdb), and initialize it with LUKS encryption. This command will erase all data on the disk.

sudo cryptsetup luksFormat /dev/sdb

  2. Open and Map the Encrypted Disk:
    • Unlock the encrypted disk and assign it a name (e.g., encrypted_data).

sudo cryptsetup luksOpen /dev/sdb encrypted_data

  3. Format the Encrypted Disk:
    • Create a file system (such as ext4) on the encrypted disk mapping.

sudo mkfs.ext4 /dev/mapper/encrypted_data

  4. Mount the Encrypted Disk:
    • Create a mount point for the encrypted storage, and then mount it.

sudo mkdir -p /datafabric

sudo mount /dev/mapper/encrypted_data /datafabric

  5. Configure Automatic Unlocking on Reboot (Optional):
    • To automate unlocking on system boot, you can store the passphrase in a secure location or use a network-based key server, but this may affect security.
    • Alternatively, you can manually unlock the disk after each reboot using cryptsetup luksOpen.
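If unattended unlocking is acceptable for your threat model, a root-only keyfile can be registered as an additional LUKS key and referenced from /etc/crypttab; a sketch, reusing the device (/dev/sdb), mapping name (encrypted_data), and mount point from the steps above, with the keyfile path an assumption:

```shell
# Create a random keyfile readable only by root.
sudo dd if=/dev/urandom of=/root/datafabric.key bs=1024 count=4
sudo chmod 0400 /root/datafabric.key

# Register the keyfile in a spare LUKS key slot (the passphrase keeps working).
sudo cryptsetup luksAddKey /dev/sdb /root/datafabric.key

# Unlock at boot via /etc/crypttab, then mount via /etc/fstab.
echo 'encrypted_data /dev/sdb /root/datafabric.key luks' | sudo tee -a /etc/crypttab
echo '/dev/mapper/encrypted_data /datafabric ext4 defaults 0 2' | sudo tee -a /etc/fstab
```

Anyone who can read the keyfile can unlock the disk, so this trades some security for convenience, as noted above.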

4. Update HPE Ezmeral to Use the Encrypted Disk

  1. Update HPE Ezmeral Configuration:
    • Point HPE Ezmeral Data Fabric’s configuration to use the new encrypted mount point (/datafabric) for its storage.
    • Update relevant configuration files, such as disksetup or fstab, as needed, to use the encrypted path.
  2. Restart HPE Ezmeral Services:
    • Restart services to ensure that the system is using the encrypted disk for data operations.
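Note that MapR-FS normally consumes raw, unformatted devices rather than a mounted file system. If the encrypted disk is to back MapR-FS itself, one approach is to hand the opened LUKS mapping (not an ext4 mount) to the disksetup utility; a hedged sketch, with the list-file path an assumption:

```shell
# List the encrypted device mapping for MapR's disksetup utility.
echo '/dev/mapper/encrypted_data' | sudo tee /tmp/mapr_disks.txt

# Format the device for MapR-FS and restart services to pick it up.
sudo /opt/mapr/server/disksetup -F /tmp/mapr_disks.txt
sudo systemctl restart mapr-warden
```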

5. Verify Disk Encryption

To confirm the encryption is working correctly:

  1. Check the encrypted device status:

sudo cryptsetup -v status encrypted_data

  2. Confirm that the mount point is in use by HPE Ezmeral and verify that data written to the directory is stored on the encrypted disk.

Summary

Using LUKS for disk encryption on the HPE Ezmeral Data Fabric platform provides robust data-at-rest security at the storage disk level. This setup ensures that any data written to physical disks is encrypted, protecting it from unauthorized access at a hardware level.

How to encrypt disk in HPE ezmeral

In HPE Ezmeral Data Fabric (formerly MapR), disk encryption is a key component for securing data at rest. HPE Ezmeral supports data-at-rest encryption through encryption keys and policies that enable disk-level encryption, protecting data on disk without impacting application performance.

Here’s a guide to setting up disk encryption in HPE Ezmeral:

1. Prerequisites

  • HPE Ezmeral Data Fabric 6.x or 7.x installed.
  • Access to MapR Control System (MCS) or command-line interface (CLI) to configure encryption settings.
  • MapR Core Security enabled. Data encryption requires core security to be enabled for HPE Ezmeral Data Fabric.
  • Access to the MapR Key Management System (KMS), or alternatively, an external KMS can also be used, depending on your setup and security requirements.

2. Configure MapR Security and KMS (Key Management System)

  1. Enable Core Security:
    • During HPE Ezmeral installation, make sure core security is enabled. If it’s not, you’ll need to enable it as encryption depends on core security services.
  2. Configure MapR KMS:
    • The MapR KMS service handles key management for encryption. Ensure that the KMS service is running, as it is essential for generating and managing encryption keys.
    • You can check the KMS status through the MCS or by using:

maprcli kms keys list

  3. Set Up an External KMS (Optional):
    • If you need to integrate with an external KMS (such as AWS KMS or other supported key management systems), configure it to work with HPE Ezmeral as per the system’s documentation.

3. Generate Encryption Keys

  1. Use the maprcli to Generate Keys:
    • You can create encryption keys using the maprcli command. These keys are necessary for encrypting and decrypting data on the disks.
    • To create an encryption key, use:

maprcli kms keys create -keyname <encryption_key_name>

  2. Store and Manage Keys:
    • After generating the key, you can use it in volume policies or for specific datasets. Key management can be handled directly within MapR KMS or through integrated KMS if you’re using an external provider.

4. Apply Encryption Policies to Volumes

Encryption in HPE Ezmeral is typically applied at the volume level:

  1. Create a Volume with Encryption:
    • When creating a new volume, specify that it should be encrypted and assign it the encryption key generated in the previous step.
    • For example:

maprcli volume create -name <volume_name> -path /<volume_path> -encryptiontype 1 -keyname <encryption_key_name>

    • encryptiontype 1 specifies that the volume should be encrypted.
  2. Apply Encryption to Existing Volumes:
    • You can also apply encryption to existing volumes by updating them with an encryption key.
    • Run:

maprcli volume modify -name <volume_name> -encryptiontype 1 -keyname <encryption_key_name>

  3. Verify Volume Encryption:
    • You can check the encryption status of volumes in MCS or by using:

maprcli volume info -name <volume_name>

    • Look for the encryption field to confirm that it’s enabled.

5. Test Disk Encryption

  • Once encryption is configured, run tests to confirm data security. Write data to the encrypted volume and verify that unauthorized users or applications cannot access the data without appropriate permissions.
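As a quick smoke test, data can be written to and read back from the encrypted volume; a sketch, with the volume path a placeholder from the earlier create command:

```shell
# Write a small file into the encrypted volume, read it back, then clean up.
echo "sensitive test data" | hadoop fs -put - /<volume_path>/enc_test.txt
hadoop fs -cat /<volume_path>/enc_test.txt
hadoop fs -rm /<volume_path>/enc_test.txt
```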

6. Monitor and Rotate Keys (Ongoing Maintenance)

  1. Monitor the Health of Encrypted Volumes:
    • Use the MCS or CLI to monitor the health of volumes, checking for any issues related to encryption or access.
  2. Rotate Encryption Keys:
    • To rotate keys, generate a new key and assign it to the volume. You may need to use the MapR KMS tool or an external KMS provider depending on your setup.
    • Regularly rotating keys is a best practice to enhance security.

7. (Optional) Use External KMS with HPE Ezmeral

If you prefer an external KMS, configure it as per your organization’s security requirements. External KMS options may provide additional features, such as compliance reporting and enhanced security controls.

Summary

By following these steps, you can set up and manage disk encryption within HPE Ezmeral Data Fabric, ensuring that data at rest remains protected. Regularly review encryption policies, monitor encrypted volumes, and rotate keys to maintain robust data security.

Can use dm-crypt/LUKS encryption after Linux OS is installed ?

Yes, you can use dm-crypt/LUKS to encrypt disks or specific partitions after Linux is installed, although it may require some additional steps, especially if you’re encrypting a drive that already contains data. Here’s a general guide on how to use dm-crypt/LUKS for encryption post-installation:

Steps to Use dm-crypt/LUKS for Post-Installation Disk Encryption

Option 1: Encrypting a Non-System Partition or Additional Disk

If you want to encrypt a separate partition or disk that doesn’t contain the OS (e.g., a secondary data disk), this process is straightforward.

  1. Backup Data:
    • If the disk or partition already contains data, make a backup, as this process will erase the data on the disk.
  2. Install Required Packages:
    • Ensure cryptsetup is installed.

sudo apt update

sudo apt install cryptsetup

  3. Initialize the LUKS Partition:
    • Replace /dev/sdX with the disk or partition you want to encrypt (e.g., /dev/sdb1).

sudo cryptsetup luksFormat /dev/sdX

    • Confirm and enter a passphrase when prompted. This passphrase will be required to unlock the partition.
  4. Open the Encrypted Partition:
    • This maps the encrypted partition to a device you can interact with.

sudo cryptsetup open /dev/sdX encrypted_data

  5. Format the Partition:
    • Format the encrypted partition to your preferred file system (e.g., ext4).

sudo mkfs.ext4 /dev/mapper/encrypted_data

  6. Mount the Partition:
    • Create a mount point and mount the partition.

sudo mkdir /mnt/encrypted_data

sudo mount /dev/mapper/encrypted_data /mnt/encrypted_data

  7. Configure Automatic Mounting (Optional):
    • To have the partition prompt for a passphrase at boot, edit /etc/crypttab and /etc/fstab.
    • Add an entry to /etc/crypttab:

encrypted_data /dev/sdX none luks

    • Then, add an entry to /etc/fstab to mount it at boot:

/dev/mapper/encrypted_data /mnt/encrypted_data ext4 defaults 0 2

Option 2: Encrypting the Root or System Partition Post-Installation

Encrypting the root or system partition after installation is more complex, as it requires bootloader changes and adjustments to the initramfs. It typically involves the following high-level steps:

  1. Boot from a Live USB: Boot from a live Linux USB to avoid active usage of the root partition.
  2. Backup System: Make a complete backup of your system, as these steps involve substantial changes to the partition.
  3. Set Up LUKS on Root Partition:
    • Unmount the root partition and initialize it with luksFormat, then reopen it.
  4. Restore Data: Restore your data to the encrypted root partition.
  5. Update Initramfs: Update your initramfs configuration to include the LUKS configuration so it can prompt for a password at boot.
  6. Update Bootloader: Modify the bootloader (e.g., GRUB) to support LUKS so that it can unlock the root partition at boot.

Important Considerations

  • Performance: LUKS encryption can introduce some performance overhead, particularly on older hardware.
  • Password Management: Store your passphrase securely, as losing it will make the data irrecoverable.
  • Backup Regularly: Encrypted partitions make data recovery complex, so regular backups are essential.

Using dm-crypt/LUKS after installation is feasible, especially for non-system partitions, and provides strong encryption for securing sensitive data on Linux.

Example on HPE ezmeral install on 12 servers cluster

Setting up an HPE Ezmeral Data Fabric cluster on 12 servers involves several key steps, including planning, installation, configuration, and validation. Here’s a step-by-step guide to get you started:

1. Plan the Cluster Configuration

  • Determine Node Roles: Decide which servers will handle specific roles. For a 12-node setup, you could designate:
    • 3 nodes for core services (e.g., CLDB, ZooKeeper, Resource Manager).
    • 9 nodes for data and compute (e.g., Node Manager, FileServer services, Spark, HBase, etc.).
  • Network and Hostname Configuration:
    • Ensure each server has a static IP address, and configure hostnames consistently across nodes.
    • Set up DNS or /etc/hosts entries for name resolution.
  • Storage: Prepare storage volumes for the Data Fabric filesystem and other data services, ideally with high-throughput storage for each node.

2. Prepare the Servers

  • OS Requirements: Install a compatible Linux distribution on each server (e.g., RHEL, CentOS, or Ubuntu).
  • User and Security Settings:
    • Create a user for Ezmeral operations (typically mapr).
    • Disable SELinux or configure it to permissive mode.
    • Ensure firewall ports are open for required services (e.g., CLDB, ZooKeeper, Warden).
  • System Configuration:
    • Set kernel parameters according to Ezmeral requirements (e.g., adjust vm.swappiness and fs.file-max settings).
    • Synchronize time across all servers with NTP.
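The kernel and time settings above can be applied persistently on each node; a sketch with typical values (verify the exact figures against the Ezmeral tuning documentation for your release; chronyd is assumed for NTP):

```shell
# Persist kernel parameters commonly tuned for Data Fabric nodes.
sudo tee /etc/sysctl.d/99-ezmeral.conf >/dev/null <<EOF
vm.swappiness = 1
fs.file-max = 1048576
EOF
sudo sysctl --system

# Keep cluster clocks in sync.
sudo systemctl enable --now chronyd
```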

3. Install Prerequisite Packages

  • Install necessary packages for HPE Ezmeral Data Fabric, such as a supported Java JDK (check the 7.x compatibility matrix for the required version), Python, and other utilities.
  • Ensure SSH key-based authentication is configured for the mapr user across all nodes, allowing passwordless SSH access.
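Passwordless SSH for the mapr user can be distributed with a short loop; a sketch, with hostnames node01..node12 as placeholders for your 12 servers:

```shell
# Generate a key for the mapr user (run as mapr), then push it to every node.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
for host in node{01..12}.example.com; do
  ssh-copy-id -i ~/.ssh/id_ed25519.pub "mapr@${host}"
done
```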

4. Download and Install HPE Ezmeral Data Fabric Packages

  • Obtain the installation packages for HPE Ezmeral Data Fabric 7.x from HPE’s official site.
  • Install the required packages on each node, either manually or using a script. Required packages include mapr-core, mapr-cldb, mapr-zookeeper, mapr-fileserver, and mapr-webserver.

5. Install and Configure ZooKeeper

  • On the nodes designated to run ZooKeeper, install the ZooKeeper package (mapr-zookeeper) and configure it.
  • Update the ZooKeeper configuration (zoo.cfg under /opt/mapr/zookeeper/zookeeper-3.x.x/conf/) to specify the IP addresses of all ZooKeeper nodes.
  • Start the ZooKeeper service on each of these nodes.

6. Install and Configure CLDB

  • Install the mapr-cldb package on the nodes you’ve chosen to run CLDB.
  • Configure CLDB settings in /opt/mapr/conf/cldb.conf and specify the cldb.nodes parameter to list the IPs or hostnames of your CLDB nodes.
  • Start the CLDB service on these nodes.

7. Install Other Core Services

  • Install the following packages on appropriate nodes:
    • mapr-webserver for the Control System (MCS).
    • mapr-resourcemanager and mapr-nodemanager if using YARN.
  • Start each of these services as needed.

8. Install Data and Compute Services

  • FileServer: Install mapr-fileserver on each node where you plan to store data. This typically includes all nodes except the dedicated service nodes.
  • NodeManager: Install mapr-nodemanager on the data and compute nodes if running YARN for resource management.
  • Database and Table Services: Install services like mapr-db and mapr-hbase on nodes where you need NoSQL capabilities or HBase tables.

9. Cluster Configuration

  • Use maprcli commands or the MCS web interface to configure and manage your cluster. For example:
    • Set up cluster-wide replication and volume configurations.
    • Define quotas and user permissions for data access.
    • Configure replication and fault tolerance settings.

10. Start and Verify the Cluster

  • Start all services using systemctl or service commands as needed.
  • Check the health of the cluster using: maprcli node list -columns ip,svc,health
  • Access the MapR Control System (MCS) GUI by going to https://<cldb-node>:8443 and log in to confirm services are running correctly.

11. Post-Installation and Validation

  • Validate that all nodes are connected and the services are running.
  • Run health checks to ensure there are no configuration issues.
  • Monitor storage utilization and data replication to confirm that volumes and data distribution meet your needs.
  • Set up monitoring (e.g., Grafana, Prometheus) if you want real-time insight into cluster performance.

This setup provides a foundational Ezmeral Data Fabric cluster with core, data, and compute services distributed across 12 nodes for high availability and scalability. Each step includes installing necessary services, configuring core components, and verifying that the cluster operates as expected.

Deploy monitoring solution with Prometheus and Grafana on premise in HA mode

Deploying Prometheus and Grafana in High Availability (HA) mode on-premises ensures monitoring continuity and data resilience. In this setup, Prometheus will run in HA mode with redundancy and Grafana will connect to the Prometheus instances, also configured for HA.

Below are the steps to deploy Prometheus and Grafana on-premises with HA:


1. Plan the Architecture

  • Prometheus Instances:
    • Set up at least two Prometheus instances in HA mode for redundancy.
    • Each Prometheus instance will scrape the same set of targets independently and store its own local data.
  • Grafana Instances:
    • Deploy at least two Grafana instances in HA mode, load balanced to ensure availability.
    • Grafana will connect to both Prometheus instances and aggregate the metrics.
  • Storage:
    • Use a distributed storage system like Thanos, VictoriaMetrics, or Prometheus remote storage (like Cortex or Mimir) for long-term data storage.
    • Configure a shared storage for Grafana, or use a SQL database (e.g., MySQL, PostgreSQL) to keep dashboards and configuration in sync.

2. Set Up Prometheus in HA Mode

Step 2.1: Install Prometheus

  • Download and extract Prometheus on each node:

wget https://github.com/prometheus/prometheus/releases/download/v2.37.0/prometheus-2.37.0.linux-amd64.tar.gz

tar -xvf prometheus-2.37.0.linux-amd64.tar.gz

cd prometheus-2.37.0.linux-amd64

  • Copy the Prometheus binary to /usr/local/bin and set up the configuration directory (/etc/prometheus).

Step 2.2: Configure Prometheus

  • Create a prometheus.yml configuration file in /etc/prometheus for each instance:

global:

  scrape_interval: 15s

scrape_configs:

  - job_name: 'your_targets'

    static_configs:

      - targets: ['<target_ip1>:<port>', '<target_ip2>:<port>']

  • For HA, each Prometheus instance must be configured identically with the same scrape targets and rules.
  • High Availability Labeling:
    • Prometheus itself has no clustering flag; the instances simply run side by side with identical scrape configurations. To tell their data apart (and to let a deduplicating query layer such as Thanos merge them), give each instance a unique external label, e.g. replica: a on one node and replica: b on the other, under global.external_labels in prometheus.yml.
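One widely used pattern (the Thanos/remote-storage model) keeps the scrape configuration identical on both instances and distinguishes the replicas only by an external label, which a query layer can use for deduplication; a sketch, with the replica value changed per node and the label names as assumptions:

```shell
# Write prometheus.yml for this instance; set REPLICA=b on the second node.
REPLICA=a
sudo tee /etc/prometheus/prometheus.yml >/dev/null <<EOF
global:
  scrape_interval: 15s
  external_labels:
    cluster: onprem
    replica: ${REPLICA}
scrape_configs:
  - job_name: 'your_targets'
    static_configs:
      - targets: ['<target_ip1>:<port>', '<target_ip2>:<port>']
EOF
```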

Step 2.3: Start Prometheus

  • Create a systemd service file for each Prometheus instance at /etc/systemd/system/prometheus.service:

[Unit]

Description=Prometheus

After=network.target

[Service]

User=prometheus

ExecStart=/usr/local/bin/prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/var/lib/prometheus --web.enable-lifecycle

[Install]

WantedBy=multi-user.target

  • Enable and start each Prometheus instance:

sudo systemctl enable prometheus

sudo systemctl start prometheus


3. Install and Configure Thanos (Optional for Long-Term Storage)

  • Deploy Thanos Sidecar alongside each Prometheus instance for storing data in a distributed store and enabling HA Prometheus queries.
  • Thanos Sidecar:
    • Set up a sidecar container or service to work with each Prometheus instance.
    • It will upload data to an object storage (e.g., S3, MinIO) and enable querying of both Prometheus instances as a unified source.

4. Deploy Grafana in HA Mode

Step 4.1: Install Grafana

  • Download and install Grafana on each node:

wget https://dl.grafana.com/oss/release/grafana-8.0.0.linux-amd64.tar.gz

tar -zxvf grafana-8.0.0.linux-amd64.tar.gz

  • Copy the Grafana binaries and set up the configuration directory (/etc/grafana).

Step 4.2: Configure Grafana

  • In the Grafana configuration file (/etc/grafana/grafana.ini), set up the database to store Grafana data centrally:

[database]

type = postgres

host = <database_host>:5432

name = grafana

user = grafana_user

password = grafana_password

  • Add both Prometheus instances as data sources in Grafana. Note that Grafana does not load balance between data sources automatically; dashboards query whichever data source they are configured with, so either point them at a deduplicating query layer (e.g., Thanos Querier) or keep both sources defined for manual failover.
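Both data sources can also be provisioned from a file so every Grafana instance starts with identical configuration; a sketch using Grafana's file-based provisioning, with IPs as placeholders:

```shell
# Provision both Prometheus instances as data sources on each Grafana node.
sudo tee /etc/grafana/provisioning/datasources/prometheus.yaml >/dev/null <<EOF
apiVersion: 1
datasources:
  - name: Prometheus-A
    type: prometheus
    access: proxy
    url: http://<prometheus1_ip>:9090
    isDefault: true
  - name: Prometheus-B
    type: prometheus
    access: proxy
    url: http://<prometheus2_ip>:9090
EOF
```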

Step 4.3: Start Grafana

  • Set up a systemd service for Grafana:

[Unit]

Description=Grafana

After=network.target

[Service]

User=grafana

ExecStart=/usr/local/bin/grafana-server -config /etc/grafana/grafana.ini

[Install]

WantedBy=multi-user.target

  • Enable and start Grafana:

sudo systemctl enable grafana-server

sudo systemctl start grafana-server


5. Set Up Load Balancers for HA

  • Prometheus Load Balancer:
    • Set up a load balancer in front of the Prometheus instances to ensure that requests are evenly distributed across instances.
  • Grafana Load Balancer:
    • Set up another load balancer for the Grafana instances to distribute user access and enable failover.

6. Verify and Test the HA Setup

  • Prometheus:
    • Test that both Prometheus instances are running independently by accessing them via <node_ip>:9090.
    • Use Thanos Querier (if configured) to query both Prometheus instances as a single source.
  • Grafana:
    • Log in to Grafana via the load balancer IP, add Prometheus as a data source, and create a sample dashboard.
    • Simulate a failure on one Grafana instance and ensure that the other instance handles the load transparently.

7. Enable Monitoring and Alerting

  • Configure Alertmanager for Prometheus:
    • Set up Alertmanager to handle alerts in case of any issues.
    • Use HA by deploying multiple Alertmanager instances with clustering.
  • Set up alerts in Grafana for visualization and notifications based on key metrics and alert rules.
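Unlike Prometheus, Alertmanager does have native clustering; a sketch of two instances gossiping as one cluster, with IPs and the config path as placeholders:

```shell
# On instance 1 (instance 2 mirrors this, pointing --cluster.peer at instance 1).
alertmanager \
  --config.file=/etc/alertmanager/alertmanager.yml \
  --cluster.listen-address=0.0.0.0:9094 \
  --cluster.peer=<alertmanager2_ip>:9094
```

List both Alertmanager instances in each Prometheus server's alerting configuration so alerts survive the loss of either one.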

Summary of Key Points

  • HA Prometheus: Multiple Prometheus instances scraping the same targets, optionally with Thanos for long-term storage and aggregation.
  • HA Grafana: Multiple Grafana instances with a centralized database for dashboards, load-balanced to ensure redundancy.
  • Alerting: Use Alertmanager in HA mode to handle alerts from Prometheus.

This HA setup for Prometheus and Grafana provides a robust monitoring solution that is resilient, scalable, and fault-tolerant.

unable to verify the first certificate (Kong)

The error “unable to verify the first certificate” typically indicates that the client (or server) cannot verify the certificate because it does not have the correct root certificate, intermediate certificate, or the certificate chain is incomplete. Here’s how you can troubleshoot and resolve this issue:

Common Causes of the Error:

  1. Missing Root or Intermediate Certificates
    • The server or client lacks the necessary CA certificates to complete the chain of trust.
  2. Self-Signed Certificate
    • If the certificate is self-signed, the server or client must explicitly trust the certificate.
  3. Incomplete Certificate Chain
    • The server might not be sending the entire certificate chain (intermediate certificates) along with the server certificate.
  4. Incorrect Client/Server Configuration
    • The client or server may not be configured to trust the CA that issued the certificate.

Steps to Fix the Issue:

1. Verify the Certificate Chain

Check whether the certificate chain is complete. You can use openssl to check the chain from the client side:

openssl s_client -connect your-kong-api:443 -showcerts

This will show the certificates presented by the server. Verify if the server provides the full chain, including the intermediate certificates.

2. Install Missing CA Certificates on the Client

If the client does not trust the CA that issued the server certificate, you need to install the root CA certificate on the client. For example, on most systems, you can install the CA certificates by adding them to the trusted certificate store.

  • Linux (e.g., Ubuntu/Debian): Copy the CA certificate (e.g., ca.crt) to the /usr/local/share/ca-certificates/ directory and then update the certificate store:

sudo cp ca.crt /usr/local/share/ca-certificates/

sudo update-ca-certificates

  • Windows: Import the root certificate into the Trusted Root Certification Authorities store via the Certificate Manager.
  • macOS: You can import the CA certificate into Keychain Access and mark it as trusted.

3. Verify the Server-Side Configuration

If you’re managing the server (e.g., Kong API Gateway), ensure that the server is sending the complete certificate chain. You can provide both the server certificate and any intermediate certificates in the configuration.

For example, in Kong, you can configure SSL certificates like this:

curl -i -X POST http://localhost:8001/certificates \

--form "cert=@/path/to/server.crt" \

--form "key=@/path/to/server.key" \

--form "cert_alt=@/path/to/intermediate.crt"

Ensure that the server certificate and intermediate certificates are included in the cert field.

4. Test the Connection with the Correct CA

When testing using curl, ensure you include the correct root CA or intermediate CA:

curl -v --cacert /path/to/ca.crt https://your-kong-api/your-route

This will make curl use the specified CA certificate for validation.

5. Check for Self-Signed Certificates

If you’re using self-signed certificates, you’ll need to make sure that both the client and server are explicitly configured to trust the self-signed certificate.

For example, when using curl:

curl -v --key client.key --cert client.crt --cacert ca.crt https://your-kong-api/

If the certificate is self-signed and you want to bypass certificate verification (not recommended in production), you can use:

curl -v --insecure https://your-kong-api/

6. Include the Correct Intermediate Certificates

If the server is not sending the intermediate certificate (or you forgot to add it), make sure that the intermediate certificate is included in the chain. You can concatenate the server certificate and intermediate certificate into one file:

cat server.crt intermediate.crt > full_chain.crt

Then, use the full chain certificate for your server configuration.

Summary of Steps:

  1. Check the certificate chain using openssl s_client to see if all necessary certificates are presented.
  2. Ensure the client trusts the root CA by installing the root or intermediate CA on the client.
  3. Ensure the server presents the full certificate chain (server certificate + intermediate certificates).
  4. Test the connection using the correct CA with curl or another tool.
  5. Handle self-signed certificates by explicitly trusting them or bypassing verification in non-production environments.

By following these steps, you should be able to resolve the “unable to verify the first certificate” issue.
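The effect of a missing intermediate can be reproduced offline with a throwaway three-tier chain; every file name and subject below is illustrative:

```shell
# Build a throwaway chain: root CA -> intermediate CA -> server certificate
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -days 2 -subj "/CN=Demo Root"

openssl req -newkey rsa:2048 -nodes -keyout inter.key -out inter.csr \
  -subj "/CN=Demo Intermediate"
# The intermediate must carry basicConstraints CA:TRUE to be able to sign leaves
printf "basicConstraints=critical,CA:TRUE\n" > inter_ext.cnf
openssl x509 -req -in inter.csr -CA root.crt -CAkey root.key -CAcreateserial \
  -days 2 -extfile inter_ext.cnf -out inter.crt

openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=server.example.com"
openssl x509 -req -in server.csr -CA inter.crt -CAkey inter.key -CAcreateserial \
  -days 2 -out server.crt

# Verification succeeds only when the intermediate is supplied alongside
# the leaf; with -CAfile root.crt alone it fails with an incomplete chain.
openssl verify -CAfile root.crt -untrusted inter.crt server.crt   # prints: server.crt: OK
```

Dropping the `-untrusted inter.crt` argument from the last command reproduces the incomplete-chain failure, which is exactly what a server that omits its intermediate causes for every client.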

com.pingidentity.pf.datastore an error occurred while testing the connection Error PKIX path building failed

The error you’re encountering, PKIX path building failed, is related to SSL certificate validation in PingIdentity while attempting to establish a connection. Specifically, it indicates that the system could not verify the certificate path back to a trusted root certificate authority (CA). This is a common issue when a server certificate is either self-signed or not recognized by the trust store.

Here’s a breakdown of the error:

  1. PKIX path building failed: This means that Java’s SSL/TLS system could not build a valid certificate chain back to a trusted root certificate authority.
  2. SunCertPathBuilderException: unable to find valid certification path to requested target: This suggests that the certificate presented by the remote system is not trusted by the client making the request. The client’s trust store (Java keystore) does not have the necessary CA certificates to validate the server certificate.

Causes:

  • Self-signed certificate: If the server you’re trying to connect to uses a self-signed certificate, it won’t be trusted automatically.
  • Untrusted CA: The certificate is signed by a CA that’s not included in the default trust store of the Java Virtual Machine (JVM).
  • Missing intermediate certificates: The certificate chain might be incomplete, missing intermediate certificates between the server’s certificate and the trusted root.
  • Expired or revoked certificates: The server certificate could be expired or revoked, leading to validation failure.

Solutions:

1. Import the Server’s Certificate to the Java Truststore

You need to import the server’s certificate (or the intermediate CA certificates) into the Java trust store to ensure it’s recognized as a trusted certificate.

Steps:

  • Obtain the server certificate:

openssl s_client -connect <server-host>:<port> -showcerts

This will return the server’s SSL certificate chain. Copy the relevant certificate (in PEM format).

  • Save the certificate as server.crt.
  • Import the certificate into the Java trust store:

keytool -import -alias <alias_name> -keystore <path_to_java_home>/lib/security/cacerts -file server.crt

The default password for the trust store is usually changeit, but this can vary.

  • Restart your PingFederate server or application to ensure the new trust store is loaded.

2. Use a Valid SSL Certificate

  • If the server is using a self-signed certificate, consider replacing it with one signed by a trusted public CA (e.g., Let’s Encrypt, GlobalSign, etc.).
  • Ensure the entire certificate chain, including intermediate certificates, is properly configured on the server.

3. Disable SSL Validation (Not Recommended in Production)

You can temporarily disable SSL validation to allow connections without certificate verification. This is usually done in testing environments or when working with self-signed certificates.

Example: In Java, you can disable SSL certificate validation by setting a custom trust manager, but this approach is not secure and should be avoided in production:

import javax.net.ssl.*;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

// Trust manager that accepts every certificate -- insecure, for testing only
TrustManager[] trustAllCerts = new TrustManager[]{
    new X509TrustManager() {
        public X509Certificate[] getAcceptedIssuers() {
            return new X509Certificate[0];
        }
        public void checkClientTrusted(X509Certificate[] certs, String authType) {}
        public void checkServerTrusted(X509Certificate[] certs, String authType) {}
    }
};

SSLContext sc = SSLContext.getInstance("SSL");
sc.init(null, trustAllCerts, new SecureRandom());
HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());

This solution is not recommended for production environments.

4. Verify the Server Certificate

Ensure that the certificate on the server is not expired, properly configured, and includes all intermediate certificates. You can use tools like:

openssl s_client -connect <server-host>:<port> -showcerts

5. Update the JVM Truststore (if outdated)

If you are using an old version of Java, the default trust store may not include modern root CAs. You can update your JVM or manually update the CA certificates:

  • Download a fresh version of the cacerts file from the latest JVM or a trusted source.
  • Replace your current cacerts file (located in JAVA_HOME/lib/security) with the new one.

6. Proxy Configuration

If you’re using a proxy, make sure the proxy server has the necessary CA certificates and that PingFederate is properly configured to connect through the proxy.

Conclusion

To resolve the PKIX path building failed error, you will likely need to add the server certificate (or its CA) to your Java trust store. Ensure the server’s certificate chain is correctly configured and, if using self-signed certificates, import them into the trust store. Avoid disabling SSL validation in production environments due to security risks.

PING Identity

The error you’re encountering is related to certificate validation and occurs when the system is unable to establish a valid certificate chain to the requested target (usually an external service or API). Specifically, the error message:

SunCertPathBuilderException: unable to find valid certification path to requested target

indicates that the Java application (in this case, Ping Identity) is trying to connect to an HTTPS service, but the service’s SSL/TLS certificate is either:

  • Self-signed, or
  • Issued by a Certificate Authority (CA) that is not trusted by the Java trust store.

Here’s how you can troubleshoot and resolve this issue:

Step 1: Verify the SSL/TLS Certificate

  1. Check the service’s certificate: Use a browser or a tool like openssl to verify the service’s certificate chain:

openssl s_client -connect <hostname>:<port> -showcerts

This will display the certificate chain used by the server. Ensure that the server certificate is valid and that intermediate and root certificates are also included.

  2. Check if the certificate is self-signed: If the service is using a self-signed certificate or a certificate from a CA not included in the default Java trust store, you’ll need to manually trust it.

Step 2: Add the Certificate to the Java Trust Store

You’ll need to import the certificate into the Java trust store so that Java can trust it.

  1. Export the server certificate:
    • Save the certificate to a file using your browser or the openssl command.

For example:

echo | openssl s_client -connect <hostname>:<port> | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > server-cert.pem

  2. Import the certificate into Java’s trust store: Use the keytool command to import the certificate:

sudo keytool -import -alias <alias-name> -file <path-to-cert> -keystore $JAVA_HOME/lib/security/cacerts

Replace:

  1. <alias-name>: A unique alias for the certificate.
  2. <path-to-cert>: The path to the certificate file you saved.

The default password for the Java trust store is usually changeit.

  3. Verify the certificate import: You can verify if the certificate has been successfully imported by listing the contents of the trust store:

sudo keytool -list -keystore $JAVA_HOME/lib/security/cacerts

Step 3: Test the Connection Again

After importing the certificate, restart your application and test the connection again to ensure the error is resolved.

Additional Considerations:

  • Custom Trust Store: If your application is using a custom trust store (not the default Java trust store), ensure that the certificate is added to that trust store instead.
  • CA Certificates: If the certificate is from a trusted CA, ensure that your system has the correct root CA certificates in its trust store.

By importing the certificate into the Java trust store, you should be able to resolve the PKIX path building failed error and establish a successful connection.

How to verify mtls client cert via curl in Kong API gateway

To verify mTLS (Mutual TLS) client certificates in Kong API Gateway using curl, you will need to have:

  1. A valid client certificate and key that the server can verify.
  2. The server’s CA certificate to trust the server’s certificate.

Here’s a step-by-step guide to verify the mTLS setup using curl:

1. Ensure mTLS is enabled in Kong

Make sure that the mTLS authentication plugin is enabled on the route, service, or globally in your Kong instance. For example, you can check if the mtls-auth plugin is applied:

curl http://localhost:8001/plugins

If it’s not enabled, you can add the mTLS plugin to a specific route or service:

curl -i -X POST http://localhost:8001/services/{service-id}/plugins \
  --data "name=mtls-auth"

2. Ensure the Client Certificate is Associated with a Consumer

To match the client certificate with a consumer in Kong, you need to associate the Common Name (CN) or Subject Alternative Name (SAN) in the certificate with a consumer.

curl -i -X POST http://localhost:8001/consumers/{consumer-id}/mtls_auth \
  --data "subject_name=client.example.com"

This ensures that when the client presents the certificate, Kong will map the request to the correct consumer based on the CN.
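If the mapping fails, check what subject the certificate actually carries before debugging Kong. A quick sketch (client.crt is illustrative and generated here only for the demonstration):

```shell
# Generate a sample client certificate (stand-in for your real one)
openssl req -x509 -newkey rsa:2048 -nodes -keyout client.key \
  -out client.crt -days 1 -subj "/CN=client.example.com"

# Print the subject and expiry; the CN shown here must match the
# subject_name registered with the Kong consumer
openssl x509 -in client.crt -noout -subject -enddate
```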

3. Verify mTLS using curl

To perform mTLS using curl, you will need:

  • The client certificate and client key.
  • The CA certificate that Kong uses to verify the client certificate.

Use the following curl command to verify mTLS:


curl -v https://{KONG_HOST}:{KONG_PORT}/{route-path} \
  --cert /path/to/client-cert.pem \
  --key /path/to/client-key.pem \
  --cacert /path/to/ca-cert.pem

  • --cert /path/to/client-cert.pem: Path to the client certificate file.
  • --key /path/to/client-key.pem: Path to the client private key file.
  • --cacert /path/to/ca-cert.pem: Path to the CA certificate that Kong uses to verify the client certificate.
  • -v: Verbose output to see the handshake details.

4. Example Command

For example, if Kong is running on localhost, the route is /secure-service, and you have the client certificate and key:

curl -v https://localhost:8443/secure-service \
  --cert /etc/ssl/client-cert.pem \
  --key /etc/ssl/client-key.pem \
  --cacert /etc/ssl/ca-cert.pem

5. Check the Response

  • If the client certificate is valid and trusted, Kong will allow the request, and you will see a 200 OK response (or the relevant service response).
  • If the certificate validation fails, you may see errors like:
    • 400 Bad Request: Indicates issues with the certificate verification.
    • 403 Forbidden: Indicates the certificate was valid but the client was not authorized for the route.

6. Common Errors and Troubleshooting

  • 400 Bad Request: SSL certificate validation failed: This usually happens when the client certificate is not signed by the trusted CA or doesn’t match the subject name expected by Kong.
  • 403 Forbidden: mTLS authentication failed: This happens if the certificate’s CN or SAN does not match any configured consumers in Kong.

Conclusion

Verifying mTLS in Kong API Gateway via curl involves ensuring that the client certificate and key are properly configured and Kong is set up to validate them. If configured correctly, Kong will authenticate the client using the certificate, and the request will proceed.

Hadoop ha Active/Active vs Active/Passive

Hadoop High Availability (HA): Active/Active vs. Active/Passive

When designing a Hadoop High Availability (HA) solution, two common approaches are Active/Active and Active/Passive. These strategies help ensure data and service availability across failures and disasters. Let’s compare them in detail to help you understand their differences, benefits, challenges, and use cases.


1. Active/Active Hadoop Architecture

Overview:

  • Both sites are fully operational and handling workloads simultaneously.
  • Both clusters actively serve requests, and the load can be distributed between them.
  • Data is replicated between the sites, ensuring both sites are synchronized.

Key Components:

  • HDFS Federation: Each site has its own NameNode that manages a portion of the HDFS namespace.
  • YARN ResourceManager: Each site runs its own ResourceManager, coordinating job execution locally, but the jobs can be balanced between sites.
  • Zookeeper & JournalNodes Quorum: Spread across both sites to provide consistency and manage service coordination.
  • Cross-Site Replication: Hadoop’s DistCp or HDFS replication is used to replicate data across sites.
  • Hive/Impala Metastore: Shared between sites, ensuring consistent metadata.

Advantages:

  1. Load Balancing: Traffic and workloads can be distributed between the two active sites, reducing pressure on a single site.
  2. Low Recovery Time: In case of a site failure, the other site can immediately handle all workloads without downtime.
  3. Improved Resource Utilization: Both sites are fully operational, utilizing available resources efficiently.
  4. Fast Failover: If one site fails, the remaining site continues operating without needing to bring up services.

Challenges:

  1. Increased Complexity: Managing two active sites involves more complex setup, including federation, data replication, and synchronization.
  2. Data Consistency: Ensuring both sites have up-to-date data requires robust replication mechanisms and careful coordination.
  3. Conflict Resolution: Handling conflicting updates across both sites requires careful planning and automated conflict resolution strategies.

Operational Considerations:

  • Synchronization of Data: Ensure real-time or near real-time data replication across both sites.
  • Federated HDFS: Requires splitting data across multiple namespaces with NameNodes in each site.
  • Network Requirements: Reliable, high-bandwidth network links are essential for cross-site replication and synchronization.
  • Monitoring and Automation: Continuous monitoring of job failures, resource usage, and automatic load balancing/failover processes.

Best Use Cases:

  • Mission-Critical Workloads: Where zero downtime and continuous availability are essential.
  • Geographically Distributed Sites: When there is a need for global load balancing or when sites are geographically distant but still need to function as one.
  • High Load Systems: Systems that need to distribute workloads across multiple data centers to balance processing power.

2. Active/Passive Hadoop Architecture

Overview:

  • The Primary (Active) site handles all the workloads, while the Secondary (Passive) site is on standby.
  • In case of failure or disaster, the passive site takes over and becomes the active one.
  • The secondary site is synchronized with the active site, but it does not actively serve any workloads until failover occurs.

Key Components:

  • Active and Standby NameNodes: The active site runs the main NameNode, while the passive site hosts a standby NameNode.
  • YARN ResourceManager: Active ResourceManager at the primary site, standby ResourceManager at the secondary site.
  • Zookeeper & JournalNode Quorum: Distributed across both sites for fault tolerance and coordination.
  • HDFS Replication: Ensures data is replicated across both sites using HDFS data blocks.
  • Hive/Impala Metastore: Either synchronized or replicated between the two sites for metadata consistency.

Advantages:

  1. Simpler Setup: Easier to configure and manage compared to Active/Active architecture.
  2. Cost-Efficient: Since the passive site is not active until failover, fewer resources are consumed.
  3. Data Integrity: With a single active site at a time, data conflicts and consistency issues are less likely.
  4. Disaster Recovery: Ensures quick recovery of services in the event of failure or disaster in the primary site.

Challenges:

  1. Failover Time: There can be a delay in switching over from the active site to the passive site.
  2. Underutilized Resources: The passive site is mostly idle, which can lead to inefficient resource use.
  3. Single Point of Failure: Until failover occurs, there is a reliance on the primary site, creating a risk of downtime.
  4. Data Replication: You need to ensure that the passive site has the latest data in case of a failover.

Operational Considerations:

  • Automated Failover: Implement automated failover mechanisms using Zookeeper and JournalNodes to reduce downtime.
  • Data Synchronization: Ensure regular and real-time synchronization between the two sites to avoid data loss.
  • Disaster Recovery Testing: Regularly test the failover process to ensure that the passive site can take over with minimal downtime.
  • Backup and Monitoring: Maintain backups and monitor the status of both sites to detect any potential failures early.

Best Use Cases:

  • Cost-Conscious Environments: When you need a disaster recovery solution but don’t want the expense of running both sites at full capacity.
  • Disaster Recovery Scenarios: When one site is meant purely for recovery in case of major failure or disaster at the primary site.
  • Low-Volume Operations: When your workloads don’t justify the complexity and overhead of an active/active setup.
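The Active/Passive NameNode pair described above maps onto a small amount of HDFS configuration. A minimal hdfs-site.xml sketch, assuming a nameservice called mycluster, NameNodes nn1/nn2, and three JournalNodes (all hostnames are illustrative):

```xml
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1-host.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2-host.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

With automatic failover enabled, ZooKeeper Failover Controllers (ZKFC) on each NameNode host decide which node is active, which is what keeps the passive site's failover time low.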

How to integrate mTLS Kong certificate with secrets management infrastructure

To integrate mTLS (Mutual TLS) certificates used in Kong API Gateway with a secrets management infrastructure (such as HashiCorp Vault, AWS Secrets Manager, or other secret management tools), you can follow a systematic approach to store, retrieve, and rotate the certificates securely.

Key Steps:

  1. Generate mTLS certificates.
  2. Store certificates securely in the secrets management infrastructure.
  3. Configure Kong to retrieve and use certificates for mTLS.
  4. Automate certificate rotation for secure management.

Step 1: Generate mTLS Certificates

You need to generate the client and server certificates that Kong will use for mTLS.

  1. Generate a Certificate Authority (CA): First, generate a CA to sign the certificates.

openssl genrsa -out ca.key 2048

openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.crt \
  -subj "/CN=Kong-CA"

  2. Generate the Server Certificate: Generate a private key and a certificate signing request (CSR) for Kong, and sign it with your CA.

openssl genrsa -out kong-server.key 2048

openssl req -new -key kong-server.key -out kong-server.csr -subj "/CN=kong-server"

openssl x509 -req -in kong-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out kong-server.crt -days 365 -sha256

  3. Generate the Client Certificate: You also need a client certificate to authenticate incoming requests.

openssl genrsa -out kong-client.key 2048

openssl req -new -key kong-client.key -out kong-client.csr -subj "/CN=kong-client"

openssl x509 -req -in kong-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out kong-client.crt -days 365 -sha256
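Before storing the certificates anywhere, a quick end-to-end sanity check confirms that both leaves verify against the CA. This sketch regenerates the Step 1 files so it can run standalone:

```shell
# Recreate the Step 1 artifacts: CA, server certificate, client certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.crt -subj "/CN=Kong-CA"

openssl genrsa -out kong-server.key 2048
openssl req -new -key kong-server.key -out kong-server.csr -subj "/CN=kong-server"
openssl x509 -req -in kong-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out kong-server.crt -days 365 -sha256

openssl genrsa -out kong-client.key 2048
openssl req -new -key kong-client.key -out kong-client.csr -subj "/CN=kong-client"
openssl x509 -req -in kong-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out kong-client.crt -days 365 -sha256

# Both lines of output should end in ": OK"
openssl verify -CAfile ca.crt kong-server.crt kong-client.crt
```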

Step 2: Store Certificates Securely in a Secrets Management Infrastructure

Use a secrets management service like HashiCorp Vault, AWS Secrets Manager, or another system to store the mTLS certificates securely.

Example: Store Certificates in HashiCorp Vault

  1. Start by enabling the secrets engine in Vault:

vault secrets enable -path=pki pki

vault secrets tune -max-lease-ttl=87600h pki

  2. Store the CA certificate in Vault:

vault write pki/config/ca pem_bundle=@ca.crt

  3. Store the server certificate and key in Vault:

vault kv put secret/kong/server cert=@kong-server.crt key=@kong-server.key

  4. Store the client certificate and key:

vault kv put secret/kong/client cert=@kong-client.crt key=@kong-client.key

Example: Store Certificates in AWS Secrets Manager

  1. Use AWS CLI to store the server certificate:

aws secretsmanager create-secret --name kong/server \
  --secret-string file://kong-server.json

The kong-server.json file contains:

{
  "certificate": "-----BEGIN CERTIFICATE----- ...",
  "private_key": "-----BEGIN PRIVATE KEY----- ..."
}

  2. Store the client certificate similarly in AWS Secrets Manager:

aws secretsmanager create-secret --name kong/client \
  --secret-string file://kong-client.json

Step 3: Configure Kong to Retrieve and Use Certificates for mTLS

Once the certificates are securely stored, you need to configure Kong to retrieve and use these certificates from your secrets management infrastructure.

Option 1: Use HashiCorp Vault with Kong

  1. Install the Vault Plugin for Kong (or use an external script to retrieve certificates from Vault).
  2. Write a Lua script or custom plugin to dynamically retrieve the certificates from Vault.
    • Use the vault kv get API to retrieve certificates from Vault.
    • Load the certificates into Kong’s SSL configuration dynamically.
  3. Configure Kong to use the certificates:
    • Add the mTLS plugin to your service or route to enable mutual authentication using the retrieved certificates.

Example to configure the mTLS plugin:

curl -i -X POST http://localhost:8001/services/my-service/plugins \
  --data "name=mtls-auth" \
  --data "config.ca_certificates=<ca-certificate-id>"

Option 2: Use AWS Secrets Manager with Kong

  1. Install the AWS SDK (or a script) on your Kong instances to fetch the certificates from Secrets Manager.
  2. Create a script or custom plugin to:
    • Retrieve the server and client certificates from AWS Secrets Manager using the AWS CLI or SDK:

aws secretsmanager get-secret-value --secret-id kong/server

aws secretsmanager get-secret-value --secret-id kong/client

  3. Dynamically load the certificates into Kong’s SSL configuration using Lua or custom logic.
  4. Configure mTLS in Kong:
    • Set up the mTLS plugin in Kong to verify client certificates:

curl -i -X POST http://localhost:8001/services/my-service/plugins \
  --data "name=mtls-auth" \
  --data "config.ca_certificates=<ca-id>"

Step 4: Automate Certificate Rotation

To ensure secure and automated certificate management, integrate certificate rotation.

  1. Automated Certificate Renewal in Vault:
    • Configure Vault’s PKI secrets engine to automate certificate issuance and rotation.

Example:

vault write pki/roles/kong \
  allowed_domains=example.com \
  allow_subdomains=true \
  max_ttl=72h

Use Vault’s pki/issue endpoint to automatically rotate certificates and replace them in Kong.

  2. Automate AWS Secrets Manager Rotation:
    • Set up AWS Secrets Manager’s built-in rotation feature for SSL certificates.
  3. Trigger Updates in Kong:
    • Use a periodic task (e.g., a cron job or Ansible playbook) to update the certificates in Kong without restarting the gateway:

kong reload

This ensures Kong always uses the latest mTLS certificates from the secrets manager.

Conclusion

To integrate Kong API Gateway’s mTLS certificates with a secrets management infrastructure, follow these steps:

  1. Generate the mTLS certificates and store them securely in a secrets management tool (e.g., HashiCorp Vault, AWS Secrets Manager).
  2. Configure Kong to retrieve the certificates dynamically from the secret manager.
  3. Implement automation for certificate renewal and rotation to ensure that Kong always uses up-to-date certificates without manual intervention.

This approach enhances security by managing sensitive SSL certificates in a centralized and automated manner.

generate CA certificate for Kong API gateway and configure with mTLS

To generate a Certificate Authority (CA) certificate for Kong Gateway and configure it for mTLS (Mutual TLS), follow these steps. This process involves creating a root CA, generating client certificates, and setting up Kong to use them for mTLS authentication.

Steps Overview:

  1. Generate your own Certificate Authority (CA).
  2. Use the CA to sign client certificates.
  3. Upload the CA certificate to Kong.
  4. Configure Kong to enforce mTLS using the CA.
  5. Test the mTLS setup.

1. Generate a Certificate Authority (CA)

1.1. Generate the CA’s Private Key

openssl genrsa -out ca.key 2048

This command generates a 2048-bit RSA private key for your CA.

1.2. Create a Self-Signed Certificate for the CA

openssl req -x509 -new -nodes -key ca.key -sha256 -days 3650 -out ca.crt \
  -subj "/C=US/ST=State/L=City/O=Organization/OU=OrgUnit/CN=Your-CA-Name"

  • This command creates a self-signed certificate valid for 10 years (3650 days).
  • Customize the -subj fields with your information.

You now have two files:

  • ca.key: The CA’s private key (keep this secure).
  • ca.crt: The CA’s self-signed certificate, which you will use to sign client certificates.

2. Generate and Sign Client Certificates

2.1. Generate the Client’s Private Key

openssl genrsa -out client.key 2048

2.2. Create a Certificate Signing Request (CSR) for the Client

openssl req -new -key client.key -out client.csr -subj "/C=US/ST=State/L=City/O=Organization/OU=OrgUnit/CN=Client-Name"

2.3. Sign the Client’s Certificate with the CA

openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365 -sha256

This command signs the client certificate (client.crt) with your CA. The client.crt is valid for 1 year (365 days).

You now have:

  • client.key: The client’s private key.
  • client.crt: The client’s signed certificate.

3. Upload the CA Certificate to Kong

Kong needs the CA certificate to validate the client certificates during mTLS authentication. You can upload the CA certificate to Kong as follows:

curl -i -X POST http://localhost:8001/ca_certificates \
  --data "cert=@/path/to/ca.crt"

This will make Kong aware of the trusted CA certificate, enabling it to validate client certificates that are signed by this CA.


4. Enable the mTLS Plugin in Kong

Now, configure Kong to enforce mTLS for a service or route using the mTLS Authentication plugin. This plugin requires clients to present a certificate signed by the CA.

4.1. Enable mTLS for a Service

To enable mTLS authentication on a specific service:

curl -i -X POST http://localhost:8001/services/<service_id>/plugins \
  --data "name=mtls-auth"

Replace <service_id> with the actual service ID.

4.2. Enable mTLS for a Route

Alternatively, you can enable mTLS for a specific route:

curl -i -X POST http://localhost:8001/routes/<route_id>/plugins \
  --data "name=mtls-auth"

By default, the plugin will validate the client certificate against the CA certificate you uploaded in Step 3.


5. Configure Trusted Certificate IDs (Optional)

If you have multiple CA certificates, you can specify which ones to trust. You can update the mTLS plugin configuration to use the correct CA certificate ID:

curl -i -X PATCH http://localhost:8001/plugins/<plugin_id> \
  --data "config.trusted_certificate_ids=<ca_certificate_id>"


6. Test the mTLS Setup

6.1. Test Using Curl

To test the mTLS setup, make a request to your Kong service or route while providing the client certificate and private key:

curl -v --cert client.crt --key client.key https://<kong-gateway-url>/your-service-or-route

This request should succeed if the client certificate is valid. If the client certificate is invalid or not provided, the request will fail with an error.


Summary

  1. Generate a Certificate Authority (CA): Use OpenSSL to generate a root CA (ca.key and ca.crt).
  2. Create and sign client certificates: Sign client certificates using the CA (client.crt and client.key).
  3. Upload the CA certificate to Kong (ca.crt).
  4. Enable the mTLS Authentication plugin for services or routes in Kong.
  5. Test mTLS by making requests using the client certificates.

By following these steps, Kong Gateway will be configured to enforce mTLS, ensuring that only clients with valid certificates signed by your CA can access your services.

Configure mTLS plugin for kong api gateway

Kong API Gateway offers a few plugins to handle mutual TLS (mTLS) authentication and related features. These plugins ensure that clients are authenticated using certificates, providing an additional layer of security beyond standard TLS encryption.

Key mTLS Plugins for Kong API Gateway

  1. mtls-auth plugin (Kong Enterprise)
  2. Mutual TLS Authentication plugin (Kong Gateway OSS)
  3. Basic Authentication combined with mTLS
  4. Custom mTLS logic with Lua (advanced use case)

1. mtls-auth Plugin (Kong Enterprise)

  • Description: Available in Kong Enterprise and specifically designed for mTLS authentication. It validates the client certificate presented during the TLS handshake against a set of CA certificates stored in Kong.
  • Features:
    • Validates client certificates using specified CA certificates.
    • Supports multiple CA certificates.
    • Can pass the client certificate information to upstream services.
    • Configurable to allow or restrict access based on client certificate IDs.
  • Configuration Options:
    • config.ca_certificates: List of CA certificate IDs used to verify client certificates.
    • config.allowed_client_certificates: List of client certificate IDs allowed to access the service or route.
    • config.pass_client_cert: Boolean to decide whether to pass client certificate info to upstream services.

2. Mutual TLS Authentication Plugin (Kong Gateway OSS)

  • Description: Provides basic mTLS functionality in the open-source version of Kong. It requires client certificates for authentication and validates them against the provided CA certificates.
  • Features: Validates client certificates using CA certificates; simpler than the mtls-auth plugin and may not support advanced enterprise features.
  • Configuration Options:
    • ca_certificates: Array of CA certificate IDs for validation.
    • allowed_client_certificates: Array of specific client certificate IDs.

3. Basic Authentication with mTLS (Combined Usage)

  • Description: Although not an mTLS plugin by itself, Kong allows combining basic authentication plugins (like basic-auth) with mTLS for a two-layered authentication approach.
  • Usage: Apply both the basic-auth plugin and the mtls-auth plugin to a service or route; requests then need both a valid client certificate and a valid basic authentication credential.

4. Custom mTLS Logic with Lua (Advanced Use Case)

  • Description: For advanced use cases where you need custom mTLS handling beyond what the plugins provide, you can use Kong’s serverless capabilities to write custom logic in Lua via a plugin like serverless-functions.
  • Use Cases: Custom certificate validation logic, dynamic CA certificate selection, and additional logging and monitoring for mTLS events.

Choosing the Right Plugin

  • For Enterprise Needs: If you have a Kong Enterprise license, the mtls-auth plugin is the most feature-rich option, offering advanced mTLS configurations and management capabilities.
  • For Open Source Users: The Mutual TLS Authentication plugin is available in Kong Gateway OSS but with fewer features. It’s suitable for basic mTLS needs.
  • For Custom Logic: If your use case requires custom logic, consider using Lua scripting with serverless-functions to implement advanced mTLS workflows.

Conclusion

These plugins allow Kong to enforce mTLS for enhanced security. The choice between them depends on your version of Kong (Enterprise vs. OSS) and your specific security requirements.

Configure mTLS plugin for kong api gateway

To configure mTLS (mutual TLS) in Kong API Gateway, you need to use the mtls-auth plugin, which validates client certificates against a set of trusted Certificate Authorities (CA). This process involves uploading the CA certificate to Kong, enabling the mtls-auth plugin for a service or route, and testing the configuration.
Steps to Configure mTLS in Kong
1. Upload CA Certificate to Kong
2. Enable the mtls-auth Plugin for a Service or Route
3. Test the mTLS Configuration
Step 1: Upload CA Certificate to Kong
You must upload the CA certificate to Kong so it can validate the client certificates.
1. Upload the CA Certificate using Kong Admin API:

curl -i -X POST http://<KONG_ADMIN_HOST>:8001/ca_certificates \
  -F "cert=@/path/to/ca.crt"
o Replace <KONG_ADMIN_HOST> with the host of your Kong Admin API.
o This uploads the CA certificate to Kong, which will then be used to verify client certificates.
2. Check the Uploaded CA Certificate: Verify that the CA certificate has been uploaded correctly by listing all CA certificates:

curl -i -X GET http://<KONG_ADMIN_HOST>:8001/ca_certificates
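The plugin configuration in Step 2 needs the id field from this response. As a sketch, the id can be extracted with jq; the response body and UUID below are illustrative samples, not real Kong output, and jq is assumed to be installed.

```shell
# Abridged sample of the Admin API list response (illustrative values only)
RESPONSE='{"data":[{"id":"11111111-2222-3333-4444-555555555555","tags":null}]}'

# Pull out the id of the first CA certificate for use in Step 2
CA_CERT_ID=$(echo "$RESPONSE" | jq -r '.data[0].id')
echo "$CA_CERT_ID"
```

In practice you would pipe the live response, e.g. `curl -s http://<KONG_ADMIN_HOST>:8001/ca_certificates | jq -r '.data[0].id'`.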
Step 2: Enable the mtls-auth Plugin
1. Enable the Plugin on a Service: You can apply the mtls-auth plugin to a specific service in Kong.

curl -i -X POST http://<KONG_ADMIN_HOST>:8001/services/<SERVICE_NAME>/plugins \
  --data "name=mtls-auth" \
  --data "config.ca_certificates=<CA_CERT_ID>" \
  --data "config.allow_any_client_cert=true"
o Replace <SERVICE_NAME> with the name or ID of the service you want to protect.
o Replace <CA_CERT_ID> with the ID of the CA certificate you uploaded in Step 1.
o allow_any_client_cert=true allows any client certificate issued by the uploaded CA to access the service.
2. Enable the Plugin on a Route: Alternatively, you can apply the plugin to a specific route.

curl -i -X POST http://<KONG_ADMIN_HOST>:8001/routes/<ROUTE_ID>/plugins \
  --data "name=mtls-auth" \
  --data "config.ca_certificates=<CA_CERT_ID>" \
  --data "config.allow_any_client_cert=true"
o Replace <ROUTE_ID> with the ID of the route you want to protect.
3. Optional Configuration Options:
o config.pass_client_cert=false: By default, the plugin does not pass the client certificate to the upstream service. Set this to true if you want to pass it.
o config.allowed_client_certificates: You can specify individual client certificate IDs if you want to allow only specific certificates.
Step 3: Test the mTLS Configuration
1. Test with a Valid Client Certificate: Make a request to the service or route using a client certificate signed by the trusted CA.
curl -v https://<KONG_PROXY_HOST>:<PROXY_PORT>/<PATH> \
  --cert /path/to/client.crt \
  --key /path/to/client.key
o If everything is configured correctly, you should receive a successful response.
2. Test with an Invalid or No Client Certificate: Try making a request without a client certificate or with an invalid one.
curl -v https://<KONG_PROXY_HOST>:<PROXY_PORT>/<PATH>
o You should receive a 401 Unauthorized or 403 Forbidden response, indicating that the client certificate validation failed.
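Both checks above can be scripted so the positive and negative cases run together. Below is a small hypothetical helper (not part of Kong) that compares the HTTP status curl reports with the status you expect; the placeholder hosts in the usage comments are illustrative.

```shell
# expect_status EXPECTED_CODE CURL_ARGS... : prints PASS/FAIL based on the
# HTTP status code curl observes for the request.
expect_status() {
  local expected="$1"; shift
  local actual
  actual=$(curl -s -o /dev/null -w "%{http_code}" "$@")
  if [ "$actual" = "$expected" ]; then
    echo "PASS ($actual)"
  else
    echo "FAIL (expected $expected, got $actual)"
  fi
}

# Example usage against a live gateway (placeholders are illustrative):
# expect_status 200 https://<KONG_PROXY_HOST>:<PROXY_PORT>/ \
#   --cert /path/to/client.crt --key /path/to/client.key
# expect_status 401 https://<KONG_PROXY_HOST>:<PROXY_PORT>/
```

Note that the exact failure code (401 vs. 403) can vary with configuration, so check what your gateway actually returns before hard-coding the expected value.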
Additional Considerations
• Certificate Renewal: If you update your CA or client certificates, remember to update them in Kong as well.
• Multiple CA Certificates: You can upload multiple CA certificates to Kong and specify them in the config.ca_certificates array when configuring the plugin.
• Error Handling: If you encounter errors, check the Kong logs for detailed messages that can help diagnose issues.
Summary
1. Upload CA Certificate: Use the Admin API to upload the CA certificate.
2. Enable mTLS Plugin: Configure the mtls-auth plugin on your desired service or route, specifying the CA certificate.
3. Test and Verify: Ensure that the setup is correct by testing with valid and invalid client certificates.
By following these steps, you can configure mTLS in Kong to secure your API services, ensuring that only clients with trusted certificates can access them.