shell script

#!/bin/bash

# List of your servers (can be IPs or hostnames)
SERVERS=(
server1.example.com
server2.example.com
server3.example.com
server4.example.com
server5.example.com
server6.example.com
)

FILE_PATH="/opt/pfengine/file.txt"

for server in "${SERVERS[@]}"; do
  echo "Checking $FILE_PATH on $server"

  if ! ssh -o ConnectTimeout=5 "$server" "ls -l $FILE_PATH" 2>/dev/null; then
    echo "ERROR: Could not access file on $server"
  fi

  echo "--------------------------------------"
done
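If you also need to confirm that the file's contents are identical everywhere (not just present), a small variant of the same loop can compare checksums. The hosts and path below are the same placeholders as above:

```shell
#!/bin/bash
# Compare the file's md5 checksum across all servers.
SERVERS=(
server1.example.com
server2.example.com
)
FILE_PATH="/opt/pfengine/file.txt"

for server in "${SERVERS[@]}"; do
  # Remote md5sum; empty result means the host or file was unreachable
  sum=$(ssh -o ConnectTimeout=5 "$server" "md5sum $FILE_PATH" 2>/dev/null | awk '{print $1}')
  if [ -n "$sum" ]; then
    echo "$server: $sum"
  else
    echo "$server: unreachable or file missing"
  fi
done
```

If all reachable servers print the same checksum, the file is consistent across the fleet.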

Allow LDAP users to access the Kong Manager GUI in Kong Gateway

To allow LDAP users to access the Kong Manager GUI in Kong Gateway Enterprise 3.4, you’ll need to integrate LDAP authentication via the Kong Enterprise Role-Based Access Control (RBAC) system.

Here's how to get it working, step by step:


Step 1: Configure LDAP Authentication for Kong Manager

Edit your kong.conf or pass these as environment variables if you’re using a container setup.

admin_gui_auth = ldap-auth
admin_gui_auth_conf = {
  "ldap_host": "ldap.example.com",
  "ldap_port": 389,
  "ldap_base_dn": "dc=example,dc=com",
  "ldap_attribute": "uid",
  "ldap_bind_dn": "cn=admin,dc=example,dc=com",
  "ldap_password": "adminpassword",
  "start_tls": false,
  "verify_ldap_host": false
}

If you're using LDAPS, set ldap_port = 636 and leave start_tls = false (StartTLS upgrades a plain-text connection on port 389 and is not combined with LDAPS). In production, also set verify_ldap_host = true.

Restart Kong after updating this config.


Step 2: Create an RBAC User Linked to the LDAP Username

Kong still needs an RBAC user that maps to the LDAP-authenticated identity.

curl -i -X POST http://localhost:8001/rbac/users \
  --data "name=jdoe" \
  --data "user_token=jdoe-admin-token"

The name here must match the LDAP uid or whatever attribute you configured with ldap_attribute.


๐Ÿ” Step 3: Assign a Role to the RBAC User

curl -i -X POST http://localhost:8001/rbac/users/jdoe/roles \
  --data "roles=read-only"  # Or "admin", "super-admin", etc.

Available roles: read-only, admin, super-admin, or your own custom roles.
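As a sketch of the custom-role path: the role name, endpoint, and actions below are made up for illustration, and the /rbac/roles and /rbac/roles/&lt;role&gt;/endpoints Admin API routes are Kong Enterprise features:

```shell
# Hypothetical custom role that may only read/create services
# in the default workspace, then granted to the jdoe RBAC user.
curl -i -X POST http://localhost:8001/rbac/roles \
  --data "name=service-editor"

curl -i -X POST http://localhost:8001/rbac/roles/service-editor/endpoints \
  --data "workspace=default" \
  --data "endpoint=/services" \
  --data "actions=read,create"

curl -i -X POST http://localhost:8001/rbac/users/jdoe/roles \
  --data "roles=service-editor"
```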


Step 4: Log into Kong Manager with LDAP User

Go to your Kong Manager GUI:

https://<KONG_MANAGER_URL>:8445

Enter:

  • Username: jdoe (LDAP uid)
  • Password: the LDAP user's actual password (Kong will bind to LDAP and verify it)

๐Ÿ› ๏ธ Optional: Test LDAP Config from CLI

You can test the LDAP binding from Kong CLI:

curl -i -X POST http://localhost:8001/rbac/users \
  --data "name=testuser" \
  --data "user_token=test123"

Then try logging into Kong Manager with testuser using their LDAP password.
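To exercise the LDAP bind itself from a shell, independent of Kong, ldapsearch (from the OpenLDAP client tools) can reuse the values from Step 1. The host, bind DN, password, and uid below are the same placeholders:

```shell
# Bind with the same service account Kong uses, then search for the
# user entry that Kong Manager would look up.
ldapsearch -x \
  -H ldap://ldap.example.com:389 \
  -D "cn=admin,dc=example,dc=com" \
  -w adminpassword \
  -b "dc=example,dc=com" \
  "(uid=jdoe)"
```

A successful bind that returns the jdoe entry confirms the ldap_bind_dn, ldap_password, ldap_base_dn, and ldap_attribute values before you involve Kong at all.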


Kong logs (2 zones, 4 servers → Splunk)

In Your Setup:

Each zone has its own shared DB:

  • Zone A (A1 & A2) → DB-A
  • Zone B (B1 & B2) → DB-B

That implies the plugin configuration lives in each zone's DB, so you enable it once per zone, using the Admin API on any one node in that zone.

What You Should Do:

  1. Run this plugin setup command on one Kong node per zone (e.g., A1 and B1):

curl -i -X POST http://localhost:8001/plugins/ \
  --data "name=http-log" \
  --data "config.http_endpoint=https://splunk-hec.example.com:8088/services/collector" \
  --data "config.method=POST" \
  --data "config.headers[Authorization]=Splunk YOUR-HEC-TOKEN" \
  --data "config.queue.size=1000"

  2. Confirm it's active via:

curl http://localhost:8001/plugins
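It can also help to verify the Splunk HEC endpoint directly from a Kong node before enabling the plugin. The /services/collector/health endpoint is part of Splunk HEC; the hostname and token below are placeholders:

```shell
# 1) Is HEC up at all? (no token required)
curl -k https://splunk-hec.example.com:8088/services/collector/health

# 2) Does the token work? Send a manual test event.
curl -k https://splunk-hec.example.com:8088/services/collector \
  -H "Authorization: Splunk YOUR-HEC-TOKEN" \
  -d '{"event": "kong-http-log-test", "sourcetype": "manual"}'
```

If the test event shows up in Splunk but Kong traffic does not, the problem is on the Kong side rather than the HEC side.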


๐Ÿ›ก๏ธ Bonus Tip: Tag Your Logs by Node/Zone

To make Splunk logs more useful, you can:

  • Add custom headers or query parameters with zone info.
  • Use a transform or custom_fields in Splunk to tag logs from Zone A vs B.

Example:

--data "config.headers[X-Kong-Zone]=zone-a"

convert .crt to .p12

To convert a .crt file to a .p12 (PKCS#12) file, you'll need:

  • Your certificate file (your_cert.crt)
  • The private key file (your_key.key)
  • (Optional) The CA bundle file (ca_bundle.crt)

You can use the openssl command like this:

openssl pkcs12 -export \
  -in your_cert.crt \
  -inkey your_key.key \
  -out your_cert.p12 \
  -name "your_alias" \
  -certfile ca_bundle.crt

Explanation:

  • -in your_cert.crt: Your certificate
  • -inkey your_key.key: Your private key
  • -out your_cert.p12: Output .p12 file
  • -name "your_alias": Alias for the key in the keystore
  • -certfile ca_bundle.crt: (Optional) Chain/intermediate certificates
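To sanity-check the result, openssl can parse the keystore back. The -noout flag suppresses the key material, and a zero exit status means the file and password are valid:

```shell
# Prompts for the export password and prints the bundle structure
# (certificate bags, key bag, encryption algorithms) without the keys.
openssl pkcs12 -info -in your_cert.p12 -noout
```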

MapR disk – mapr_disk_cleanup_and_setup.yml

---
- name: Clean and prepare MapR disks
  hosts: mapr_nodes
  become: yes
  tasks:

    - name: Stop MapR Warden
      service:
        name: mapr-warden
        state: stopped

    - name: Unmount and wipe disks (/dev/sdb /dev/sdc /dev/sdd)
      shell: |
        for disk in /dev/sdb /dev/sdc /dev/sdd; do
          umount ${disk}* 2>/dev/null || true
          wipefs -a $disk
          sgdisk --zap-all $disk
        done
      ignore_errors: yes

    - name: Create disk list for MapR
      copy:
        dest: /tmp/disk.list
        content: |
          /dev/sdb
          /dev/sdc
          /dev/sdd

    - name: Run disksetup with -F (force)
      shell: /opt/mapr/server/disksetup -F /tmp/disk.list

    - name: Start MapR Warden
      service:
        name: mapr-warden
        state: started

    - name: Wait for Warden to stabilize
      wait_for:
        path: /opt/mapr/logs/warden.log
        search_regex: "Starting all services"
        timeout: 60

    - name: Verify MapR disk list
      shell: maprcli disk list
      register: disk_status

    - name: Show disk list output
      debug:
        var: disk_status.stdout

kong – failover test

On-Premises Failover Scenario – 4 Servers Across 2 Zones

Physical Layout

  • Zone A (Rack A): Server A1, Server A2
  • Zone B (Rack B): Server B1, Server B2
  • Each zone is on a separate rack and power circuit, and ideally in a separate room or building.
  • Redundant network paths and power per zone.

โš™๏ธ Typical Architecture

  • HA Load Balancer: LVS / HAProxy / Keepalived / F5 (active-passive or active-active)
  • Heartbeat/Health Monitoring: Keepalived, Corosync, or Pacemaker
  • Shared State (optional): GlusterFS, DRBD, etcd, or replicated DB

๐Ÿ” Failover Scenarios

1. Single Server Failure (e.g., A1 goes down)

  • Load balancer marks A1 as unhealthy.
  • A2, B1, B2 continue serving traffic.
  • No impact to availability.

2. Zone Failure (e.g., entire Rack A fails: A1 and A2)

  • Power/network failure in Zone A.
  • Load balancer detects both A1 and A2 as down.
  • All traffic is redirected to B1 and B2.
  • Ensure B1/B2 can handle the full load.

3. Intermittent Network Failure in One Zone

  • Heartbeat may detect nodes as “split”.
  • Use quorum-based or fencing mechanisms (STONITH) to avoid split-brain.
  • Pacemaker/Corosync can help in cluster management and decision-making.

4. Load Balancer Node Fails

  • Use HA load balancer pair with VRRP (Keepalived) or hardware failover.
  • Virtual IP (VIP) is moved to the standby node.

5. Storage or DB Node Fails

  • If using shared storage or clustered DBs:
    • Ensure data replication (synchronous if possible).
    • Use quorum-aware systems (odd number of nodes ideal, maybe an external arbitrator).
    • DRBD or GlusterFS with quorum can help avoid data corruption.
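For the load-balancer failover in scenario 4, a minimal Keepalived VRRP pair could look like the sketch below; the interface name, router ID, VIP, and password are placeholders to adapt:

```
# /etc/keepalived/keepalived.conf on the primary LB node
vrrp_instance VI_1 {
    state MASTER              # use BACKUP on the standby node
    interface eth0            # NIC carrying the VIP
    virtual_router_id 51      # must match on both nodes
    priority 100              # lower value (e.g., 90) on the standby
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.100/24      # the VIP clients connect to
    }
}
```

When the MASTER stops sending VRRP advertisements, the standby claims the VIP within a few seconds, so clients keep using one stable address.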

Kong – generate client cert

When generating a client certificate for Kong, you generally need to provide the .crt and .key files to the client. However, the .pem file can also be used, depending on the application’s needs.

Here's how each file is used:

  1. .crt (Certificate File) – contains the client's public certificate.
  2. .key (Private Key File) – holds the client's private key.
  3. .pem (Privacy-Enhanced Mail format) – can contain both the certificate and the private key (and sometimes intermediate certificates) in a single file.

What Should You Provide to the Client?

  • If the client explicitly needs separate certificate and key files, provide:
    • client.crt
    • client.key
  • If the client can handle a single PEM file, provide:
    • client.pem (which includes both the certificate and private key)

To generate a PEM file from .crt and .key:

cat client.crt client.key > client.pem

Use Case:

  • Some applications and libraries (e.g., cURL, OpenSSL, and certain API clients) accept a single PEM file instead of separate .crt and .key files.
  • If the client is using mutual TLS (mTLS) authentication with Kong, check if they need a .pem file instead.
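Before concatenating, it is worth confirming the certificate and key actually belong together. A standard OpenSSL check compares the public-key moduli (shown here for an RSA key):

```shell
# The md5 of the modulus must be identical for a matching pair.
crt_mod=$(openssl x509 -noout -modulus -in client.crt | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in client.key | openssl md5)

if [ "$crt_mod" = "$key_mod" ]; then
  echo "certificate and key match"
else
  echo "MISMATCH: this key does not belong to this certificate"
fi
```

A mismatched pair would produce a client.pem that fails every mTLS handshake, so this one check saves a lot of debugging.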

Install plugins

---
- name: Install Specified Kong Plugins Globally
  hosts: localhost
  tasks:
    - name: Define Kong Plugins with Configurations
      set_fact:
        kong_plugins:
          - name: "correlation-id"
            config:
              header_name: "Kong-Request-ID"
              generator: "uuid"
              echo_downstream: true
          - name: "http-log"
            config:
              http_endpoint: "http://your-logging-service.local/logs"
              method: "POST"
              timeout: 10000
              keepalive: 60000
          - name: "jwt"
            config: {}
          - name: "ldap-auth-advanced"
            config:
              ldap_host: "ldap.yourdomain.com"
              ldap_port: 389
              start_tls: false
              verify_ldap_host: true
              base_dn: "dc=yourdomain,dc=com"
              attribute: "uid"
              cache_ttl: 60
              timeout: 10000
              keepalive: 60000
          - name: "mtls-auth"
            config:
              ca_certificates: ["<CA_CERTIFICATE>"]
              depth: 1
              cache_ttl: 60
          - name: "openid-connect"
            config:
              issuer: "https://your-oidc-provider.com"
              client_id: "your-client-id"
              client_secret: "your-client-secret"
              redirect_uri: "https://your-kong-instance.com/redirect"
              scopes: ["openid", "profile", "email"]
          - name: "request-termination"
            config:
              status_code: 403
              message: "Forbidden"
              content_type: "text/plain"

    - name: Ensure Specified Plugins Are Installed Globally
      uri:
        url: "http://localhost:8001/plugins"
        method: POST
        body_format: json
        body:
          name: "{{ plugin.name }}"
          config: "{{ plugin.config | default({}) }}"
        status_code: [201, 409]  # 201 = Created, 409 = Conflict (Already Exists)
      loop: "{{ kong_plugins }}"
      loop_control:
        loop_var: plugin
      register: plugin_response
      ignore_errors: yes

    - name: Debug Plugin Deployment Responses
      debug:
        var: plugin_response.results
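After the playbook runs, the enabled plugin names can be listed straight from the Admin API to confirm the result (this assumes jq is installed and the Admin API listens on localhost:8001):

```shell
# Print one plugin name per line; each name from kong_plugins
# should appear exactly once.
curl -s http://localhost:8001/plugins | jq -r '.data[].name' | sort -u
```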

ansible playbook to install MapR

The playbook is below; an example inventory file follows it.

---
# mapr_cluster_installation.yml
# Ansible playbook to install MapR 7.8 on a 3-node on-premises cluster

- name: Prepare all nodes for MapR 7.8 installation
  hosts: mapr_cluster
  become: yes
  gather_facts: yes
  vars:
    mapr_version: "7.8.0"
    mapr_mep_version: "10.1.0"
    java_version: "java-11-openjdk"
    cluster_name: "mapr78cluster"
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes
      when: ansible_os_family == "Debian"

    - name: Update yum cache
      yum:
        update_cache: yes
      when: ansible_os_family == "RedHat"

    - name: Install required packages
      package:
        name: "{{ item }}"
        state: present
      loop:
        - "{{ java_version }}"
        - "{{ java_version }}-devel"
        - ntp
        - openssh-server
        - sysstat
        - wget
        - net-tools
        - tar
        - lsof
        - python3
        - chrony
        - dnsmasq

    - name: Set system limits for MapR
      pam_limits:
        domain: '*'
        limit_type: "{{ item.limit_type }}"
        limit_item: "{{ item.limit_item }}"
        value: "{{ item.value }}"
      loop:
        - { limit_type: 'soft', limit_item: 'nofile', value: '64000' }
        - { limit_type: 'hard', limit_item: 'nofile', value: '64000' }
        - { limit_type: 'soft', limit_item: 'nproc', value: '64000' }
        - { limit_type: 'hard', limit_item: 'nproc', value: '64000' }
        - { limit_type: 'soft', limit_item: 'stack', value: 'unlimited' }
        - { limit_type: 'hard', limit_item: 'stack', value: 'unlimited' }

    - name: Disable SELinux
      selinux:
        state: disabled
      when: ansible_os_family == "RedHat"

    - name: Disable firewall
      service:
        name: "{{ firewall_service }}"
        state: stopped
        enabled: no
      vars:
        firewall_service: "{{ 'ufw' if ansible_os_family == 'Debian' else 'firewalld' }}"

    - name: Set kernel parameters for MapR
      sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        reload: yes
      loop:
        - { name: 'vm.swappiness', value: '1' }
        - { name: 'net.core.somaxconn', value: '4096' }
        - { name: 'net.ipv4.tcp_fin_timeout', value: '30' }
        - { name: 'net.ipv4.tcp_keepalive_time', value: '60' }
        - { name: 'net.ipv4.tcp_keepalive_intvl', value: '10' }
        - { name: 'net.ipv4.tcp_keepalive_probes', value: '3' }
        - { name: 'vm.overcommit_memory', value: '0' }

    - name: Disable transparent hugepages
      shell: |
        echo never > /sys/kernel/mm/transparent_hugepage/enabled
        echo never > /sys/kernel/mm/transparent_hugepage/defrag
      args:
        warn: no

    - name: Make transparent hugepages setting persistent
      lineinfile:
        path: /etc/rc.local
        line: "echo never > /sys/kernel/mm/transparent_hugepage/enabled\necho never > /sys/kernel/mm/transparent_hugepage/defrag"
        create: yes
        mode: 0755

    - name: Add MapR 7.8 repository for RedHat
      yum_repository:
        name: mapr-core
        description: MapR 7.8.0 Core Repository
        baseurl: https://package.mapr.hpe.com/releases/v7.8.0/redhat/
        gpgcheck: no
        enabled: yes
      when: ansible_os_family == "RedHat"

    - name: Add MapR 7.8 MEP repository for RedHat
      yum_repository:
        name: mapr-mep
        description: MapR 7.8.0 MEP Repository
        baseurl: https://package.mapr.hpe.com/releases/MEP/MEP-{{ mapr_mep_version }}/redhat/
        gpgcheck: no
        enabled: yes
      when: ansible_os_family == "RedHat"
      
    - name: Add MapR 7.8 ecosystem repository for RedHat
      yum_repository:
        name: mapr-ecosystem
        description: MapR 7.8.0 Ecosystem Repository
        baseurl: https://package.mapr.hpe.com/releases/ecosystem-{{ mapr_version }}/redhat/
        gpgcheck: no
        enabled: yes
      when: ansible_os_family == "RedHat"

    - name: Add MapR 7.8 repository for Debian
      apt_repository:
        repo: deb https://package.mapr.hpe.com/releases/v7.8.0/ubuntu binary focal
        state: present
      when: ansible_os_family == "Debian"

    - name: Add MapR 7.8 MEP repository for Debian
      apt_repository:
        repo: deb https://package.mapr.hpe.com/releases/MEP/MEP-{{ mapr_mep_version }}/ubuntu binary focal
        state: present
      when: ansible_os_family == "Debian"
      
    - name: Add MapR 7.8 ecosystem repository for Debian
      apt_repository:
        repo: deb https://package.mapr.hpe.com/releases/ecosystem-{{ mapr_version }}/ubuntu binary focal
        state: present
      when: ansible_os_family == "Debian"

    - name: Create MapR user
      user:
        name: mapr
        createhome: yes
        shell: /bin/bash

    - name: Set password for MapR user
      user:
        name: mapr
        password: "{{ 'mapr123' | password_hash('sha512') }}"

    - name: Create MapR install directory
      file:
        path: /opt/mapr
        state: directory
        owner: mapr
        group: mapr
        mode: 0755

    - name: Setup Chrony
      template:
        src: chrony.conf.j2
        dest: /etc/chrony.conf
      notify: restart chrony
      when: inventory_hostname in groups['mapr_ntp']

    - name: Start Chrony service
      service:
        name: chronyd
        state: started
        enabled: yes

  handlers:
    - name: restart chrony
      service:
        name: chronyd
        state: restarted

- name: Install MapR Core on all nodes
  hosts: mapr_cluster
  become: yes
  tasks:
    - name: Install MapR core packages
      package:
        name: "{{ item }}"
        state: present
      loop:
        - mapr-core
        - mapr-fileserver

    - name: Install ZooKeeper package
      package:
        name: mapr-zookeeper
        state: present
      when: inventory_hostname in groups['mapr_zookeeper']

- name: Configure CLDB nodes
  hosts: mapr_cldb
  become: yes
  tasks:
    - name: Install CLDB package
      package:
        name: mapr-cldb
        state: present

- name: Configure MapR control system node
  hosts: mapr_webserver
  become: yes
  tasks:
    - name: Install MapR Control System (MCS)
      package:
        name: "{{ item }}"
        state: present
      loop:
        - mapr-webserver
        - mapr-apiserver
        - mapr-adminui

    - name: Install MapR Monitoring components
      package:
        name: "{{ item }}"
        state: present
      loop:
        - mapr-grafana
        - mapr-collectd
        - mapr-opentsdb

- name: Install additional components on appropriate nodes
  hosts: mapr_cluster
  become: yes
  tasks:
    - name: Install Hadoop ecosystem components
      package:
        name: "{{ item }}"
        state: present
      loop:
        - mapr-hadoop-core
        - mapr-hadoop-client
      when: inventory_hostname in groups['mapr_hadoop']

    - name: Install Hive components
      package:
        name: "{{ item }}"
        state: present
      loop:
        - mapr-hive
        - mapr-hiveserver2
        - mapr-hivemetastore
      when: inventory_hostname in groups['mapr_hive']

    - name: Install Spark components
      package:
        name: "{{ item }}"
        state: present
      loop:
        - mapr-spark
        - mapr-spark-historyserver
      when: inventory_hostname in groups['mapr_spark']

    - name: Install Drill components
      package:
        name: mapr-drill
        state: present
      when: inventory_hostname in groups['mapr_drill']

    - name: Install Hbase components
      package:
        name: "{{ item }}"
        state: present
      loop:
        - mapr-hbase
        - mapr-hbasethrift
      when: inventory_hostname in groups['mapr_hbase']

    - name: Install data science components
      package:
        name: "{{ item }}"
        state: present
      loop:
        - mapr-data-access-gateway
        - mapr-asynchbase
      when: inventory_hostname in groups['mapr_datascience']

- name: Configure disks for MapR-FS
  hosts: mapr_cluster
  become: yes
  tasks:
    - name: Create disk list file
      copy:
        content: |
          {% for disk in mapr_disks %}
          {{ disk }} -force
          {% endfor %}
        dest: /tmp/disks.txt
      vars:
        mapr_disks:
          - /dev/sdb
          - /dev/sdc
      
    - name: Initialize the disks for MapR-FS
      shell: /opt/mapr/server/disksetup -F /tmp/disks.txt
      args:
        creates: /opt/mapr/logs/disksetup.log

- name: Configure and start the MapR cluster
  hosts: mapr_cldb[0]
  become: yes
  vars:
    cluster_name: "mapr78cluster"
    mapr_security: "disabled"  # Change to "enabled" for secure clusters
  tasks:
    - name: Configure the cluster
      shell: >
        /opt/mapr/server/configure.sh -N {{ cluster_name }} 
        -C {{ groups['mapr_cldb'] | join(',') }} 
        -Z {{ groups['mapr_zookeeper'] | join(',') }} 
        -no-autostart
        {% if mapr_security == "enabled" %}
        -secure -dare
        {% endif %}
      args:
        creates: /opt/mapr/conf/mapr-clusters.conf

    - name: Start ZooKeeper services on ZooKeeper nodes
      shell: /opt/mapr/bin/maprcli node services -name zookeeper -action start
      register: start_zk
      failed_when: start_zk.rc != 0 and "already running" not in start_zk.stderr
      
    - name: Wait for ZooKeeper to start
      pause:
        seconds: 30
        
    - name: Start CLDB services on CLDB nodes
      shell: /opt/mapr/bin/maprcli node services -name cldb -action start
      register: start_cldb
      failed_when: start_cldb.rc != 0 and "already running" not in start_cldb.stderr
      
    - name: Wait for CLDB to start
      pause:
        seconds: 60
        
    - name: Start MFS services
      shell: /opt/mapr/bin/maprcli node services -name fileserver -action start
      register: start_services
      failed_when: start_services.rc != 0 and "already running" not in start_services.stderr

    - name: Start API server services
      shell: /opt/mapr/bin/maprcli node services -name apiserver -action start
      register: start_apiserver
      failed_when: start_apiserver.rc != 0 and "already running" not in start_apiserver.stderr
      ignore_errors: yes

    - name: Start webserver services
      shell: /opt/mapr/bin/maprcli node services -name webserver -action start
      register: start_webserver
      failed_when: start_webserver.rc != 0 and "already running" not in start_webserver.stderr
      ignore_errors: yes

    - name: Start additional services
      shell: /opt/mapr/bin/maprcli node services -name drill,hivemeta,hs2 -action start
      ignore_errors: yes

- name: Configure monitoring services
  hosts: mapr_webserver
  become: yes
  tasks:
    - name: Start monitoring services
      shell: /opt/mapr/bin/maprcli node services -name collectd,opentsdb,grafana -action start
      ignore_errors: yes

- name: Configure Data Access Gateway
  hosts: mapr_datascience
  become: yes
  tasks:
    - name: Start Data Access Gateway
      shell: /opt/mapr/bin/maprcli node services -name mapr-gateway -action start
      ignore_errors: yes

- name: Verify installation
  hosts: mapr_cldb[0]
  become: yes
  tasks:
    - name: Check cluster status
      shell: /opt/mapr/bin/maprcli dashboard info -json
      register: cluster_status
      become_user: mapr

    - name: Display cluster status
      debug:
        var: cluster_status.stdout
        
    - name: Check node status
      shell: /opt/mapr/bin/maprcli node list -json
      register: node_status
      become_user: mapr

    - name: Display node status
      debug:
        var: node_status.stdout

Key updates for MapR 7.8 compared to 7.7:

  1. Updated to MEP 10.1.0 (from 9.1.0 in 7.7)
  2. Added proper time synchronization with Chrony (preferred over NTP in newer deployments)
  3. Added the new API Server component that’s required in MapR 7.8
  4. Added Data Science components including the Data Access Gateway
  5. Improved system parameters tuning with more optimized values for 7.8
  6. Invoked maprcli by its full path instead of relying on PATH for more reliability
  7. Added additional system limits for MapR 7.8’s resource requirements
  8. Added node status verification at the end of installation

Before running the playbook:

  1. Adjust the disk definitions in the “Configure disks for MapR-FS” task to match your actual disk configuration
  2. Set the cluster_name variable to your desired cluster name
  3. Update mapr_security to “enabled” if you want a secure cluster
  4. Create the chrony.conf.j2 template file in the same directory as your playbook

Run the playbook with:

ansible-playbook -i inventory.ini mapr_cluster_installation.yml
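Before touching real nodes, the playbook can be validated first; --syntax-check and --check are standard ansible-playbook flags (note that shell tasks are skipped in check mode, so the dry run covers only the package and config tasks):

```shell
# Parse-only validation of the playbook and inventory
ansible-playbook -i inventory.ini mapr_cluster_installation.yml --syntax-check

# Dry run: reports what would change without changing it
ansible-playbook -i inventory.ini mapr_cluster_installation.yml --check
```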

This playbook should install MapR 7.8 on your 3-node cluster with all the necessary components and optimizations.

Example inventory.ini:

[mapr_cluster:children]
mapr_cldb
mapr_zookeeper
mapr_webserver
mapr_hadoop
mapr_hbase
mapr_hive
mapr_spark
mapr_drill
mapr_datascience
mapr_ntp

[mapr_cldb]
node1.example.com
node2.example.com

[mapr_zookeeper]
node1.example.com
node2.example.com
node3.example.com

[mapr_webserver]
node1.example.com

[mapr_hadoop]
node1.example.com
node2.example.com
node3.example.com

[mapr_hbase]
node2.example.com
node3.example.com

[mapr_hive]
node2.example.com

[mapr_spark]
node3.example.com

[mapr_drill]
node3.example.com

[mapr_datascience]
node2.example.com

[mapr_ntp]
node1.example.com

The mapr_datascience group in the playbook refers to nodes that will have data science components installed. In the MapR 7.8 context, this includes:

  1. mapr-data-access-gateway – This is a key component in MapR 7.8 that provides RESTful API access to MapR services and data. It allows data scientists and applications to interact with MapR data through REST APIs, which is particularly useful for:
    • Integration with Python, R, and other data science tools
    • Building web applications that need to access MapR data
    • Supporting microservices architecture that interacts with MapR
  2. mapr-asynchbase – This is an asynchronous HBase client library that provides non-blocking access to HBase. It’s particularly useful for:
    • High-throughput applications that need to interact with HBase
    • Applications that need to perform many operations in parallel
    • Avoiding blocking I/O in data science applications

These packages create an environment more suitable for data scientists to work with MapR data using their preferred tools. The Data Access Gateway in particular is becoming increasingly important in modern MapR deployments as it enables easier integration with modern data science tools and workflows.

Ansible playbook to install Java

---
- name: Install Java on Red Hat 8 Nodes
  hosts: redhat_nodes
  become: yes
  tasks:
    - name: Ensure the system is up to date
      yum:
        name: '*'
        state: latest

    - name: Install OpenJDK 11
      yum:
        name: java-11-openjdk
        state: present

    - name: Install OpenJDK 11 Development Kit
      yum:
        name: java-11-openjdk-devel
        state: present

    - name: Set JAVA_HOME environment variable
      lineinfile:
        path: /etc/profile
        line: 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk'
        create: yes

    - name: Verify JAVA_HOME in a login shell
      # 'command: source /etc/profile' fails (source is a shell builtin),
      # and sourcing inside a task would not persist anyway; the variable
      # takes effect on the next login. Verify it instead:
      shell: bash -lc 'echo $JAVA_HOME'
      register: java_home
      changed_when: false


Check the installed jmespath version (used by Ansible's json_query filter):

python3 -c "import jmespath; print(jmespath.__version__)"