Steps to install HPE Ezmeral 7.x on Linux cluster

Installing HPE Ezmeral Data Fabric (formerly MapR) version 7.x on a 12-node Linux cluster requires planning and configuration. Here are the detailed steps to install and configure the cluster:


Step 1: Prerequisites

  1. System Requirements:
    • 64-bit Linux (RHEL/CentOS 7 or 8, or equivalent).
    • Minimum hardware for each node:
      • Memory: At least 16GB RAM.
      • CPU: Quad-core or higher.
      • Disk: Minimum of 500GB of storage.
  2. Network Configuration:
    • Assign static IP addresses or hostnames to all 12 nodes.
    • Configure DNS or update /etc/hosts with the IP and hostname mappings.
    • Ensure nodes can communicate with each other via SSH.
  3. Users and Permissions:
    • Create a dedicated user for HPE Ezmeral (e.g., mapr).
    • Grant the user passwordless SSH access across all nodes.
  4. Firewall and SELinux:
    • Disable or configure the firewall to allow required ports.
    • Set SELinux to permissive mode:

sudo setenforce 0

sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

  5. Java Installation:
    • Install Java (OpenJDK 11 recommended):

sudo yum install java-11-openjdk -y
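
Before moving on, the prerequisite checks above can be scripted. The sketch below is an assumption-laden example: it uses placeholder hostnames node1 through node12 and assumes passwordless SSH is already set up; it reports nodes that fail the RAM or Java checks.

```shell
# Placeholder hostnames for the 12 nodes; substitute your own.
NODES=$(printf 'node%d ' $(seq 1 12))

# Preflight: confirm SSH reachability, >=16GB RAM, and Java on each node.
for n in $NODES; do
  ssh -o BatchMode=yes -o ConnectTimeout=2 "$n" '
    mem_gb=$(free -g | awk "/^Mem:/ {print \$2}")
    [ "$mem_gb" -ge 16 ] || echo "WARN: only ${mem_gb}GB RAM on $(hostname)"
    java -version >/dev/null 2>&1 || echo "WARN: Java missing on $(hostname)"
  ' || echo "ERROR: cannot SSH to $n"
done
```

Running this before any packages are installed surfaces undersized or misconfigured nodes early, when they are cheap to fix.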


Step 2: Download HPE Ezmeral Data Fabric Software

  1. Obtain Software:
    • Download the HPE Ezmeral 7.x installation packages from the official HPE Ezmeral website.
  2. Distribute Packages:
    • Copy the packages to all 12 nodes using scp or a similar tool.
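
For example, the package directory can be pushed to every node with a simple scp loop. The hostnames and the /tmp path below are placeholders, not values from the original instructions:

```shell
# Where the downloaded Ezmeral packages were unpacked (placeholder path).
PKG_DIR=/tmp/ezmeral-packages

# Push the packages to each node; failures are reported, not fatal.
for n in node{1..12}; do
  scp -o BatchMode=yes -o ConnectTimeout=2 -r "$PKG_DIR" "mapr@$n:/tmp/" \
    || echo "copy to $n failed"
done
```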

Step 3: Install Core Services

  1. Install the Core Packages:
    • On every node, install the core and file server packages:

sudo yum install mapr-core mapr-fileserver -y

    • On the nodes designated to run CLDB and the web server, also install:

sudo yum install mapr-cldb mapr-webserver -y

  2. Install Additional Services:
    • Based on your use case, install additional packages (e.g., mapr-zookeeper, mapr-nodemanager, etc.).
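
One way to keep per-node package sets straight is a small role map. The layout below is a hypothetical example (control services on node1 through node3, fileserver everywhere); it prints the install commands rather than running them:

```shell
# Hypothetical role layout: control services on node1-node3 only.
declare -A EXTRA=(
  [node1]="mapr-cldb mapr-zookeeper mapr-webserver"
  [node2]="mapr-cldb mapr-zookeeper"
  [node3]="mapr-cldb mapr-zookeeper"
)

for n in node{1..12}; do
  pkgs="mapr-core mapr-fileserver ${EXTRA[$n]:-}"
  # Dry run: drop the echo to execute over SSH.
  echo "ssh $n sudo yum install -y $pkgs"
done
```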

Step 4: Configure ZooKeeper

  1. Select ZooKeeper Nodes:
    • Choose three nodes to run the ZooKeeper service (e.g., node1, node2, node3).
  2. Edit the ZooKeeper Configuration:
    • Update the ZooKeeper configuration file (/opt/mapr/zookeeper/zookeeper-<version>/conf/zoo.cfg) on the ZooKeeper nodes:

tickTime=2000

dataDir=/var/mapr/zookeeper

clientPort=2181

initLimit=5

syncLimit=2

server.1=node1:2888:3888

server.2=node2:2888:3888

server.3=node3:2888:3888

  3. Initialize ZooKeeper:
    • On each ZooKeeper node, create a myid file:

echo "1" > /var/mapr/zookeeper/myid  # Replace with 2 or 3 for other nodes

  4. Start ZooKeeper:

sudo systemctl start mapr-zookeeper
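
The myid assignment in step 3 must line up with the server.N lines in zoo.cfg. A sketch that prints the per-node commands, using the example node names from above:

```shell
# Each ZooKeeper node's myid must equal N from its server.N=... line in zoo.cfg.
id=1
for n in node1 node2 node3; do
  # Dry run: drop the echo to execute.
  echo "ssh $n 'echo $id | sudo tee /var/mapr/zookeeper/myid'"
  id=$((id + 1))
done
```

A mismatched myid is a common cause of a ZooKeeper node failing to join the quorum.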


Step 5: Configure the Cluster

  1. Initialize the Cluster:
    • Run the cluster initialization command from one node:

/opt/mapr/server/configure.sh -N <cluster_name> -C node1,node2,node3 -Z node1,node2,node3

  • Replace node1,node2,node3 with the actual hostnames of the CLDB and ZooKeeper nodes.
  2. Verify Installation:
    • Check the cluster status:

maprcli dashboard info

  3. Add Nodes to the Cluster:
    • On each additional node, configure it to join the cluster:

/opt/mapr/server/configure.sh -N <cluster_name> -C node1,node2,node3 -Z node1,node2,node3
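
Joining the remaining nodes can be looped. The node4 through node12 hostnames and the cluster name below are placeholders; the configure.sh flags are the same ones used above.

```shell
# Placeholder cluster name and control-node lists.
CLUSTER=my.cluster.com
CLDB_NODES=node1,node2,node3
ZK_NODES=node1,node2,node3

# Configure each remaining node to join the cluster.
for n in node{4..12}; do
  ssh -o BatchMode=yes -o ConnectTimeout=2 "$n" \
    "sudo /opt/mapr/server/configure.sh -N $CLUSTER -C $CLDB_NODES -Z $ZK_NODES" \
    || echo "configure failed on $n"
done
```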


Step 6: Start Core Services

  1. Start CLDB:
    • Start the CLDB service on the designated nodes:

sudo systemctl start mapr-cldb

  2. Start FileServer and WebServer:
    • Start the file server and web server services on all nodes:

sudo systemctl start mapr-fileserver

sudo systemctl start mapr-webserver

  3. Start Node Manager:
    • If using YARN, start the Node Manager service on all nodes:

sudo systemctl start mapr-nodemanager
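
The service starts above can be gathered into one loop per node. This sketch only prints the commands, since which services run on a given node depends on what was installed there:

```shell
# Services started in this step; not every node runs all of them.
SERVICES="mapr-cldb mapr-fileserver mapr-webserver mapr-nodemanager"
for svc in $SERVICES; do
  # Dry run: drop the echo to execute, then confirm with is-active.
  echo "sudo systemctl start $svc && systemctl is-active $svc"
done
```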


Step 7: Post-Installation Steps

  1. Access the Web Interface:
    • Open a browser and go to the web interface of your cluster:

https://<CLDB-node-IP>:8443

  • Log in using the mapr user credentials.
  2. Add Storage:
    • Add storage disks to the cluster using the web interface or CLI:

maprcli disk list

maprcli disk add -host <node-hostname> -disks <disk_list>

  3. Secure the Cluster:
    • Enable Kerberos or LDAP integration for authentication.
    • Configure SSL/TLS for secure communication.
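
The storage commands in step 2 take a target host and a disk list. A dry-run sketch with placeholder values (node4 and the /dev/sd* device names are assumptions, not values from the original):

```shell
# Placeholder target node and raw disks to hand over to the data fabric.
HOST=node4
DISKS=/dev/sdb,/dev/sdc
# Dry run: drop the echoes to execute on a cluster node.
echo "maprcli disk list -host $HOST"
echo "maprcli disk add -host $HOST -disks $DISKS"
```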

Step 8: Verify the Cluster

  1. Check Services:
    • Verify that all services are running:

maprcli node list -columns svc

  2. Run a Test:
    • Create a directory in the file system and test file creation:

hadoop fs -mkdir /test

hadoop fs -put /path/to/local/file /test
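
The test above can be extended into a small smoke test that also reads the file back. The hadoop commands need a configured client, so they are printed here rather than executed; the local temp path is an assumption.

```shell
# Create a small local file to push into the cluster filesystem.
echo "hello ezmeral" > /tmp/ezmeral-smoke.txt

# Dry run of the cluster-side checks: drop the echoes to execute.
echo "hadoop fs -mkdir -p /test"
echo "hadoop fs -put /tmp/ezmeral-smoke.txt /test/"
echo "hadoop fs -cat /test/ezmeral-smoke.txt"
```

If the final cat returns the file's contents, writes and reads through the distributed filesystem are working end to end.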
