Install PHD-3.0.1.0 using Ambari

Steps

1. First, install Ambari

Set up the Ambari repo (if you don't have internet access, you can set up a local one).

[root@sphdmst01 AMBARI-1.7.1]# ambari-server setup
Using python  /usr/bin/python2.6
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Ambari-server daemon is configured to run under user 'root'. Change this setting [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking firewall...
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)?
Completing setup...
Configuring database...
Enter advanced database configuration [y/n] (n)? y
==============================================================================
Choose one of the following options:
[1] – PostgreSQL (Embedded)
[2] – Oracle
[3] – MySQL
[4] – PostgreSQL
==============================================================================
Enter choice (1): 4    (note: the default is 1, the embedded PostgreSQL; 4 selects an existing PostgreSQL database)
Hostname (localhost):
Port (5432):
Database Name (ambari):
Postgres schema (ambari):
Username (ambari):
Enter Database Password (bigdata):
Configuring remote database connection properties...
WARNING: Before starting Ambari Server, you must run the following DDL against the database to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-Postgres-CREATE.sql
Proceed with configuring remote database connection properties [y/n] (y)? y
Extracting system views...
..ambari-admin-1.7.1.88.jar

Adjusting ambari-server permissions and ownership...
Ambari Server 'setup' completed successfully.

# ambari-server start

Access Ambari via a web browser:

http://localhost:8080

Setup PHD repo

[root@sphdmst01 PHD-3.0.1.0]# ./setup_repo.sh
PHD-3.0.1.0 Repo file successfully created at /etc/yum.repos.d/PHD-3.0.1.0.repo.
Use http://sphdmst01.mydev.com/PHD-3.0.1.0 to access the repository.

[root@sphdmst01 PHD-UTILS-1.1.0.20]# ./setup_repo.sh
PHD-UTILS-1.1.0.20 Repo file successfully created at /etc/yum.repos.d/PHD-UTILS-1.1.0.20.repo.
Use http://sphdmst01.mydev.com/PHD-UTILS-1.1.0.20 to access the repository.

View Repos

[root@sphdmst01 yum.repos.d]# ls -latr
total 40
-rw-r--r--    1 root root   529 Sep 15  2014 rhel-source.repo
-rw-r--r--    1 root root  5636 Aug  1 18:08 redhat.repo
-rw-r--r--    1 root root   101 Sep 15 12:29 ambari.repo
-rw-r--r--    1 root root    98 Sep 17 10:10 PHD-3.0.1.0.repo
drwxr-xr-x. 113 root root 12288 Sep 17 10:17 ..
-rw-r--r--    1 root root   119 Sep 17 10:27 PHD-UTILS-1.1.0.20.repo
drwxr-xr-x.   2 root root  4096 Sep 17 10:27 .

[root@sphdmst01 yum.repos.d]# more PHD-3.0.1.0.repo
[PHD-3.0.1.0]
name=PHD-3.0.1.0
baseurl=http://sphdmst01.mydev.com/PHD-3.0.1.0
gpgcheck=0

[root@sphdmst01 yum.repos.d]# more ambari.repo
[AMBARI-1.7.1]
name=AMBARI-1.7.1
baseurl=http://sphdmst01.mydev.com/AMBARI-1.7.1
gpgcheck=0

[root@sphdmst01 yum.repos.d]# more PHD-UTILS-1.1.0.20.repo
[PHD-UTILS-1.1.0.20]
name=PHD-UTILS-1.1.0.20
baseurl=http://sphdmst01.mydev.com/PHD-UTILS-1.1.0.20
gpgcheck=0
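If the bundled setup_repo.sh is not available, a repo file of the same shape can be written by hand. A minimal sketch (written to /tmp here for illustration; the real location is /etc/yum.repos.d/):

```shell
# Write a yum repo definition like the ones shown above.
# /tmp is used here so the sketch is safe to run; copy to /etc/yum.repos.d/ for real use.
repo=/tmp/PHD-3.0.1.0.repo
cat > "$repo" <<'EOF'
[PHD-3.0.1.0]
name=PHD-3.0.1.0
baseurl=http://sphdmst01.mydev.com/PHD-3.0.1.0
gpgcheck=0
EOF
cat "$repo"
```

After placing the file under /etc/yum.repos.d/, `yum repolist` should show the new repo.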

Enable security

In the Ambari UI, go to Admin -> Security and click Enable Security.

Get Started

Important: Before configuring Ambari to manage your Kerberos-enabled cluster, you must perform the
following manual steps on your cluster. Be sure to record the location of the keytab files for each
host and the principals for each Hadoop service. This information is required in order to use the wizard.

1. Install, configure, and start your Kerberos KDC
2. Install and configure the Kerberos client on every host in the cluster
3. Create Kerberos principals for the Hadoop services and hosts
4. Generate keytabs for each principal and place them on the appropriate hosts
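Steps 3 and 4 are usually scripted. A hedged sketch that only generates the kadmin commands for one host (the realm, hostname, and service names below are illustrative assumptions; adjust for your KDC):

```shell
# Generate "addprinc"/"xst" commands for one host's Hadoop service principals.
# host, realm, and the service list are illustrative assumptions, not from the cluster above.
host=sphdmst02.mydev.com
realm=EXAMPLE.COM
for svc in nn dn yarn hive; do
  echo "addprinc -randkey ${svc}/${host}@${realm}"
  echo "xst -k /etc/security/keytabs/${svc}.service.keytab ${svc}/${host}@${realm}"
done > /tmp/kadmin_cmds.txt
cat /tmp/kadmin_cmds.txt
# On the KDC, feed the generated lines to kadmin.local, e.g.:
#   while read -r cmd; do kadmin.local -q "$cmd"; done < /tmp/kadmin_cmds.txt
# then copy each keytab to its host and record the path for the Ambari wizard.
```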

Check the Ambari agent status

[root@sphdmst02 ~]# ambari-agent status
Found ambari-agent PID: 476715
ambari-agent running.
Agent PID at: /var/run/ambari-agent/ambari-agent.pid
Agent out at: /var/log/ambari-agent/ambari-agent.out
Agent log at: /var/log/ambari-agent/ambari-agent.log

Ansible playbook

[root@ ansible]# more sendmail.yml
---
- hosts: hadoop_sit
  user: root
  tasks:
  - name: 1. Install sendmail
    yum: name=sendmail state=latest

  - name: 2. Start sendmail
    service: name=sendmail state=started enabled=yes

Simple copy-file playbook

Command to run an Ansible playbook:

ansible-playbook demo.yml

where 'demo.yml' is the playbook name.

Dry-run mode:

ansible-playbook demo.yml --check

[ovi@~]$ more tsm.yml
---
- hosts: endur_dev
  tasks:
  - name: Copy the file
    copy: src=/tmp/file.txt dest=/tmp/file.txt

[ovi@ ~]$ ansible-playbook tsm.yml --ask-pass
SSH password:

PLAY [endur_dev] **************************************************************

GATHERING FACTS ***************************************************************
ok: [ndora01.uat.my.com]
ok: [ndora01.dev.my.com]

TASK: [Copy the file] *********************************************************
changed: [ndora01.dev.my.com]
changed: [ndora01.uat.my.com]

PLAY RECAP ********************************************************************
ndora01.dev.my.com : ok=2 changed=1 unreachable=0 failed=0
ndora01.uat.my.com : ok=2 changed=1 unreachable=0 failed=0

Another simple playbook

[ovi@ ~]$ more tsm2.yml
---
- hosts: endur_dev
  tasks:
  - name: Copy the file
    copy: src=/tmp/file1.txt dest=/tmp/file1.txt

  - name: Copy second file
    copy: src=/tmp/file2.txt dest=/tmp/file2.txt
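A natural next step for copy playbooks like these is a handler, so a service restarts only when the copied file actually changed. A hedged sketch (the service name and file paths are illustrative, not from the cluster above):

```yaml
---
- hosts: endur_dev
  tasks:
  - name: Copy the app config
    copy: src=/tmp/app.conf dest=/etc/app.conf
    notify: restart app        # the handler runs only if this task reports "changed"
  handlers:
  - name: restart app
    service: name=app state=restarted
```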

Ansible

Ansible features:

  • Accessed mostly through SSH (it also has Paramiko and local modes)
  • Based on an agentless architecture
  • Has more than 200 built-in modules
  • No custom infrastructure required
  • Configuration (modules, playbooks) written in the easy-to-use YAML format
  • Ansible interacts with its clients either through playbooks or through the command-line tool (ad-hoc commands)

Paramiko: a high-quality Python implementation of the SSH protocol.

Ansible components 

  • Inventory
  • Playbooks
    • Play
    • Tasks
    • Roles
    • Handlers
    • Templates
    • Variables
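
The inventory behind group names like hadoop_dev and endur_dev used in this post is just an INI-style hosts file. A minimal sketch (the hostnames and addresses below are illustrative):

```ini
# /etc/ansible/hosts (the default inventory; use -i to point at another file)
[hadoop_dev]
192.168.68.116
192.168.68.117
192.168.68.118
192.168.68.119

[endur_dev]
ndora01.dev.my.com
ndora01.uat.my.com
```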


Example Ad-Hoc commands

To transfer a file directly to many servers:

$ ansible hadoop -m copy -a "src=/etc/hosts dest=/tmp/hosts"

To ping the servers:

[ovi@ ~]$ ansible last_bpm -m ping --ask-pass
SSH password:
192.168.18.207 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.18.208 | success >> {
    "changed": false,
    "ping": "pong"
}

192.168.18.206 | success >> {
    "changed": false,
    "ping": "pong"
}

Run ansible ad-hoc command to check OS

[root@ ansible]# ansible hadoop_dev -m command -a "uname -a" --ask-pass
SSH password:
192.168.68.119 | success | rc=0 >>
Linux dphdmst04 2.6.32-504.23.4.el6.x86_64 #1 SMP Fri May 29 10:16:43 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

192.168.68.118 | success | rc=0 >>
Linux dphdmst03 2.6.32-504.23.4.el6.x86_64 #1 SMP Fri May 29 10:16:43 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

192.168.68.117 | success | rc=0 >>
Linux dphdmst02 2.6.32-504.23.4.el6.x86_64 #1 SMP Fri May 29 10:16:43 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

192.168.68.116 | success | rc=0 >>
Linux dphdmst01 2.6.32-504.23.4.el6.x86_64 #1 SMP Fri May 29 10:16:43 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

Restart the ntpd service on one server:

# ansible 192.168.129.61 -m service -a "name=ntpd state=restarted" -k
SSH password:
192.168.129.61 | success >> {
    "changed": true,
    "name": "ntpd",
    "state": "started"
}

[root@]# ansible hadoop -m shell -a "ps -e -o pcpu,pid,user,args|sort -k1 -nr|head -1" -k
SSH password:
192.236.1.52 | success | rc=0 >>
99.9 29068 root     python /usr/bin/goferd

192.236.1.56 | success | rc=0 >>
12.9   7321 root     /opt/microsoft/configmgr/bin/ccmexec.binsort: write failed:

192.236.1.53 | success | rc=0 >>
99.9 543544 root     python /usr/bin/goferd

192.236.1.54 | success | rc=0 >>
99.2 567756 root     python /usr/bin/goferd

192.236.1.55 | success | rc=0 >>
8.0 65506 root     sshd: root@notty

192.236.1.57 | success | rc=0 >>
15.6   7260 root     /opt/microsoft/configmgr/bin/ccmexec.binsort: write failed: standard output: Broken pipe

192.236.1.58 | success | rc=0 >>
100 491426 root     python /usr/bin/goferdsort: fflush failed: standard output: Broken

192.236.1.59 | success | rc=0 >>
100 485478 root     python /usr/bin/goferd

192.236.1.61 | success | rc=0 >>
100 463591 root     python /usr/bin/goferdsort: fflush failed: standard output: Broken pipe

192.236.1.60 | success | rc=0 >>
20.8   7263 root     /opt/microsoft/configmgr/bin/ccmexec.binsort: fflush failed:

Manage services

# ansible hadoop_prod -m service -a "name=goferd state=restarted" -k
SSH password:

192.236.1.56 | success >> {
    "changed": true,
    "name": "goferd",
    "state": "started"
}

192.236.1.52 | success >> {
    "changed": true,
    "name": "goferd",
    "state": "started"
}

192.236.1.53 | success >> {
    "changed": true,
    "name": "goferd",
    "state": "started"
}

192.236.1.54 | success >> {
    "changed": true,
    "name": "goferd",
    "state": "started"
}

192.236.1.57 | success >> {
    "changed": true,
    "name": "goferd",
    "state": "started"
}

192.236.1.58 | success >> {
    "changed": true,
    "name": "goferd",
    "state": "started"
}

192.236.1.60 | success >> {
    "changed": true,
    "name": "goferd",
    "state": "started"
}

192.236.1.61 | success >> {
    "changed": true,
    "name": "goferd",
    "state": "started"
}

192.236.1.59 | success >> {
    "changed": true,
    "name": "goferd",
    "state": "started"
}

Gathering facts

[ovi ~]$ ansible last_bpm -a "free -m" --ask-pass
SSH password:
192.168.18.207 | success | rc=0 >>
total       used       free     shared    buffers     cached
Mem:         15951       3823      12128          0        237       1199
-/+ buffers/cache:       2385      13565
Swap:        16383          0      16383

192.168.18.208 | success | rc=0 >>
total       used       free     shared    buffers     cached
Mem:         15951       2540      13410          0        214        261
-/+ buffers/cache:       2063      13887
Swap:        16383          0      16383

192.168.18.206 | success | rc=0 >>
total       used       free     shared    buffers     cached
Mem:         15951       1245      14706          0        168        216
-/+ buffers/cache:        860      15090
Swap:        16383          0      16383

[ovi ~]$ ansible hadoop_prod -m file -a "dest=/tmp/testansible mode=644 owner=ovi group=130 state=directory" --ask-pass

192.236.1.60 | success >> {
    "changed": true,
    "gid": 130,
    "group": "sysadmin",
    "mode": "0644",
    "owner": "ovi",
    "path": "/tmp/testansible",
    "size": 4096,
    "state": "directory",
    "uid": 275
}

192.236.1.61 | success >> {
    "changed": true,
    "gid": 130,
    "group": "sysadmin",
    "mode": "0644",
    "owner": "ovi",
    "path": "/tmp/testansible",
    "size": 4096,
    "state": "directory",
    "uid": 275
}

To dump all gathered facts from every host:

$ ansible all -m setup

[ovi@C ~]$ ansible endur_dev -a "sudo yum list openssh" --ask-pass
SSH password:
dora01.uat.my.com | success | rc=0 >>
Loaded plugins: package_upload, product-id, security, subscription-manager
Available Packages
openssh.x86_64 5.3p1-112.el6_7 rhel-6-server-rpms

dora02.dev.my.com | success | rc=0 >>
Loaded plugins: package_upload, product-id, security, subscription-manager
Available Packages
openssh.x86_64 5.3p1-112.el6_7 rhel-6-server-rpms

Copy a file to servers

[ovi@ ~]$ ansible endur_dev -m copy -a "src=/etc/hosts dest=/tmp/hosts" --ask-pass
SSH password:
dora01.dev.my.com | success >> {
    "changed": true,
    "dest": "/tmp/hosts",
    "gid": 130,
    "group": "sysadmin",
    "md5sum": "bf17964d25f8802d53ee22d97edb8d4e",
    "mode": "0644",
    "owner": "oasimin",
    "size": 132,
    "src": "/tmp/ansible-1456345034.31-198250330572361/source",
    "state": "file",
    "uid": 275
}

dora02.uat.my.com | success >> {
    "changed": true,
    "dest": "/tmp/hosts",
    "gid": 130,
    "group": "sysadmin",
    "md5sum": "bf17964d25f8802d53ee22d97edb8d4e",
    "mode": "0644",
    "owner": "oasimin",
    "size": 132,
    "src": "/tmp/ansible-1456345034.3-35469159385891/source",
    "state": "file",
    "uid": 275
}

