Bring up a Hadoop Cluster using Ambari

I remember, a couple of weeks back, one of my colleagues and I were trying to bring up a 3-node Hadoop cluster using Cloudera. The experience was painful, to say the least. We solved quite a few intermittent issues and thought the end was nigh, but unfortunately it wasn’t. We finally gave up after 2-3 days of struggling, and it left a bitter taste in our mouths. I don’t really know if it was our lack of experience, or if the software itself left something to be desired in terms of ease of use. Anyway, after that experience, I fell back on a local Hadoop instance on my laptop for executing small scale experiments. Things were going smoothly for a while, and then I bought a book called Hadoop 2 Quick Start Guide. This book had a clear set of instructions on how to deploy Hadoop using Ambari. For those of you who aren’t familiar with Ambari, it’s a software project to provision, manage and monitor Hadoop clusters.

I followed the procedures, with some minor tweaks here and there to get around the problems I faced during installation, and in the end I could actually bring up a 3-node Hadoop cluster. It took around six hours of struggling to make sure everything was up and running. I’m writing down the procedures I followed, step by step, so that you don’t have to go through the same set of troubles that I did. It’s gonna be a long ride, and it might take a while to digest the whole thing, so let’s buckle up.

Things to note before we proceed

I’ll be assuming the installation is done on CentOS 7, and we’ll use the Oracle JDK for our setup. In my experiment I used 3 CentOS nodes, each having 4 cores and 8 GB of RAM. Let our master node be denoted as master, and the two slaves as slave-1 and slave-2. In case you have a different number of nodes, it’s really easy to map these steps to your own setup. I used iTerm2 as the terminal on my MacBook; this terminal emulator makes it easy to execute the same command across different nodes. If you’re on a Mac, I’d suggest you give it a try. For Linux and Windows, there should be something similar.


  1. At first we need to ensure that all three nodes have the same JDK installed. If you need to know how to install the Oracle JDK, you can refer to my article here. After ensuring it’s properly installed on all the nodes, we need to import the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy files. This is required for the NameNode; without them, the NameNode will throw a security exception on startup. The policy files for Java 8 can be found here. Once the archive is downloaded, we need to unzip it and copy local_policy.jar and US_export_policy.jar to /usr/java/default/jre/lib/security on all the cluster nodes.
  2. We also need to install ntp on all nodes. Execute the following command on all machines to install it.
    yum install -y ntp
  3. We need to ensure all cluster nodes can discover each other by hostname. For this we need to edit the /etc/hosts file on each node. You can find the IP address of each machine using the command below.
    ifconfig | grep inet

    It should list a couple of IP addresses. Look for the one that isn’t a loopback address; that should be the host’s IP. Also execute the command

    hostname -f

    to get the FQDN of each machine. If we assume the FQDNs for our 3-node cluster are master, slave-1 and slave-2 then the /etc/hosts file for each node will look something like below.

    a1.b1.c1.d1 master
    a2.b2.c2.d2 slave-1
    a3.b3.c3.d3 slave-2
  4. We have to verify that Python 2.7.x is installed on all nodes. To check the Python version, we can execute
    python --version
  5. Make sure SELinux and the firewall are disabled on all nodes. I’m not adding the steps here, as there are plenty of guides floating around.
  6. Ambari tries to set the core file size limit to unlimited if it’s not already set. I faced an issue with this: it threw an error saying the limits could not be set properly, and hence my DataNodes didn’t work. So I manually changed the settings on all nodes. Open the /etc/security/limits.conf file and set the following properties.
    * soft core unlimited
    * hard core unlimited
  7. I followed the Configure Passwordless SSH section of this guide for setting up, well, passwordless SSH. This will come in handy later. I skipped the pdsh part of that guide, since it complicated things further; I did everything using iTerm2 without installing pdsh, and it worked beautifully.
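The prerequisite checks above can be sketched as a small pre-flight script to run on each node before installing anything. This is a rough sketch under the assumptions in this article (the hostname master, the files mentioned above); adjust it for your own cluster.

```shell
#!/bin/bash
# Hypothetical pre-flight check for the prerequisites above.
# Run on each node; prints PASS/FAIL per check, changes nothing.

check() {
  # $1 is a label; the rest is a command whose success means PASS
  local label="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $label"
  else
    echo "FAIL: $label"
  fi
}

check "java on PATH"           command -v java
check "ntp installed"          command -v ntpd
check "python installed"       command -v python
check "SELinux disabled"       grep -q '^SELINUX=disabled' /etc/selinux/config
check "core limits configured" grep -q 'soft core unlimited' /etc/security/limits.conf
check "master resolvable"      getent hosts master
```

Each line prints PASS or FAIL, so you can eyeball all three nodes side by side in iTerm2 before moving on.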

Setting up the master and the agents

  1. We need to execute the following command on each node to download the Ambari repo file. Substitute the repo URL for the Ambari version and OS you’re targeting (the placeholder below is not a real URL).
    wget -O /etc/yum.repos.d/ambari.repo <AMBARI_REPO_URL>
  2. Now that we have the repo in our nodes, let’s install the ambari agent on all of them.
    yum install -y ambari-agent
  3. Once the agents are in place, let’s change the agent configs to point to the ambari server. In our case the master node’s FQDN is master. So, I’ll change it accordingly in all the agents. In your case, it’ll be different.
    sed -i 's/hostname=localhost/hostname=master/g' /etc/ambari-agent/conf/ambari-agent.ini
  4. One more step before we start the agents: we need to disable certificate verification on all nodes, or the agents won’t be able to connect to the server. To achieve this, let’s open up the following file using vi
    vi /etc/python/cert-verification.cfg

    and set the verify flag in the [https] section like so:

    [https]
    verify=disable

  5. Now, let’s start the agents on all machines.
    service ambari-agent start

    In case the agents face issues while coming up, the logs can be found in /var/log/ambari-agent/ambari-agent.log location, which might be useful in debugging.

  6. Now it’s time to install the ambari-server. Here I’m installing it on the master node, which also runs an agent. You can also choose to run the server on an isolated node. The following command will install the server
    yum install -y ambari-server
  7. If you remember, while installing Java we set the JAVA_HOME variable in the bashrc file. In my case, JAVA_HOME=/usr/java/jdk1.8.0_192-amd64. So, I’ll start the server setup with the command
    ambari-server setup --java-home /usr/java/jdk1.8.0_192-amd64
  8. During setup, it’ll ask for a few confirmations. In short, answer no when asked to customize the user account for the ambari-server daemon and when asked to enter advanced database configuration; answer yes to the rest.
  9. Once the installation is completed successfully, we can start the agent and the server on the master node.
    service ambari-agent start
    ambari-server start
  10. After the server starts successfully, we should be able to hit port 8080 on the server through a browser. E.g. in our case, master:8080 should show a login page. The default username and password are both admin.
  11. Once we are logged in, click on Launch Install Wizard to continue.
  12. Choose a cluster name that you prefer, and move on.
  13. Then comes the part of choosing the Hadoop Software Stack. For me the options were HDP 2.4, 2.3 and 2.2. I chose 2.4 here.
  14. Once we’ve selected the stack, the next window will ask us to list all our nodes. Here we need to enter the FQDNs of every node.
  15. In the next step we have two options:
    1. We can either choose the manual method, which assumes that Ambari agents are already running on the hosts. I wasted two hours trying to make this work, so I wouldn’t suggest it.
    2. The other option asks you to provide the ssh private keys for accessing the nodes. Please refer to point #7 in the prerequisites for this procedure.
  16. And that’s about it. Now the nodes should register with the cluster. Next you need to choose which services you want to install; those screens are pretty straightforward, so I’ll skip them.
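Once the wizard finishes, you can also sanity-check the cluster from a terminal via Ambari’s REST API. A minimal sketch, assuming the default admin/admin credentials and the server running on master:8080 as above:

```shell
#!/bin/bash
# Small helpers to query Ambari's v1 REST API; assumes the default
# admin/admin credentials and the server at master:8080.

ambari_url() {
  # $1: API path under /api/v1, e.g. "hosts" or "clusters"
  echo "http://master:8080/api/v1/$1"
}

ambari_get() {
  curl -s -u admin:admin "$(ambari_url "$1")"
}

# Example calls (run from a node that can resolve master):
#   ambari_get hosts      # all hosts registered with the server
#   ambari_get clusters   # clusters known to this server
```

If the hosts list comes back empty, recheck the agent config from step 3 and the agent logs mentioned earlier.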

In case you guys are facing any issues, or think I’ve missed something, or maybe you have some suggestions to improve this article, do let me know. Cheers.
