For a long time, I have wanted to build a Nutanix Community Edition (CE) cluster but never had the hardware to do so. Enter Ravello by Oracle. I covered deploying a VMware VCSA onto Ravello in a previous blog post if you would like to find out a bit more about what Ravello is.
Now those of you out there who are already familiar with Ravello, and indeed Nutanix CE, may already be aware that there is a pre-compiled Nutanix CE blueprint available in the Ravello repo. So why am I deploying my own lab from scratch? I wanted to know exactly how to build the lab in Ravello and also learn how to create a Nutanix cluster from the ground up. Deploying a CE image from the repo doesn’t really satisfy these goals, although I did draw some inspiration from the blueprint for the initial VM configuration.
So, without further ado, here is how to create your own Nutanix CE cluster on top of Oracle Ravello’s cloud offering.
Step 1 – Join the Nutanix community.
Register for a Nutanix CE account here; you will need it to download the software and also to log in once the Nutanix cluster has been created.
Step 2 – Upload the Nutanix CE image into Ravello.
This step requires the Ravello upload tool; if you have not got it already, you can grab it now from here.
Prior to upload, we need to extract the IMG file from the compressed GZ file that is downloaded from the Nutanix CE download portal. You can use your favorite archive extraction tool; I personally used 7-zip portable.
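If you prefer the command line, plain gzip can do the extraction too. A minimal sketch; the filename here is a placeholder, so substitute the actual version you downloaded:

```shell
# Setup only: a placeholder file standing in for the real Nutanix CE
# download, so this snippet is self-contained. Skip these two lines and
# use your real ce-*.img.gz filename instead.
printf 'nutanix-ce-image' > ce.img && gzip ce.img

# Extract the IMG; this leaves ce.img ready for the Ravello upload tool.
gunzip ce.img.gz
```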
Then open the Ravello upload tool and choose the upload disk or image option.
Wait for the image to upload.
Step 3 – Create a new Ravello application.
For this setup, I am going to create a 3-node Nutanix cluster with the following specifications:
- 600GB capacity disk, 300GB cache disk
- Host IP addresses:
- Host 1: 10.0.0.10
- Host 2: 10.0.0.20
- Host 3: 10.0.0.30
- CVM IP addresses:
- Host 1: 10.0.0.11
- Host 2: 10.0.0.21
- Host 3: 10.0.0.31
Create a new application and give it a name.
Add 3 empty VM templates to the application.
Edit the advanced configuration and set the preferPhysicalHost value to true to enable bare-metal access. You may need to publish the application before this option becomes available. Publish the application as performance preferred and choose one of the 3 regions shown in the second screenshot below.
Name the host. I named my 3 hosts NTXCE01, NTXCE02, and NTXCE03 respectively.
Assign the memory and CPU resources to the VM, and enable nested virtualisation.
Add the first 2 disks in the order shown below. You can amend the sizes if required.
The third disk is the Nutanix CE image we uploaded earlier; to attach it, choose the add disk based on image option.
Mark the disk as bootable.
If there was a CD-ROM drive attached, delete it.
Each host has a single NIC for this setup with 2 IP addresses assigned to it: the host and CVM IP addresses. The example below is for host 1 (NTXCE01).
Expose the services that you would like to make available on a public IP address. For each VM, these services are:
- Host Services
- SSH – Port 22
- CVM Services
- HTTP – Port 80
- HTTPS – Port 9440
- SFTP – Port 2222
- SSH – Port 22
- Tunnel – Port 2525
Once you have completed the above steps, you should end up with 3 similarly configured VMs.
Step 4 – Install Nutanix CE.
Once you click save on the VMs, they will automatically try to power on and boot from the bootable IMG file.
The following steps need to be performed on each host. Open the console and follow the instructions; log in with the username ‘install’ and no password.
This starts the install process. Choose your keyboard layout.
Set the host and CVM IPs as we defined earlier. Scroll through the entire EULA and accept it. Installation will not continue unless you read the whole EULA!
Wait for the install to complete.
At this point, you can log in to the host, but you do not need to just now.
Step 5 – Configure Nutanix cluster.
For the following steps, I followed the getting started guide for creating a Nutanix cluster here.
Take a look at the properties of one of the hosts, drop the external access down to the CVM IP address and grab the public IP address as highlighted below.
Bang the IP address into your favorite SSH client. I am using PuTTY for this. Log in with the default credentials:
<span style="color: #008000;">nutanix - nutanix/4u</span>
Now to create the cluster. You will need the 3 CVM IP addresses for this.
<span style="color: #008000;">cluster -s 10.0.0.11,10.0.0.21,10.0.0.31 create</span>
Wait for the cluster creation to complete.
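If you want to keep an eye on progress from the same SSH session, you can check the state of the cluster services with the standard status command:

```shell
# Run from any CVM; lists the services on every node.
# All services should eventually report as UP on all 3 CVMs.
cluster status
```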
Give the cluster a name:
<span style="color: #008000;">ncli cluster edit-params new-name=YourClusterName</span>
Add a DNS server for use with the cluster. I added the DNS server available in the Ravello application, but it turned out the Google DNS servers were already defined.
<span style="color: #008000;">ncli cluster add-to-name-servers servers="10.0.0.1"</span>
Step 6 – Post cluster config.
Now the cluster has been created, you should be able to log into Prism to continue the configuration process. Check the CVM properties again and click to open the https page. This will take you to the Prism login page.
The default login details are:
<span style="color: #008000;">admin - nutanix/4u</span>
Change the password as requested.
Validate your Nutanix CE login details.
And you can check which version of Nutanix CE you are running.
The last few steps are to configure storage and network for use with the virtual machines.
Click on Home / Storage.
A storage pool is automatically created from the disks that were found during install. I renamed the default pool.
Next, you need to create a storage container; this is where the AHV-based virtual machines will reside. The container is created in the storage pool. The getting started guide recommends naming the container ‘default’.
And below are the default settings associated with a storage container. At the time of writing, I have not tested the performance impact of having compression enabled, but I am expecting decent performance.
Click on the configuration cog and select Network Configuration.
Select user VM interfaces and then create network.
Name the network and specify a VLAN ID. Use VLAN ID 0 if you are not using any VLANs.
If you would like to allow Nutanix to issue DHCP addresses, enable the setting as below.
And that’s it. That is the basic setup required to build a Nutanix CE cluster. Next steps would be to deploy a virtual machine and take care of any alerts that are present.
For some reason, the CVM disks decided to change their names after a dirty shutdown of the Nutanix cluster, which stopped the CVMs from booting at the next power on. I don’t know if the following steps are supported, but they worked for me. Follow at your own risk.
Log on to the Nutanix host console with the user root and password nutanix/4u.
The virsh command set for KVM virtual machine management works here.
To get a list of all VMs on the host, type the following:
<span style="color: #008000;">virsh list --all</span>
This will list the CVM; make a note of its name.
At first, I tried to simply start the CVM with the following command:
<span style="color: #008000;">virsh start CVM_Name</span>
But I received the following error: ‘Failed to start domain CVM_Name’, which goes on to state that the storage does not exist.
So let’s take a look at the directory the error is referencing. Sure enough, those disks do not exist, but some very similarly named disks do exist.
Each VM has a config file that can be edited; the config includes the location where the virtual disks reside.
You can access the CVM config by running the following command:
<span style="color: #008000;">virsh edit CVM_Name</span>
As you can see, it references the disks that no longer exist.
At this point, I had nothing to lose, so I pointed the config to the QEMU disks shown in the screenshot above.
The edit command opens a vi-like interface. Use the INSERT key to start editing text. Save the config by hitting ESC, then typing :wq followed by a carriage return.
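For illustration, the part of the domain XML you are changing looks something like this; the file path below is hypothetical, and the fix is simply updating each disk’s source to point at a file that actually exists on the host:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <!-- Change this path to match the disk file actually present on the host -->
  <source file='/var/lib/libvirt/NTNX-CVM/svmboot.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
```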
I tried the start command again and voila.
Everything was reporting as happy in Prism Element following this procedure.