VMware Cloud Foundation, Will it run on Oracle Ravello? Part 3: Deploying management cluster hardware
Before we begin
If you have not checked out what this series is about, then please take a look at the previous parts below.
Management Cluster “Hardware”
Part way through writing this blog series, VMware released version 3.5 of VCF. From this point on, all documentation referenced will be for VCF 3.5. If we check out the release notes for VCF 3.5, we can see that a new installation is broken down into 3 phases. Phase 1 is planning and prep, which I have already covered; Phase 2 is building the ESXi servers, which this blog post will cover; and Phase 3 is deploying VMware Cloud Foundation itself.
The minimum requirement for a VCF management cluster is 4 hosts in a standard architecture deployment. This guide will walk through creating the 4 ESXi management hosts I outlined in the planning blog post. We will end up with something that resembles the following in Ravello.
Each host will consist of 2 x 4-core CPUs, 105 GB RAM, 2 NICs, and 3 disks that make up 1 disk group: 1 cache tier disk and 2 capacity tier disks. So why the odd amount of memory, you may ask? 105 GB is the maximum amount per host I am able to assign to a bare metal instance in Ravello. The core count matters too: vRealize VMs deploy with 8 vCPUs, and if the hosts have fewer cores than this, deployment will fail.
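To sanity-check the sizing, here is a minimal Python sketch of the per-host specification and the vCPU maths described above. The field names are purely illustrative, not Ravello API fields:

```python
# Per-host "hardware" specification for the Ravello management hosts.
# Illustrative naming only; these are not Ravello API field names.
HOST_SPEC = {
    "sockets": 2,
    "cores_per_socket": 4,
    "memory_gb": 105,  # the maximum assignable to a Ravello bare metal instance
    "nics": 2,
}

VREALIZE_VCPUS = 8  # vRealize VMs deploy with 8 vCPUs

total_cores = HOST_SPEC["sockets"] * HOST_SPEC["cores_per_socket"]
assert total_cores >= VREALIZE_VCPUS, "vRealize deployment would fail on this host"
print(f"{total_cores} cores and {HOST_SPEC['memory_gb']} GB RAM per host")
```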
Host config
Each management host is deployed in the same way. The screenshots below will show which settings to change to make this work.
CPU and Memory
Name the host accordingly and then click on Advanced Configuration.
Set coresPerSocket to 4, change cpu-model to Broadwell (this value is case-sensitive), and change preferPhysicalHost to true.
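For reference, the three values can be captured as simple key/value pairs. A minimal sketch, assuming the settings are applied through the Advanced Configuration screen exactly as typed below (the dict itself is illustrative, not a Ravello API payload):

```python
# The three Advanced Configuration settings for each management host,
# with values exactly as they must be typed in the Ravello UI.
ADVANCED_CONFIG = {
    "coresPerSocket": "4",
    "cpu-model": "Broadwell",      # case-sensitive: "broadwell" will not work
    "preferPhysicalHost": "true",  # prefer running on bare metal
}

for key, value in ADVANCED_CONFIG.items():
    print(f"{key} = {value}")
```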
On the System tab, set the properties as below to allow support for ESXi 6.7.
Disks
OK, so there are actually 4 disks, not 3 as I said earlier; the 4th disk is for the OS. The layout is as follows (there is a machine-readable sketch after the list):
- Disk 1 – OS – 20 GB – LSI Logic Parallel Controller – important that this disk is at least 16 GB in size
- Disk 2 – vSAN Disk Group 1, Cache Disk 1 – 200 GB – LSI Logic Parallel Controller
- Disk 3 – vSAN Disk Group 1, Capacity Disk 1 – 2000 GB – LSI Logic Parallel Controller
- Disk 4 – vSAN Disk Group 1, Capacity Disk 2 – 2000 GB – LSI Logic Parallel Controller
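Here is the same disk layout in a machine-readable form, with a check for the 16 GB minimum on the OS disk. This is a planning aid only; the key names are my own, not Ravello or vSAN API fields:

```python
# Disk layout per management host: one OS disk plus one vSAN disk group
# (1 cache disk, 2 capacity disks). Key names are illustrative only.
DISKS = [
    {"id": 1, "role": "os",       "size_gb": 20,   "controller": "LSI Logic Parallel"},
    {"id": 2, "role": "cache",    "size_gb": 200,  "controller": "LSI Logic Parallel"},
    {"id": 3, "role": "capacity", "size_gb": 2000, "controller": "LSI Logic Parallel"},
    {"id": 4, "role": "capacity", "size_gb": 2000, "controller": "LSI Logic Parallel"},
]

os_disk = next(d for d in DISKS if d["role"] == "os")
assert os_disk["size_gb"] >= 16, "the ESXi OS disk must be at least 16 GB"

raw_capacity_gb = sum(d["size_gb"] for d in DISKS if d["role"] == "capacity")
print(f"Raw vSAN capacity per host: {raw_capacity_gb} GB")
```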
Note: You must attach an ISO to the CD-ROM drive or you will not be able to update the canvas. This will be the ESXi install ISO. Find out how to upload an ISO to Ravello in step 1 here.
Services
If the SSH service is assigned to the VM, remove it by deleting it. It is only used for external access, and you will still be able to SSH to the hosts from within the lab just fine.
Network
This is where the fun begins. I would recommend checking out this blog post from Oracle about customising networks. It outlines how to create additional VLAN IDs within a Ravello canvas.
In my planning guide, the management network for the hosts sits on VLAN ID 1611. Below, I will show how to tag a new VLAN on a NIC on the host. The same principle can then be applied to all the other VLANs that will be in use in the lab.
On the NICs tab, click Add VLAN interface and then enter the VLAN ID.
This will generate a warning that we need to address: VLAN ID 1611 has not been defined yet.
Follow the steps below to get to the Create VLAN dialogue.
Define the VLAN as below.
Now the VLAN needs to be associated with the switch port that the NIC is connected to.
We can now tag VLAN ID 1611 to the switch port.
Follow the steps above to create VLAN IDs for the other types of traffic in use: vMotion, vSAN, NSX VTEP, and vRealize.
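If you want a single place to track the lab's VLAN plan as you create them, something like the following sketch works. Only the management VLAN (1611) comes from my planning post; the other IDs are placeholders for the values you chose there:

```python
# VLAN plan for the lab. 1611 (management) comes from the planning post;
# the rest are placeholders -- substitute the IDs from your own plan.
VLAN_PLAN = {
    "management": 1611,
    "vmotion":    None,  # placeholder
    "vsan":       None,  # placeholder
    "nsx_vtep":   None,  # placeholder
    "vrealize":   None,  # placeholder
}

for traffic, vlan_id in VLAN_PLAN.items():
    print(f"{traffic}: VLAN {vlan_id if vlan_id is not None else 'TBD'}")
```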
Epilogue
In this blog post, we created the “hardware” basis for our VMware Cloud Foundation management cluster to run on top of.
Check out Part 4: Management Cluster Software to see how to install ESXi onto the hosts and configure them in preparation for SDDC manager integration.