Before we begin
If you have not checked out what this series is about, then please take a look at the previous parts below.
Management Cluster “Hardware”.
Part way through writing this blog series, VMware released version 3.5 of VCF. From this point on, all documentation referenced will be for VCF 3.5. If we check out the release notes for VCF 3.5, we can see that a new installation is broken down into 3 phases: Phase 1 is planning and prep, which I have already covered; Phase 2 is building the ESXi servers, which this blog post will cover; and Phase 3 is deploying VMware Cloud Foundation.
The minimum requirement for a VCF management cluster is 4 hosts in a standard architecture deployment. This guide will walk through creating the 4 ESXi management hosts I outlined in the planning blog post. We will end up with something that resembles the layout below in Ravello.
Each host specification will consist of 4 CPUs, 105 GB RAM, 2 NICs, and 6 disks to make up 2 disk groups, each consisting of 1 cache tier and 2 capacity tier disks. So why the odd amount of memory, you may ask? This is the maximum amount per host I am able to assign to a bare metal instance in Ravello. I am working with 4 CPUs per host, as each CPU seems to consist of a single core. Most VMs that will be running require a minimum of 4 vCPUs. vRealize VMs require 8, but I will cross that bridge when I come to it.
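As a quick sanity check on the sizing above, here is a short sketch (the 4-CPU and 105 GB figures are the Ravello limits just described; the 4 and 8 vCPU requirements are as stated):

```python
# Sanity-check the lab sizing described above (illustrative only).
HOSTS = 4
CPUS_PER_HOST = 4      # Ravello CPUs appear to be single-core
RAM_PER_HOST_GB = 105  # Ravello bare-metal per-host maximum

total_cpus = HOSTS * CPUS_PER_HOST
total_ram_gb = HOSTS * RAM_PER_HOST_GB

print(f"Cluster totals: {total_cpus} vCPUs, {total_ram_gb} GB RAM")
# → Cluster totals: 16 vCPUs, 420 GB RAM

# Most management VMs need 4 vCPUs, which a single host can satisfy;
# vRealize VMs need 8, which exceeds one 4-CPU host.
print("vRealize fits on one host:", 8 <= CPUS_PER_HOST)
# → vRealize fits on one host: False
```

This is why the vRealize VMs are the bridge to be crossed later: their 8-vCPU requirement exceeds what any single 4-CPU host can provide.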
Each management host is deployed in the same way. The screenshots below will show which settings to change to make this work.
CPU and Memory
Name the host accordingly and then click on Advanced Configuration.
Change cpu-model to Broadwell (this value is case-sensitive), and set preferPhysicalHost to true.
On the System tab, set the properties as below to allow support for ESXi 6.7.
OK, so there are 7 disks, not the 6 I mentioned earlier; the 7th disk is for the OS. The layout is as follows:
- Disk 1 OS – 20GB – LSI Logic Parallel Controller – important: this disk must be at least 16GB in size
- Disk 2 vSAN Disk Group 1, Cache Disk 1 – 100GB – LSI Logic Parallel Controller
- Disk 3 vSAN Disk Group 2, Cache Disk 1 – 100GB – LSI Logic Parallel Controller
- Disk 4 vSAN Disk Group 1, Capacity Disk 1 – 1000GB – LSI Logic Parallel Controller
- Disk 5 vSAN Disk Group 1, Capacity Disk 2 – 1000GB – LSI Logic Parallel Controller
- Disk 6 vSAN Disk Group 2, Capacity Disk 1 – 1000GB – LSI Logic Parallel Controller
- Disk 7 vSAN Disk Group 2, Capacity Disk 2 – 1000GB – LSI Logic Parallel Controller
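For reference, a rough capacity calculation from the layout above (a sketch, not an official sizing: it assumes the default vSAN FTT=1 with RAID-1 mirroring, counts only the capacity tier since cache disks contribute no usable capacity, and ignores vSAN overhead such as metadata and slack space):

```python
# Approximate raw and usable vSAN capacity from the disk layout above.
DISK_GROUPS_PER_HOST = 2
CAPACITY_DISKS_PER_GROUP = 2
CAPACITY_DISK_GB = 1000
HOSTS = 4

raw_per_host_gb = DISK_GROUPS_PER_HOST * CAPACITY_DISKS_PER_GROUP * CAPACITY_DISK_GB
raw_cluster_gb = raw_per_host_gb * HOSTS

# With FTT=1 (RAID-1 mirroring) every object is stored twice,
# so usable capacity is roughly half of raw, before vSAN overhead.
usable_cluster_gb = raw_cluster_gb / 2

print(f"Raw per host:    {raw_per_host_gb} GB")     # → 4000 GB
print(f"Raw cluster:     {raw_cluster_gb} GB")      # → 16000 GB
print(f"~Usable (FTT=1): {usable_cluster_gb:.0f} GB")  # → 8000 GB
```

So the lab ends up with roughly 16 TB of raw vSAN capacity and, very approximately, half of that usable, which is plenty for a nested VCF deployment.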
Note: You must attach an ISO to the cdrom or you will not be able to update the canvas. This will be the ESXi install ISO. Find out how to upload an ISO to Ravello in step 1 here.
If the SSH service is assigned to the VM, remove it by deleting it. It is only used for external access; you will still be able to SSH to the hosts from within the lab just fine.
This is where the fun begins. I would recommend checking out this blog post from Oracle about customising networks. It outlines how to create additional VLAN IDs within a Ravello canvas.
In my planning guide, the management network for the hosts sits on VLAN ID 1611. I will show below how to tag a new VLAN to a NIC on the host; this principle can then be applied to all other VLANs that will be in use in the lab.
On the NICs tab, click Add VLAN interface and then enter the VLAN ID.
This will generate a warning that we need to address. VLAN ID 1611 has not been defined yet.
Follow the steps below to get to the Create VLAN dialogue.
Define the VLAN as below.
Now the VLAN needs to be associated with the switch port that the NIC is connected to.
We can now tag VLAN ID 1611 to the switch port.
Follow the steps above to create VLAN IDs for the other types of traffic in use: vMotion, vSAN, NSX VTEP, and vRealize.
In this blog post, we created the basis for our VMware Cloud Foundation management cluster to run on top of.
Check out Part 4: Management Cluster Software to see how to install ESXi onto the hosts and configure them in preparation for SDDC manager integration.