HPE Nimble Storage SFA. What's that about then? Part 3 – Volume creation and presentation to Veeam.

Nimble SFA Volume creation and presentation to Veeam

Following on from Part 2 – Initial array setup of this blog series, we will take a look at how to create a new volume on the array and how to present it to Veeam for use as a backup repository. There are a few considerations to make during setup, which I will cover here.

Configuration

The SFA is block-based storage. It does not run any file services like other traditional deduplication appliances, so the storage needs to be presented to the host using either a Fibre Channel HBA or an iSCSI software initiator / HBA. I will be using the software initiator on a Windows Server-based system, as Nimble has a very nice iSCSI management interface that is part of the Nimble Windows Toolkit. The Nimble Windows Toolkit also checks for any issues with Windows patches during installation, as well as altering iSCSI timeout values and so on. If a critical patch that may have an impact on iSCSI traffic is not installed, it will alert you to this fact.
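If you are rolling this out yourself, it is worth confirming the Microsoft iSCSI initiator service is running and set to start automatically before installing the toolkit. A minimal PowerShell sketch, run from an elevated session:

    # Make sure the Microsoft iSCSI initiator service starts
    # automatically and is running before installing the NWT
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI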

Create a volume on the SFA

To create a volume, log into the array, click on Manage / Data Storage and click the + sign to start the new volume wizard.

NimbleSFA19

Define the settings required for the volume: name, size, access policies and performance policy.

NimbleSFA18
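As an aside, the same volume can be created over SSH with the Nimble OS CLI if you prefer scripting the build. A rough sketch with example values; the exact option names may differ between NimbleOS releases, so check the CLI reference:

    # Create a 10 TB volume (size is specified in MB) using the
    # Backup Repository performance policy -- example values only
    vol --create Veeam-Repo01 --size 10485760 --perfpolicy "Backup Repository"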

Let's take a closer look at the Backup Repository performance policy. Notice that the policy has deduplication enabled by default; the other available policies do not. Makes sense.

NimbleSFA45

You may have noticed I have access policies set as well. Access will be limited by initiator IQN and CHAP credentials. These can be set up in the locations identified below.

NimbleSFA46
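To fill in the IQN side of the access policy you need the initiator name of the Windows host. Assuming the iSCSI service is running, PowerShell can report it directly:

    # Show this host's iSCSI qualified name (IQN) to add to the
    # Nimble access policy
    (Get-InitiatorPort | Where-Object ConnectionType -eq "iSCSI").NodeAddress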

Add the volume to a Windows server

As I mentioned earlier, the storage needs to be added via an iSCSI connection, and I will be utilising the Nimble Connection Manager that is part of the Nimble Windows Toolkit to facilitate this.

Launch Nimble Connection Manager and click Add. It should discover your SFA array. I will assume you know which NICs to include and exclude for MPIO access.

NimbleSFA20

Once you have added the array, switch to the Nimble Volumes tab. The volume you just created should be listed.

NimbleSFA21

Highlight the volume and click Connect. Define CHAP credentials if you have configured CHAP access.

NimbleSFA22
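For completeness, the connection can also be made with the native Windows iSCSI cmdlets, although Nimble Connection Manager remains the recommended route because it manages the MPIO sessions for you. A rough sketch with a placeholder portal address and CHAP credentials:

    # Register the array's discovery portal, then connect the
    # target persistently with one-way CHAP -- example values
    New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true `
        -AuthenticationType ONEWAYCHAP `
        -ChapUsername "veeamchap" -ChapSecret "ExampleSecret123"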

Now switch to Windows Disk Management. You should see a new uninitialised disk. NOTE: if you have not done so already, open a command prompt as administrator, run diskpart and, at the DISKPART> prompt, run automount disable. This stops new volumes from auto-mounting and potentially prompting to format the partition. Initialise the disk.

NimbleSFA23
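Both the automount change and the initialisation can be scripted. A short sketch; the disk number is an example, so confirm yours with Get-Disk first:

    # At an elevated command prompt:
    #   diskpart
    #   DISKPART> automount disable
    #   DISKPART> exit

    # Then initialise the new disk from PowerShell (disk 1 is an
    # example -- confirm the number with Get-Disk)
    Get-Disk -Number 1 | Initialize-Disk -PartitionStyle GPT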

Format the volume and assign a drive letter. NOTE: depending on how large the volume will eventually be, select an appropriate allocation unit size; 4 KB has a maximum volume size of 16 TB. Check out this Microsoft post for more info on NTFS allocation unit sizes.

NimbleSFA24
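The partition and format step can equally be done in PowerShell. A sketch following on from the disk initialised above; the drive letter and label are placeholders, and the 64 KB allocation unit size is an example choice for a volume that may grow beyond the 16 TB limit of 4 KB clusters:

    # Partition the disk, assign a drive letter, and format NTFS
    # with a 64 KB allocation unit size (max volume size 256 TB)
    New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter R |
        Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 `
            -NewFileSystemLabel "VeeamRepo"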

Add the storage to Veeam as a backup repository

Launch Veeam Backup & Replication, click into Backup Infrastructure and then select Add Backup Repository.

NimbleSFA25

Give the repository a name.

NimbleSFA26

Choose storage type Microsoft Windows Server. Do not choose Deduplicating storage appliance.

NimbleSFA27

Choose the volume you created earlier in Disk Management.

NimbleSFA28

Note the following options, as they deviate from the out-of-the-box settings. Disable the limit on maximum concurrent tasks. There is some debate on whether the concurrent task limit should be enabled, especially if utilising ReFS block clone technology in Windows Server 2016. For the purposes of this setup though, I am sticking with the recommendations from Nimble. Click on Advanced.

NimbleSFA29

Set the advanced options as follows.

NimbleSFA30

The Veeam integration guide states the following as to why the options above should be set:

  • Align backup data file blocks. The SFA uses a fixed block size for deduplication. Veeam aligns the VM
    data that is saved to a backup file to a 4 KB block boundary. This option provides better deduplication
    across backup files, but it can result in a greater amount of unused space on the storage device and a
    higher level of fragmentation.
  • Decompress backup data blocks before storing. This option allows any data that is compressed across
    the network during a backup to be uncompressed before it lands on the SFA. Decompressing compressed
    data enables better deduplication rates on the SFA.
  • Use per-VM backup files. This option allows each VM being backed up to the SFA to be in its own backup
    chain. By default, Veeam places all of the VMs in a job into a single backup chain. For example, if the
    backup job contains 10 VMs, those 10 VMs are placed into one large backup chain. Per-VM backup would
    place each VM in this example into its own backup chain. The SFA can achieve better performance
    from this setting because it allows each VM to be its own data stream. Creating multiple streams of backup
    data produces better performance. This setting increases the write queue depth on the SFA.
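For repeatable builds, the same repository and advanced options can be scripted through the Veeam PowerShell snap-in. This is a sketch only: the server name and folder path are placeholders, and switch names such as -AlignDataBlocks, -DecompressDataBlocks and -UsePerVMFile should be verified against your Veeam version with Get-Help Add-VBRBackupRepository.

    # Register the NTFS volume as a Windows repository with the
    # Nimble-recommended advanced options (example names/paths)
    Add-PSSnapin VeeamPSSnapin
    $srv = Get-VBRServer -Name "veeam-repo01.lab.local"
    Add-VBRBackupRepository -Name "Nimble SFA Repo" -Server $srv `
        -Folder "R:\Backups" -Type WinLocal `
        -AlignDataBlocks -DecompressDataBlocks -UsePerVMFile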

Clicking OK and Next pops up the following useful titbit. Veeam 9.5 loves to push ReFS for its block clone integration. The reason it popped up is that I have the NTFS volume mounted on a Windows Server 2016 server. Server 2016 supports block clone technology, but the NTFS volume does not; it would need to be formatted with ReFS for this to work, hence the popup. For the purposes of my testing though, I did not want to muddy the waters for the deduplication levels by introducing another space-saving technology.

NimbleSFA31
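Had I wanted the block clone savings instead, the fix is simply to format the volume ReFS rather than NTFS. A one-line sketch with the same placeholder drive letter, using the 64 KB cluster size generally recommended for Veeam ReFS repositories:

    # Format the repository volume ReFS with 64 KB clusters so
    # Veeam can use block clone (fast clone) -- example values
    Format-Volume -DriveLetter R -FileSystem ReFS `
        -AllocationUnitSize 65536 -NewFileSystemLabel "VeeamRepoReFS"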

Define the mount server for vPower NFS.

NimbleSFA32

Review the settings.

NimbleSFA33

And you are done. The new repository will appear in the list of available repositories.

NimbleSFA34

Check out PART 4 for Veeam job configuration.

Ian
