Monday, February 22, 2016

Veeam: How to enable the Direct NFS Access backup feature

In this article we will configure our Veeam Backup infrastructure to use the Direct NFS Access transport mechanism. Since this infrastructure uses iSCSI (for the Backup Repository) and NFS (for the VMware VM datastores), we need to make some changes to enable Direct NFS Access.

First, I will share the design of a Veeam Backup infrastructure without Direct NFS Access.



Note: The Direct NFS Access transport mechanism is only available in Veeam Backup & Replication v9.

The diagram above shows the Veeam backup flow for iSCSI versus NFS.

In this case we did not have the proper configuration for the Direct NFS Access transport mechanism to work.

In this environment we have a Veeam Backup Server and a Veeam Backup Proxy.

Current Veeam Backup Infrastructure:

192.168.27.x is the iSCSI subnet (vLAN 56)
192.168.23.x is the NFS subnet (vLAN 55)

192.168.6.x (vLAN 25) is the management subnet, used by the Veeam Backup Server, vCenter and most of our ESXi hosts. But we still have some ESXi hosts that use our old management subnet, 192.168.68.x.
This is why we built a new proxy on that subnet, 192.168.68.x (vLAN 29).

The Veeam Backup Server (physical server) has:

1 interface (actually 2 NICs with NIC teaming) on 192.168.6.x for the management network.
1 interface (also 2 NICs with NIC teaming) on 192.168.27.x, using the iSCSI initiator for the iSCSI connections.

The Veeam Proxy (virtual machine) has:
1 interface on 192.168.68.x (vLAN 29) for the management network.

With this initial configuration the Veeam Backup Server and Proxy could never use the Direct NFS Access transport mechanism. All backups always ran with [nbd] (network block device, or network mode) or [hotadd] (virtual appliance mode).

Future Veeam Backup Infrastructure: we need to add the following.

On the Veeam Backup Server (physical server):
1 interface (2 NICs with NIC teaming) on 192.168.6.x for the management network.
1 interface (also 2 NICs with NIC teaming) on 192.168.27.x, using the iSCSI initiator for the iSCSI connections.
Add: 1 interface on 192.168.23.x for the NFS connections.

On the Veeam Proxy (virtual machine):
1 interface on 192.168.68.x for the management network.
Add: 1 interface on 192.168.27.x, using the iSCSI initiator for the iSCSI connections.
Add: 1 interface on 192.168.23.x for the NFS connections.

All new interfaces were properly configured on the correct vLANs.
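
For reference, here is the full addressing plan in one place, written as a small Python sketch (the subnets and vLANs are the ones listed above, assuming /24 masks; the role names are just labels I chose for this summary):

```python
# Summary of the target network plan for Direct NFS Access.
# Subnets and vLANs are taken from the design above; the keys are only labels.
NETWORK_PLAN = {
    "management":     {"subnet": "192.168.6.0/24",  "vlan": 25, "used_by": ["Veeam Backup Server", "vCenter", "most ESXi hosts"]},
    "old_management": {"subnet": "192.168.68.0/24", "vlan": 29, "used_by": ["Veeam Proxy", "older ESXi hosts"]},
    "iscsi":          {"subnet": "192.168.27.0/24", "vlan": 56, "used_by": ["Veeam Backup Server", "Veeam Proxy (new)", "Backup Repository"]},
    "nfs":            {"subnet": "192.168.23.0/24", "vlan": 55, "used_by": ["Veeam Backup Server (new)", "Veeam Proxy (new)", "NFS datastores"]},
}

if __name__ == "__main__":
    # Print a one-line summary per network so the plan is easy to review.
    for role, net in NETWORK_PLAN.items():
        print(f"{role:15} vLAN {net['vlan']:>2}  {net['subnet']:17} {', '.join(net['used_by'])}")
```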

Note: All the NFS interface subnets, or IPs (from the Veeam Server and the Veeam Proxy), need read and write permissions on the storage NFS share, so that Veeam and the storage can communicate over NFS.
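
A quick way to confirm that the Veeam Server or Proxy can actually reach the storage over the new NFS interface is a simple TCP check against the NFS port. This is only a minimal sketch, assuming the storage answers NFS over TCP on the default port 2049; the IP address below is a placeholder, not the real filer address:

```python
#!/usr/bin/env python3
"""Minimal reachability check from a Veeam Server/Proxy to the NFS storage.

Assumptions: the storage serves NFS over TCP on the default port 2049 and
NFS_SERVER below is a placeholder for the filer's IP on the NFS subnet.
"""
import socket

NFS_SERVER = "192.168.23.10"   # placeholder: storage IP on the NFS subnet (vLAN 55)
NFS_PORT = 2049                # default NFS port over TCP


def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    ok = can_reach(NFS_SERVER, NFS_PORT)
    print(f"NFS port {NFS_PORT} on {NFS_SERVER}: {'reachable' if ok else 'NOT reachable'}")
```

Run it from the Veeam Backup Server and from the Proxy; both should report the port as reachable before Direct NFS Access can work.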

After these changes we need to set up Veeam so that it can use the proper transport mode.

Main configurations that we should check and configure:
  • Make sure you are on Veeam v9.
  • Make sure each Veeam VMware Backup Proxy has communication on the NFS network.
  • Ensure the NFS storage allows those proxies read and write permissions on the NFS share (see the sketch after this list).
  • Set the proxies to use the “Automatic Selection” transport mode.
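
If there is any doubt about the export permissions, the exports and the clients allowed to mount them can be listed from any Linux machine on the NFS subnet with showmount. The sketch below just wraps that command; it assumes showmount (from nfs-utils) is installed, that the storage answers MOUNT protocol queries, and the storage IP is again a placeholder. Note that it only shows which clients may mount each export; the read/write rules still have to be confirmed on the storage side.

```python
#!/usr/bin/env python3
"""List the NFS exports (and allowed clients) published by the storage.

Assumptions: showmount from nfs-utils is installed on this machine, the
storage answers MOUNT protocol queries, and NFS_SERVER is a placeholder.
"""
import subprocess
import sys

NFS_SERVER = "192.168.23.10"   # placeholder: storage IP on the NFS subnet


def list_exports(server: str) -> list[str]:
    """Run 'showmount -e <server>' and return the export lines (header removed)."""
    result = subprocess.run(
        ["showmount", "-e", server],
        capture_output=True, text=True, check=True,
    )
    # The first line of the output is "Export list for <server>:".
    return result.stdout.splitlines()[1:]


if __name__ == "__main__":
    try:
        for export in list_exports(NFS_SERVER):
            print(export)   # e.g. "/vol/datastore01  192.168.23.0/24"
    except (OSError, subprocess.CalledProcessError) as exc:
        sys.exit(f"Could not query exports on {NFS_SERVER}: {exc}")
```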
Limitations for the Direct NFS Access Mode

1. Veeam Backup & Replication cannot parse delta disks in the Direct NFS access mode. For this reason, the Direct NFS access mode has the following limitations:

  • The Direct NFS access mode cannot be used for VMs that have at least one snapshot.
  • Veeam Backup & Replication uses the Direct NFS transport mode to read and write VM data only during the first session of the replication job. During subsequent replication job sessions, the VM replica will already have one or more snapshots. For this reason, Veeam Backup & Replication will use another transport mode to write VM data to the datastore on the target side. The source side proxy will keep reading VM data from the source datastore in the Direct NFS transport mode.

2. If you enable the Enable VMware tools quiescence option in the job settings, Veeam Backup & Replication will not use the Direct NFS transport mode to process running Microsoft Windows VMs that have VMware Tools installed.

3. If a VM has some disks that cannot be processed in the Direct NFS access mode, Veeam Backup & Replication processes these VM disks in the Network transport mode.


The Direct NFS Access feature is used automatically if the “Direct storage access” or “Automatic selection” transport mode is selected for a VMware Backup Proxy in the Veeam Backup & Replication user interface.

First we need to set up our proxies (the default Veeam proxy and the newly created Veeam proxy) to run with the proper mode.

Go to the proxies section, right-click the proxy and choose Properties:


After choosing the proxy, let's check the transport mode and choose the right one.



We should choose Automatic selection. The proxy will then pick the right transport mode to perform the backup; if Direct NFS Access is possible, it will be used for the VMs that reside on NFS volumes.

Option 3: Even though Failover to network mode is enabled by default, you should check that it is enabled. This prevents backup jobs from failing: if a transport mode is not available, Veeam will fall back to network mode (slower performance).

We should perform all the above tasks for all our proxies (even the default one, called VMware Backup Proxy).

After we change the transport mode in the proxies section, we need to change which proxy each backup job will use.



In the job configuration we should also choose Automatic selection (Veeam will pick the best proxy for the backup and for Direct NFS).

If not all of your proxies have access to the NFS storage, then you should follow options 1-1 and 1-2 in the image above: choose the proxy that has connectivity and permissions to the NFS storage, and the job will always run with that proxy.

Direct NFS Access is available for all types of jobs. This example uses NetApp storage: Enterprise Plus installations can use Backup from Storage Snapshots for NetApp storage, while all other editions can use Direct NFS Access.

Note: In the next article we will talk about Veeam Backup from Storage Snapshots for NetApp storage.

We can check whether a backup is running with Direct NFS Access in the job log.


These are the transport modes that we can see in the backup job log:

[nfs] - Direct NFS Access mode.
[san] - Direct SAN Access mode, for iSCSI and Fibre Channel (which doesn't work with virtual disks on NFS storage).
[nbd] - Network Block Device mode (or just network mode, the same option we chose for the failover).
[hotadd] - Virtual Appliance (HotAdd) mode.

Note: One of these transport modes is shown in the job log after each virtual disk backup.
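
As a small helper, these per-disk markers can be pulled out of a saved job log with a few lines of Python. This is just a sketch: it assumes the job log or session output has been exported to a plain text file (the file name below is a placeholder) and it simply counts the [nfs]/[san]/[nbd]/[hotadd] tags listed above:

```python
#!/usr/bin/env python3
"""Count the transport-mode markers ([nfs], [san], [nbd], [hotadd]) in a job log.

Assumption: the Veeam job log / session output was saved as plain text;
LOG_FILE below is only a placeholder path.
"""
import re
from collections import Counter
from pathlib import Path

LOG_FILE = Path("backup_job_log.txt")          # placeholder: exported job log
MARKERS = ("nfs", "san", "nbd", "hotadd")      # the transport tags listed above


def count_transport_modes(text: str) -> Counter:
    """Return how many times each transport-mode marker appears in the log text."""
    pattern = r"\[(%s)\]" % "|".join(MARKERS)
    found = re.findall(pattern, text, flags=re.IGNORECASE)
    return Counter(marker.lower() for marker in found)


if __name__ == "__main__":
    counts = count_transport_modes(LOG_FILE.read_text(errors="ignore"))
    for mode in MARKERS:
        print(f"[{mode}]: {counts.get(mode, 0)} disk(s)")
    if counts.get("nfs", 0) == 0:
        print("No [nfs] markers found - Direct NFS Access was not used in this log.")
```

If the NFS proxies and permissions are set up correctly, the disks that live on NFS datastores should show up with the [nfs] marker.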

This is the final design of the flow for the Veeam Direct NFS Access backup transport mechanism.




Performance improvements that we will have with this new configuration:

Direct NFS Access delivers a significant improvement in read and write I/O for the relevant job types, so definitely consider using it for your jobs. This improvement helps in a number of areas:
  • Significantly reduce the amount of time a VMware snapshot is open during a backup or replication job (especially the first run of a job or an active full backup)
  • Reduce the amount of time a job requires for extra steps (in particular the sequence of events for HotAdd) to mount and dismount the proxy disks
  • Increase I/O throughput on all job types
FINAL NOTE: I would like to thank Rick Vanover from Veeam for all the help and support in implementing Direct NFS Access, and also for some of the ideas used to write this article (I even used some of his explanations). For all that, thanks again, Rick.

I hope this helps you improve your Veeam Backup infrastructure when using NFS.


9 comments:

  1. Great diagrams, I was happy to see the Veeam Proxy VM's network paths better shown on the renewed diagram. However, I noticed a couple of things on your latest diagram I find odd:

    1) In your hypervisor cluster it shows bonded NICs for vmnics 4 & 5 for both iSCSI and NFS (making me believe you went ahead and moved vmnics 4 & 5 from SAN (NFS) dedicated to SAN (NFS and iSCSI) dedicated, for two distinct subnets, AKA you trunked them). While this does work, you really should have dedicated iSCSI NICs per host, with each NIC in its own subnet, to properly utilize MPIO with iSCSI (VMFS) datastores.

    Replies
    1. Hi Sewwy,

      First, I don't use this blog anymore; I have migrated to a new one, www.provirtualzone.com.
      You can ask your questions in the new one: http://www.provirtualzone.com/veeam-how-to-enable-direct-nfs-access-backup-access-feature/


      Regarding your questions:

      In this configuration there are no iSCSI datastores on the ESXi hosts. The datastores only use NFS volumes.

      iSCSI is only configured for the Backup Repository (in this case on the Veeam Backup Server). On the Veeam Backup Server, and also on the Veeam Proxy, iSCSI and NFS are separated on different network cards.
      This type of configuration is just so that Veeam can better handle the iSCSI and NFS traffic (since the destination and the source use different protocols).

      To use proper iSCSI LUNs in the hypervisor you need port binding, different VMkernel ports and, yes, MPIO. That is not the case here, since this is just so that Veeam can use the same VLAN/subnet to be able to restore/back up using the Veeam Proxy (which is a VM).

      Also, our NetApp doesn't have iSCSI, only NFS.

    2. Thanks for the info however:

      1) comments are not synced between your old domain and your new domain, as if both are two completely different pages instead of simply linking the new domain to the old content.

      2) That's exactly how I just reconfigured my design. However, I was thinking of adding VLANs to my hypervisor trunk ports to add 2 subnets each for dedicated iSCSI MPIO. The reason: using just NFS, even with multiple NICs in the port-channel, when doing a svMotion from one datastore to another you can only use one NIC max. In your case with 10 Gb NICs that's not that bad. For others such as myself who have 4 x 1 Gbps NICs, having svMotion limited to 1 Gbps is slow. So I can load balance the MPIO over VLANs and let the trunk port-channel handle the load balancing based on the two different IPs and sources, thus being able to utilize more than what a svMotion could with a single session over bonded NICs with NFS.

      I plan to change mine in this manner; I will blog about it as well and share it with you, because I love your network diagram and am using it as a template for my own.

      Thanks for sharing your experience! Also, if you are on the Veeam forums, what's your handle?

    3. Hi Zewwy,

      Again, this blog is already closed, so there is no message sync between the blogs and I only update the new one. Check www.provirtualzone.com for new articles.

      We always need to look at any article and check what is best for our own environment. If I have 10Gb interfaces or 1Gb interfaces, my configuration will be different.
      For 2 or more 1Gb interfaces, yes, I would use different VMkernel ports and different subnets/VLANs. This should be done for NFS or vMotion, or even if we want to use new features like the provisioning TCP/IP stack.

      But this article was focused on the backup between iSCSI and NFS from the Veeam Server / Veeam Proxy and the ESXi hosts, and on how to handle those 2 types of protocols.

      Maybe in the future I will do a better one where I can draw the flow for vMotion, NFS or iSCSI with MPIO, with more than one subnet/VLAN and VMkernel. But there is not much time to do this kind of stuff ;)

      Thanks again for your comments.

  2. This comment has been removed by the author.

  3. Thanks for sharing the Direct NFS Access transport mechanism. Actually, I did not know how to use Veeam proxies, but now I have a clear idea of how to set everything up. I am working with microleaves Dedicated Proxies and am thinking of setting it up with them. I hope that if I do as you indicate in the article, it will work fine for me.

    Replies
    1. This is an old blog; I don't use or update it anymore.

      The new blog is www.provirtualzone.com

      Thank You
