
Re: NFS and LACP


Hello,

 

Sounds pretty good to me.  You'll find some who don't like the idea of using IP hash for storage networking, but their argument against it is usually that it adds complexity.  Done right, though, you'll achieve a somewhat balanced IP storage network.  The "somewhat" depends on how active the VMs on each datastore are: you may well send traffic down all four links, but if one datastore sees most of the activity, its link (outside of a failure) can carry many times the traffic of the others.

 

What NetApp systems are you running and how many links will you use, per controller, for NFS traffic?

 

A few observations:

 

It would be simpler to use a single VMkernel port for NFS traffic at both types of sites.  I don't see the benefit of using two VMkernel ports.  You can still use all four vmnics with IP hash and mark them all Active, backed, of course, by MLACP or LACP at each site.  The load balancing, as you know, comes from hashing the VMkernel IP address against the storage array IP address.

 

Since you want to use all four links from the ESXi host to the switch, you'll want four consecutive NFS IP addresses (one on the interface group and three aliases) on each controller.  You configure one storage array NFS IP address per link, not per datastore, but you mount each datastore against one IP address using a manual round-robin technique.  For example, datastore1 to NFS-IP-1, datastore2 to NFS-IP-2, datastore3 to NFS-IP-3, datastore4 to NFS-IP-4, then datastore5 back to NFS-IP-1, and so on.  The hashing works the same either way.  With one VMkernel port, four uplinks, and IP hash, this is how the hashing works out:

 

(Only converting the last octet to hex, since the first three octets are identical on both sides and XOR to zero)

Uplinks = 4, indexed 0 through 3

VMkernel port: 10.0.0.10 --> hex A

NFS1: 10.0.0.11 --> hex B

NFS2: 10.0.0.12 --> hex C

NFS3: 10.0.0.13 --> hex D

NFS4: 10.0.0.14 --> hex E

VMkernel-to-NFS IP hash:

(A xor B) mod 4 = 1 <-- uses uplink 1

(A xor C) mod 4 = 2 <-- uses uplink 2

(A xor D) mod 4 = 3 <-- uses uplink 3

(A xor E) mod 4 = 0 <-- uses uplink 0
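
If you want to sanity-check that math against your own addressing plan, here's a minimal Python sketch of the same calculation.  The addresses and datastore names are just the hypothetical values from the example above; ESXi hashes the full source and destination addresses, but with a shared /24 only the last octet matters.

# Sketch of ESXi "Route based on IP hash" uplink selection:
#   uplink = (src_ip XOR dst_ip) mod number_of_uplinks
import ipaddress

UPLINKS = 4

def uplink_for(src, dst):
    """Return the 0-based uplink index IP hash picks for this pair."""
    s = int(ipaddress.IPv4Address(src))
    d = int(ipaddress.IPv4Address(dst))
    return (s ^ d) % UPLINKS

vmk = "10.0.0.10"                         # single VMkernel port
nfs_ips = ["10.0.0.11", "10.0.0.12",      # interface group IP
           "10.0.0.13", "10.0.0.14"]      # plus three aliases

# Manual round-robin: datastoreN mounts to NFS IP (N-1) mod 4
for n in range(1, 9):
    ip = nfs_ips[(n - 1) % len(nfs_ips)]
    print("datastore%d -> %s -> uplink %d" % (n, ip, uplink_for(vmk, ip)))

Running that shows datastore1 through datastore4 landing on uplinks 1, 2, 3, and 0 respectively, then repeating, which confirms that consecutive NFS IPs spread the mounts across all four uplinks.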

 

Just to be clear, it sounds like you're going to use dynamic multimode interface groups (NetApp's fancy way of saying LACP), which is what you want.  Even if you don't use four links from each controller to the switch(es), you'll still load balance on the return path; you'll just double up traffic on each link if you use two.

 

If you have many NFS datastores, don't forget to increase the maximum number of mounted NFS volumes (the NFS.MaxVolumes advanced setting) from its default of 8 to something you're not likely to exceed.
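
On ESXi 5.x you can set that from the shell with esxcli system settings advanced set -o /NFS/MaxVolumes -i 64.  If you'd rather script it across hosts, here's a rough pyVmomi sketch; the hostname, credentials, and the value 64 are placeholders, and you'd point it at each host in turn.

# Rough sketch: raise NFS.MaxVolumes on one ESXi host via pyVmomi.
# Hostname, credentials, and the target value are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()    # lab only; verify certs in production
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]                    # the single host we connected to
    host.configManager.advancedOption.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="NFS.MaxVolumes", value=64)])
finally:
    Disconnect(si)

The usual guidance is to bump Net.TcpipHeapSize and Net.TcpipHeapMax alongside NFS.MaxVolumes; those two require a host reboot to take effect.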

 

Also, you might want to experiment with jumbo frames to see if your environment benefits.  If you do, set an MTU of 9000 end to end: VMkernel port, vSwitch, physical switches, and the NetApp interfaces.

 

It sounds like you've already read these, but check out the following whitepapers for more info.

 

Best Practices for Running vSphere on NFS Storage

 

http://www.netapp.com/us/media/tr-3802.pdf

 

Cheers,

 

Mike

 

http://VirtuallyMikeBrown.com

https://twitter.com/VirtuallyMikeB

http://LinkedIn.com/in/michaelbbrown

