Hi all,
Having an issue here with broken connectivity to NFS datastores from a variety of hosts in a FlexPod setup.
Here are the facts:
- Network config has been checked:
  - correct VLANs are allowed on all trunks for the L2 domain (VLAN2490)
  - MTU is 9216 throughout the switching layer
  - the VLAN in use is purely L2 (unrouted) and the hosts/NAS appliances are directly connected
- All 8 hosts can mount the NFS datastores in question across each of the 4 x FAS2240-2 controllers (two datastores per head)
- In the past, all hosts have been able to mount and use all datastores. No changes have been made.
- Some hosts can still access some of the NFS stores fully, including "ls /vmfs/volumes/volume-label-here"
- These same hosts cannot access the same NFS stores once they are unmounted and remounted (see the remount sketch after this list)
- Trying to list the filesystem after remounting simply times out, and the datastore is greyed out in vCenter during this time
- The NAS units report (with "options nfs.mount.trace" enabled) that the hosts have access to the export when they mount, e.g.:
Thu Jun 13 07:30:35 GMT [NAS-1-2: MNTPool09:info]: Client 10.0.249.17 (xid 1781438220) in mount, has access rights to path /vol/infra_datastore_1
- Rebooting affected hosts brings no improvement
- I have not rebooted the NAS controllers yet, as that would be disruptive to live VMs
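For completeness, the unmount/remount that reproduces the problem can be driven from the ESXi shell along these lines (just a sketch; I'm assuming the datastore label matches the volume name, infra_datastore_1):
# list currently mounted NFS datastores and their state
esxcli storage nfs list
# unmount, then remount the datastore from the same NAS interface
esxcli storage nfs remove -v infra_datastore_1
esxcli storage nfs add -H 10.0.249.10 -s /vol/infra_datastore_1 -v infra_datastore_1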
Relevant Host network interface:
vmk2: NFS-9000 unrouted VLAN2490, access port (untagged), MTU 9000, 10.0.249.17/26
Relevant NAS network interface:
Po12-2490 unrouted VLAN2490, trunk port (tagged), MTU 9000, 10.0.249.10/26
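Given both ends are set to MTU 9000, jumbo frames can be sanity-checked end-to-end with vmkping along these lines (a sketch: 8972 bytes of ICMP payload plus 28 bytes of IP/ICMP header makes a full 9000-byte packet; -d sets don't-fragment, and -I to pin the vmkernel port should be available on 5.1):
# full-size jumbo frame with don't-fragment set, sourced from vmk2
vmkping -I vmk2 -d -s 8972 10.0.249.10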
Export for NFS volume:
/vol/infra_datastore_1 -sec=sys,rw=10.0.249.0/26,root=10.0.249.0/26
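Since the mount trace says access is granted but I/O then stalls, the filer-side export access cache seems worth checking too; on 7-Mode it can be inspected and flushed for this client/path with something like this (syntax from memory, so treat as a sketch):
# check what access the cache currently grants this host for the export
exportfs -c 10.0.249.17 /vol/infra_datastore_1 rw
# flush the cached entry for this export so access is re-evaluated
exportfs -f /vol/infra_datastore_1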
ESXi version is 5.1.0 (build 799733)
Data ONTAP version is 8.1.2
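A next step might be a packet trace on the vmkernel port while a failing "ls" runs, to check whether the large NFS replies are going missing on the wire; roughly (capture path and filter are just an example):
# capture NFS traffic between this host and the filer during a failing listing
# (tcpdump-uw can truncate jumbo frames, but the traffic pattern should still show)
tcpdump-uw -i vmk2 -s 0 -w /tmp/nfs-vmk2.pcap host 10.0.249.10 and port 2049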
Any ideas?