
Added NFS datastore; really bad throughput during Storage vMotions, then timeout


In my lab, I have two microserver hosts running ESXi/vCenter 5.1. I also have an NFS server based on Windows Server 2012 that presents local storage to both a XenServer environment and the ESXi cluster.
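
For reference, I added the NFS datastore on each ESXi host through the vSphere Client, which should be roughly equivalent to this from the ESXi shell (the server name, export path, and datastore name are just my lab values):

esxcli storage nfs add --host=nfs01.lab.local --share=/nfsexport --volume-name=nfs-ds01   # mount the export as a datastore
esxcli storage nfs list                                                                   # confirm the mount state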

 

I'm running some Storage vMotion (ESXi) and Storage XenMotion (Xen) migrations to the NFS datastores this server is serving out, from both environments.

 

Since it's only a single local drive, I'm not expecting a lot of throughput. I'm seeing about 7-10 MB/s on the Xen side (with no errors), but only about 800-1000 KB/sec from my ESXi cluster, and after the migration sits at 32% for a long time, it times out with an error.
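
(The ESXi numbers are just me eyeballing esxtop from the host's shell while the Storage vMotion runs, along these lines:)

esxtop          # then press 'n' for the network view
# I watch the MbTX/s and MbRX/s columns for the VMkernel port carrying the NFS traffic (vmk0 in my single-NIC lab).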

 

Now, this is a lab, so everything runs over a single interface in both cases (the servers only have one NIC each), but is there anything I should check on the ESXi/vCenter side so the migration doesn't run this slowly and time out?
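
So far I've only looked at the basics from the ESXi shell, with everything still at defaults; is there anything beyond checks like these that I should be doing? (Hostname is just my lab name, and I'm not sure the NFS.MaxQueueDepth option exists on every 5.1 build.)

esxcli storage nfs list                                       # NFS mounts and their mounted/accessible state
vmkping nfs01.lab.local                                       # VMkernel-level connectivity to the NFS server
esxcli network nic list                                       # physical NIC link state and speed
esxcli system settings advanced list -o /NFS/MaxQueueDepth    # current NFS queue depth setting (if present on this build)
esxcli system settings advanced list -o /NFS/HeartbeatTimeout # current NFS heartbeat timeout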

 

I can provide more information as needed.

 

Thanks.

 

Errors include:

 

"Timed out waiting for migration data. "

 

"

A general system

error occurred: The

source detected that

the destination failed

to resume.

View details..."
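
From searching around, that second error can apparently just mean the Storage vMotion switchover step timed out on slow storage, and one suggested workaround is raising the switchover timeout in the VM's .vmx while it is powered off. Is that the right knob here, or would it just mask the underlying slowness? For example:

fsr.maxSwitchoverSeconds = "300"    # default is reportedly 100 seconds; this only lengthens the switchover window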

