VMware Communities: Message List - vSphere™ Storage

Re: Help understanding reclamation for thin provisioning


Space reclamation will be needed in the following situations:

 

1. You have a thin-provisioned LUN from the storage array and have deleted a few VMs or performed a Storage vMotion.

 

2. The ESXi host does not perform space reclamation automatically, because it would be an overhead for hostd and the VMkernel.

 

3. If you have the VAAI feature, reclamation can happen automatically because the host offloads the task to the storage array.
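
If automatic reclamation is not in play, unused blocks on a VMFS datastore can also be reclaimed manually with esxcli. A minimal sketch, assuming a hypothetical datastore named Datastore01 (the reclaim-unit value is only an example):

esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200

This issues UNMAP to the array in batches of the given number of VMFS blocks and can be run from any host that has the datastore mounted.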


bnxtnet HWRM Hardware Resource Manager errors, ESXi 6.5


Seeing an issue attempting to connect to Dell SC storage via BCM57402 NICs using bnxtnet driver version 20.6.302.0-1OEM.650.0.0.4598673 on FW version 20.06.04.03 (boot code 20.06.77).

 

Dell branded BCM57402 adapters:

VID: 14e4

DID: 16d0

SVID: 14e4

SSID: 4020

 

Using the NICs in conjunction with the software iSCSI adapter: 1 subnet, port binding enabled, 2 vmkernel adapters.
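
For reference, the port-binding setup can be double-checked from the host with standard esxcli commands; a brief sketch (output format varies by build):

esxcli iscsi adapter list

esxcli iscsi networkportal list

The second command lists the vmkernel ports bound to the software iSCSI adapter along with related details.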

 

Disabled TSO and LRO; however, the issue remains. Originally I was using driver version bnxtnet 20.2.16.0, then updated to 20.6.34.0 to align with the HCL for FW version 20.06.77, but the issue remained. Updated the driver to 20.6.302.0 and the issue still remains. The vmkernel logs, with bnxtnet debug logging enabled, show:

 

2017-08-21T15:18:23.037Z cpu70:65725)WARNING: bnxtnet: hwrm_send_msg:201: [vmnic9 : 0x410029a96000] HWRM cmd error, cmd_type 0x90(HWRM_CFA_L2_FILTER_ALLOC) error 0x4(RESOURCE_ALLOC_ERROR) seq 2393

2017-08-21T15:18:23.037Z cpu70:65725)bnxtnet: bnxtnet_uplink_stop_rxq: 358 : [vmnic9 : 0x410029a96000] RXQ 1 stopped

2017-08-21T15:18:23.037Z cpu70:65725)bnxtnet: rxq_quiesce: 391 : [vmnic9 : 0x410029a96000] host stop rxq 1

2017-08-21T15:18:23.037Z cpu70:65725)bnxtnet: uplink_rxq_free: 639 : [vmnic9 : 0x410029a96000] uplink request to free rxq 1

2017-08-21T15:18:23.037Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic9 : 0x410029a96000] HWRM send cmd (type: 0x41(HWRM_VNIC_FREE) seq 2394)

2017-08-21T15:18:23.038Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic9 : 0x410029a96000] HWRM completed cmd (type: 0x41(HWRM_VNIC_FREE) seq 2394)

2017-08-21T15:18:23.038Z cpu70:65725)bnxtnet: bnxtnet_rxq_free: 1140 : [vmnic9 : 0x410029a96000] attempt to free rxq 1

2017-08-21T15:18:23.038Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic9 : 0x410029a96000] HWRM send cmd (type: 0x51(HWRM_RING_FREE) seq 2395)

2017-08-21T15:18:23.038Z cpu54:66284)bnxtnet: bnxtnet_process_cmd_cmpl: 2126 : [vmnic9 : 0x410029a96000] HWRM cmd (type 0x20 seq 2395) completed

2017-08-21T15:18:23.038Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic9 : 0x410029a96000] HWRM completed cmd (type: 0x51(HWRM_RING_FREE) seq 2395)

2017-08-21T15:18:23.039Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic9 : 0x410029a96000] HWRM send cmd (type: 0x61(HWRM_RING_GRP_FREE) seq 2396)

2017-08-21T15:18:23.039Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic9 : 0x410029a96000] HWRM completed cmd (type: 0x61(HWRM_RING_GRP_FREE) seq 2396)

2017-08-21T15:18:23.039Z cpu70:65725)bnxtnet: bnxtnet_rxq_free: 1192 : [vmnic9 : 0x410029a96000] freed rxq 1 successfully

2017-08-21T15:18:23.759Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic8 : 0x4100299fa000] HWRM send cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 40627)

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic8 : 0x4100299fa000] HWRM completed cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 40627)

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: bnxtnet_priv_stats_get_len: 2410 : [vmnic8 : 0x4100299fa000] driver private stats size: 40384

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: bnxtnet_priv_stats_get: 2449 : [vmnic8 : 0x4100299fa000] requested stat buf size is 40385

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic11 : 0x41000f07a000] HWRM send cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 36420)

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic11 : 0x41000f07a000] HWRM completed cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 36420)

2017-08-21T15:18:23.761Z cpu70:65725)bnxtnet: bnxtnet_priv_stats_get_len: 2410 : [vmnic11 : 0x41000f07a000] driver private stats size: 40384

2017-08-21T15:18:23.761Z cpu70:65725)bnxtnet: bnxtnet_priv_stats_get: 2449 : [vmnic11 : 0x41000f07a000] requested stat buf size is 40385

2017-08-21T15:18:23.972Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic8 : 0x4100299fa000] HWRM send cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 40629)

2017-08-21T15:18:23.972Z cpu65:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic8 : 0x4100299fa000] HWRM completed cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 40629)

2017-08-21T15:18:23.972Z cpu65:65725)bnxtnet: bnxtnet_priv_stats_get_len: 2410 : [vmnic8 : 0x4100299fa000] driver private stats size: 40384

2017-08-21T15:18:23.972Z cpu65:65725)bnxtnet: bnxtnet_priv_stats_get: 2449 : [vmnic8 : 0x4100299fa000] requested stat buf size is 40385

2017-08-21T15:18:23.972Z cpu65:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic11 : 0x41000f07a000] HWRM send cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 36422)

2017-08-21T15:18:23.973Z cpu65:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic11 : 0x41000f07a000] HWRM completed cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 36422)

2017-08-21T15:18:23.973Z cpu65:65725)bnxtnet: bnxtnet_priv_stats_get_len: 2410 : [vmnic11 : 0x41000f07a000] driver private stats size: 40384

2017-08-21T15:18:23.973Z cpu65:65725)bnxtnet: bnxtnet_priv_stats_get: 2449 : [vmnic11 : 0x41000f07a000] requested stat buf size is 40385

 

At this point I am just curious if anyone has run into this, particularly the HWRM_CFA_L2_FILTER_ALLOC errors. Per https://reviews.freebsd.org/file/data/jhjnayad4lkbrjvsspn3/PHID-FILE-lbmmlmi2bjqh54xmpwsd/D6555.id16878.diff the issue appears to be possibly related to the NIC firmware; however, Dell currently only has version 20.06.04.03 (boot code 20.06.77) available. Excerpt from freebsd.org indicating that HWRM is firmware related:

 

The Hardware Resource Manager (HWRM) manages various hardware resources inside the chip. The HWRM is implemented in firmware, and runs on embedded processors inside the chip.

 

Connectivity is established; however, I am seeing storage-related issues on the array and in the guest. Any advice is welcome, and thank you in advance.
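
For anyone comparing configurations, the driver and firmware versions in use can be confirmed per NIC; a short sketch (vmnic9 is just the example NIC from the logs above):

esxcli network nic get -n vmnic9

esxcli software vib list | grep bnxtnet

The first command reports the driver name, driver version and firmware version for the NIC; the second shows the installed bnxtnet driver VIB.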

SCSI LUN id query


In the past, issues were observed on ESXi hosts due to LUN ID mismatches: when the LUN ID is not uniform across hosts in the cluster, it causes problems with RDM vMotion and multipathing.

 

If a device is presented to a set of hosts where, for example, a few hosts see the device as LUN 1 and other hosts see it as LUN 2, then problems arise. So we have to make the presentation uniform so that all hosts see that particular device with the same LUN number.

 

My query now is: I have two storage boxes, and each storage array starts numbering its LUNs from LUN 0. If I present a LUN from storage1 to the cluster as LUN 0 and present a new LUN from the second storage array also as LUN 0, we then have two devices with LUN 0 from two different storage boxes. Is this supported?

 

What would be the implications on the host of having multiple devices with the same LUN number?
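
For comparing what each host sees, a hedged sketch of a standard command whose output includes both the LUN number of each path and the device's NAA identifier:

esxcli storage core path list

Running this on each host in the cluster makes it easy to spot a device that is presented with different LUN numbers to different hosts.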


Live migrate VM between VVOL datastores is slow


I performed a live migration of a VM between VVol datastores, and it is very slow. I checked the logs and found that it calls createNewVirtualVolume on the destination datastore. Then it issues a VAAI XCOPY, which fails. Then it calls a VASA method to copy the data. I think the reason the migration is slow is this VAAI attempt. But I am confused, because blogs say:

"

XCOPY – With VVOLs, ESX will always try to use array based VVOLs copy mechanism defined using copyDiffsToVirtualVolume or CloneVirtualVolume primitives. If these are not supported, it will fall back to software copy.

"

My question is: why didn't it issue the copyDiffsToVirtualVolume or CloneVirtualVolume primitives? Does live storage migration not support them?
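
Not an answer, but a hedged sketch of commands that can at least confirm the VASA provider and protocol endpoints look healthy on the host before digging into which primitives get used:

esxcli storage vvol vasaprovider list

esxcli storage vvol protocolendpoint list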

Re: iSCSI and fibre from different ESXi hosts to the same datastores


Hi DanielVaknin,

we are in the same situation; did you get any answer from VMware or EMC? Is it supported?

Thanks

 

Maurizio

Re: iSCSI and fibre from different ESXi hosts to the same datastores


Hi VMmao,

 

This is the response I got from VMware:

"Yes, you can setup as you expected for migration purpose. That KB is correct as it is only for one ESXi host i.e; you can not expose one lun to one host using 2 protocols. However, in your case the FC hosts are different from existing iSCSI hosts. Just make sure you do not use this setup in production environment.

 

Also, I would like to inform that you may not get the "as expected" performance from the hosts. But, you can vMotion the VMs off the existing hosts to the new DC hosts. "

Re: iSCSI and fibre from different ESXi hosts to the same datastores


Great, thank you very much!

Bye

 

Maurizio


Re: bnxtnet HWRM Hardware Resource Manager errors, ESXi 6.5


Hi

 

I see the same Warnings.

Do you only see the warnings, or do you have other problems too?

 

I sometimes see the link go down and then come back up a few seconds later.

I have also seen the link up but no traffic passing to the iSCSI array, and a ping to the array's iSCSI IP was not possible.
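
A small hedged sketch for testing reachability from a bound iSCSI vmkernel interface (vmk1 and the target IP are placeholders):

esxcli network nic list

vmkping -I vmk1 192.168.10.10

The first command shows the current link state of each vmnic; the second pings the array from the specified vmkernel interface.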

 

Bye

Re: Can not deploy from template -- error caused by file


I was using iSCSI storage and had to click "Rescan All" under Configuration -> Storage in the vSphere host configuration screen, and it works fine now.
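
For reference, the same rescan can also be triggered from the command line; a brief sketch:

esxcli storage core adapter rescan --all

vmkfstools -V

The first command rescans all storage adapters for new devices; the second refreshes the VMFS volumes known to the host.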

Re: bnxtnet HWRM Hardware Resource Manager errors, ESXi 6.5


Hi,

 

 

Definitely looks like those NICs are not happy with that driver/firmware combo.

The VMware VCG/HCL indicates that this firmware has not yet been tested and certified:

https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=41156&vcl=true

The latest tested FW is 20.06.05.06.

Maybe try a rollback and test the officially tested combo of driver bnxtnet 20.6.302.0 and FW 20.06.05.06.

"Dell Lifecycle Controller - Firmware Rollback" looks like it will do the trick:

Video + Documentation from Dell below:

https://youtu.be/Fc5PKr6quJQ

http://www.dell.com/support/manuals/ie/en/iebsdt1/poweredge-r730xd/lc_2.10.10.10_ug/configuring-lifecycle-controller-network-settings-?guid=guid-0e55ea43-a10b-4390-851e-f48df520d06a&lang=en-us

 

 

Failing that, I suggest reaching out to Dell for tested configurations and recommendations for known-good driver/firmware combos for 6.5.

Re: SCSI LUN id query


This should not cause any issues, as the host identifies the LUN by its UUID, which is a unique and persistent NAA ID.
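
A short sketch showing where that identifier can be seen; the NAA ID listed here is the same on every host the device is presented to, regardless of the LUN number:

esxcli storage core device list

Each device entry is keyed by its naa.* identifier, which is what vSphere uses to recognise the device.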

Re: SCSI LUN id query


Thanks Aishwarya, I got the answer from VMware via SR.

Re: Synology Rackstation as iSCSI using ESXI 6.5


Can you help me as well? My VMware server lost its connection to my Synology LUN after upgrading to version 6.5, and I can't connect to the volume again. I can see the Synology in devices, but not in datastores.
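
Not a definitive fix, but a hedged sketch of commands that can show whether the host still sees the device and whether the VMFS volume is being detected as an unresolved snapshot after the upgrade:

esxcli storage core adapter rescan --all

esxcli storage vmfs snapshot list

If the volume appears in the snapshot list, it is being treated as a copy and will not be mounted as a datastore automatically.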

Re: vsantraces folder in datastore


I'm having this issue on vSphere 6.0 Update 3, so it's not solved in any of the updates mentioned in the KB.

 

I think our hosts never ran 5.5, but I'm not entirely sure about this. I am, however, sure that this problem prevents me from decommissioning our old iSCSI datastores. Let me know if you have any tips; I have run the recommended esxcli commands and still need to see whether a reboot fixes this (the KB didn't mention anything about a reboot).



Re: vsantraces folder in datastore


Ok, so the _correct_ command is this:

esxcli vsan trace set -p /vmfs/volumes/<name>/<path>

 

And make sure that the entire folder structure exists; after that you can remove the datastore.
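
A brief usage sketch, assuming a hypothetical target datastore named NewDatastore (create the directory first, then redirect the traces and verify the setting):

mkdir -p /vmfs/volumes/NewDatastore/vsantraces

esxcli vsan trace set -p /vmfs/volumes/NewDatastore/vsantraces

esxcli vsan trace get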

Re: Help understanding reclamation for thin provisioning


Hi,

Starting with vSphere 6.5, these reclamation tasks are automated. With virtual hardware version 11, the guest OS can send UNMAP commands through the new SCSI adapter, and if the guest OS supports UNMAP it will work.

There is also a GUI option to unmap unused space on the datastore, and you can set how frequently this unmap should run.

 

I hope the following article helps you to get more details on this:

 

https://storagehub.vmware.com/#!/vsphere-storage/vsphere-6-5-storage/unmap
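
A hedged sketch of the corresponding esxcli settings for automatic space reclamation on a VMFS 6 datastore (Datastore01 is a placeholder name):

esxcli storage vmfs reclaim config get --volume-label=Datastore01

esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low

The get command shows the current reclamation granularity and priority; the set command adjusts the priority.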

Datastore Sizing on NFS


I'm using VMware on NetApp NFS all-flash FAS, where there is an option to autogrow volumes instead of creating a new volume and VMware datastore when I run out of space.

 

Traditionally I've been using a standard 4 TB datastore size.

 

Is it better to keep a standard datastore size of 4 TB and create new volumes when I run out of space, or should I use autogrow and let volumes become different sizes as autogrow grows them?

Re: VOMA - LVM Major or Minor version Mismatch, Failed to Initialize LVM Metadata


Hello,

 

Does this datastore consist of multiple extents?

 

esxcli storage vmfs extent list

 

Regards,

Mario
