Channel: VMware Communities: Message List - vSphere™ Storage

Re: Unable to create virtual machines in new datastore--frustrated beyond belief! :/


Thanks for pitching in!

 

I had put that project aside since it was driving me nuts, so I just started it all back up to give it another shot.

I got the SSD back in there, and I will attempt the same stuff while I do a tail -f /var/log/vmkernel.log to see if something shines a light on it.

I looked in the log when the ESXi box came up, but it was so full of boot-time chatter that it was a mess to grep.
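For anyone following along, something like this should cut out most of the boot noise (assuming the datastore is still named NewDataStore; adjust the pattern as needed):

tail -f /var/log/vmkernel.log | grep -iE 'NewDataStore|warning|error'

grep -iE 'NewDataStore|vmfs' /var/log/vmkernel.log | tail -n 100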

 

I appreciate your help. I will get back to ya sometime before the end of the day.

Gotta finish the "real work" for today before I dive into that one

Cheers!


Re: Unable to create virtual machines in new datastore--frustrated beyond belief! :/


So, I decided to update the ESXi box to the latest v6.7.x release, but then had some issues with bad VIBs and stuff.

I decided it was just a good time to re-install the latest and greatest v6.7.x on a different SSD, and after installation and configuration of ESXi, I am back up.

 

Now, I figured that with a brand-new install of ESXi 6.7.x running on a brand-new SSD boot device, I would see a different result with the silly datastore that doesn't want to accept virtual machines. Right? Nope. No joy.

 

At first I just went ahead and tried to create a new VM in the datastore, and hit the same issues. I tailed the vmkernel.log for answers and saw something odd.

Next, I decided to blow away that datastore and re-create it under this new ESXi install--it wouldn't usually matter, but at this point, what the hell, right?

So, I formatted the SSD with VMFS-6 and tried again. No joy.
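For reference, this is roughly the CLI equivalent of what I did through the host UI (naa.50025388500adb61 is the SSD you'll see in the log below -- double-check the partition layout with partedUtil before pointing vmkfstools at it):

partedUtil getptbl /vmfs/devices/disks/naa.50025388500adb61

vmkfstools -C vmfs6 -S NewDSTest /vmfs/devices/disks/naa.50025388500adb61:1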

 

Saw the same issue in the vmkernel.log -- and while I don't understand why I am seeing that error (Google was not much help) -- at least we are getting somewhere. I have NO IDEA why it is saying something about NFS and file locking…it's a SATA-connected SSD!

Here is a cut and paste of the log, from the point I deleted the datastore (old name NewDataStore), then re-created it (new name NewDSTest), and then tried to create a VM in said datastore. The important bits are the "WARNING: NFS: 1226: Invalid volume UUID" lines and the "Optimistic lock acquired by another host" retry on Fil3_CreateFile near the end.

 

Let me know your thoughts, and thanks again for checking in!

 

 

 

2020-05-19T01:01:11.674Z cpu8:2099725 opID=9d3a37b3)World: 11943: VC opID esxui-4af7-b6e9 maps to vmkernel opID 9d3a37b3

2020-05-19T01:01:11.674Z cpu8:2099725 opID=9d3a37b3)NVDManagement: 1461: No nvdimms found on the system

2020-05-19T01:01:29.745Z cpu9:2099185 opID=7fe279ef)World: 11943: VC opID esxui-1420-b6f0 maps to vmkernel opID 7fe279ef

2020-05-19T01:01:29.745Z cpu9:2099185 opID=7fe279ef)LVM: 16781: File system '[NewDataStore, 5ea9ef7c-23eb5578-3738-000af7a1d9e1]' (LV 5ea9ef7c-0f448c28-45ae-000af7a1d9e1) un-mounted.

2020-05-19T01:02:26.334Z cpu2:2099217 opID=e2f330e7)World: 11943: VC opID esxui-61e3-b70b maps to vmkernel opID e2f330e7

2020-05-19T01:02:26.334Z cpu2:2099217 opID=e2f330e7)LVM: 4107: [naa.50025388500adb61:1] Device expanded (actual size 1465143297 blocks, stored size 1465109886 blocks)

2020-05-19T01:02:26.347Z cpu2:2099217 opID=e2f330e7)LVM: 4107: [naa.50025388500adb61:1] Device expanded (actual size 1465143297 blocks, stored size 1465109886 blocks)

2020-05-19T01:02:26.386Z cpu2:2099217 opID=e2f330e7)LVM: 4107: [naa.50025388500adb61:1] Device expanded (actual size 1465143297 blocks, stored size 1465109886 blocks)

2020-05-19T01:02:26.409Z cpu2:2099217 opID=e2f330e7)LVM: 4107: [naa.50025388500adb61:1] Device expanded (actual size 1465143297 blocks, stored size 1465109886 blocks)

2020-05-19T01:02:26.428Z cpu2:2099217 opID=e2f330e7)LVM: 4107: [naa.50025388500adb61:1] Device expanded (actual size 1465143297 blocks, stored size 1465109886 blocks)

2020-05-19T01:02:26.428Z cpu2:2099217 opID=e2f330e7)LVM: 10366: Device naa.50025388500adb61:1 doesn't support TRIM

2020-05-19T01:02:26.523Z cpu2:2099217 opID=e2f330e7)LVM: 10432: Initialized naa.50025388500adb61:1, devID 5ec33022-d8e789e8-dc45-000af7a1d9e1

2020-05-19T01:02:26.568Z cpu2:2099217 opID=e2f330e7)LVM: 13557: Deleting device <naa.50025388500adb61:1>dev OpenCount: 0, postRescan: False

2020-05-19T01:02:26.588Z cpu2:2099217 opID=e2f330e7)LVM: 10526: Zero volumeSize specified: using available space (750135542272).

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)FS3: 183: <START lfb>

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)signature 72666d64, version 1, flags 35, childMetaOff 4096, Bits/R 3, Aff/RC 840, Aff/R 52

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)1397 resources, each of size 8192

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)Organized as 11 CGs, 8 C/CG and 16 R/C

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)CGsize 1114112. 0th CG at 65536.

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)FS3: 185: <END lfb>

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)FS3: 183: <START sfb>

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)signature 72666d64, version 1, flags 2a, childMetaOff 0, Bits/R 2, Aff/RC 216, Aff/R 1

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)715264 resources, each of size 0

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)Organized as 175 CGs, 8 C/CG and 512 R/C

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)CGsize 65536. 0th CG at 65536.

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)child with 1114112 parent CGsize, 8 parent C/CG and 16 parent R/C

2020-05-19T01:02:26.692Z cpu2:2099217 opID=e2f330e7)FS3: 185: <END sfb>

2020-05-19T01:02:26.795Z cpu2:2099217 opID=e2f330e7)FS3: 183: <START fdc>

2020-05-19T01:02:26.795Z cpu2:2099217 opID=e2f330e7)signature 72666d64, version 1, flags 4, childMetaOff 0, Bits/R 1, Aff/RC 256, Aff/R 1

2020-05-19T01:02:26.795Z cpu2:2099217 opID=e2f330e7)16384 resources, each of size 8192

2020-05-19T01:02:26.795Z cpu2:2099217 opID=e2f330e7)Organized as 8 CGs, 8 C/CG and 256 R/C

2020-05-19T01:02:26.795Z cpu2:2099217 opID=e2f330e7)CGsize 16842752. 0th CG at 65536.

2020-05-19T01:02:26.795Z cpu2:2099217 opID=e2f330e7)FS3: 185: <END fdc>

2020-05-19T01:02:26.798Z cpu2:2099217 opID=e2f330e7)FS3: 183: <START pbc>

2020-05-19T01:02:26.798Z cpu2:2099217 opID=e2f330e7)signature 72666d64, version 1, flags 4, childMetaOff 0, Bits/R 1, Aff/RC 0, Aff/R 1

2020-05-19T01:02:26.798Z cpu2:2099217 opID=e2f330e7)0 resources, each of size 65536

2020-05-19T01:02:26.798Z cpu2:2099217 opID=e2f330e7)Organized as 0 CGs, 8 C/CG and 0 R/C

2020-05-19T01:02:26.798Z cpu2:2099217 opID=e2f330e7)CGsize 65536. 0th CG at 65536.

2020-05-19T01:02:26.798Z cpu2:2099217 opID=e2f330e7)FS3: 185: <END pbc>

2020-05-19T01:02:26.799Z cpu2:2099217 opID=e2f330e7)FS3: 183: <START sbc>

2020-05-19T01:02:26.799Z cpu2:2099217 opID=e2f330e7)signature 72666d64, version 1, flags 4, childMetaOff 0, Bits/R 1, Aff/RC 256, Aff/R 1

2020-05-19T01:02:26.799Z cpu2:2099217 opID=e2f330e7)5116 resources, each of size 65536

2020-05-19T01:02:26.799Z cpu2:2099217 opID=e2f330e7)Organized as 3 CGs, 8 C/CG and 256 R/C

2020-05-19T01:02:26.799Z cpu2:2099217 opID=e2f330e7)CGsize 134283264. 0th CG at 65536.

2020-05-19T01:02:26.799Z cpu2:2099217 opID=e2f330e7)FS3: 185: <END sbc>

2020-05-19T01:02:26.803Z cpu2:2099217 opID=e2f330e7)FS3: 183: <START pb2>

2020-05-19T01:02:26.803Z cpu2:2099217 opID=e2f330e7)signature 72666d64, version 1, flags 4, childMetaOff 0, Bits/R 1, Aff/RC 32, Aff/R 1

2020-05-19T01:02:26.803Z cpu2:2099217 opID=e2f330e7)256 resources, each of size 65536

2020-05-19T01:02:26.803Z cpu2:2099217 opID=e2f330e7)Organized as 1 CGs, 8 C/CG and 32 R/C

2020-05-19T01:02:26.803Z cpu2:2099217 opID=e2f330e7)CGsize 16842752. 0th CG at 65536.

2020-05-19T01:02:26.803Z cpu2:2099217 opID=e2f330e7)FS3: 185: <END pb2>

2020-05-19T01:02:26.805Z cpu2:2099217 opID=e2f330e7)Res3: 10702: SDDir: type: 0x5, fileLength: 0x11000, numBlocks: 1

2020-05-19T01:02:26.805Z cpu2:2099217 opID=e2f330e7)FS3: 183: <START jbc>

2020-05-19T01:02:26.805Z cpu2:2099217 opID=e2f330e7)signature 72666d64, version 1, flags 0, childMetaOff 0, Bits/R 1, Aff/RC 8, Aff/R 1

2020-05-19T01:02:26.805Z cpu2:2099217 opID=e2f330e7)128 resources, each of size 2097152

2020-05-19T01:02:26.805Z cpu2:2099217 opID=e2f330e7)Organized as 4 CGs, 4 C/CG and 8 R/C

2020-05-19T01:02:26.805Z cpu2:2099217 opID=e2f330e7)CGsize 67141632. 0th CG at 65536.

2020-05-19T01:02:26.805Z cpu2:2099217 opID=e2f330e7)FS3: 185: <END jbc>

2020-05-19T01:02:26.816Z cpu2:2099217 opID=e2f330e7)Vol3: 1684: Created VMFS-6.82 with config 0x6 on vol 'NewDSTest'

2020-05-19T01:02:26.862Z cpu2:2099217 opID=e2f330e7)WARNING: NFS: 1226: Invalid volume UUID 5ea9ef7c-0f448c28-45ae-000af7a1d9e1

2020-05-19T01:02:26.862Z cpu2:2099217 opID=e2f330e7)Vol3: 1299: Could not open device '5ea9ef7c-0f448c28-45ae-000af7a1d9e1' for volume open: No such target on adapter

2020-05-19T01:02:26.862Z cpu2:2099217 opID=e2f330e7)Vol3: 1299: Could not open device '5ea9ef7c-0f448c28-45ae-000af7a1d9e1' for volume open: No such target on adapter

2020-05-19T01:02:26.862Z cpu2:2099217 opID=e2f330e7)Vol3: 1299: Could not open device '5ea9ef7c-0f448c28-45ae-000af7a1d9e1' for volume open: No such target on adapter

2020-05-19T01:02:26.863Z cpu2:2099217 opID=e2f330e7)Vol3: 1299: Could not open device '5ea9ef7c-0f448c28-45ae-000af7a1d9e1' for volume open: No such target on adapter

2020-05-19T01:02:26.863Z cpu2:2099217 opID=e2f330e7)FSS: 6092: No FS driver claimed device '5ea9ef7c-0f448c28-45ae-000af7a1d9e1': No filesystem on the device

2020-05-19T01:02:26.917Z cpu2:2099217 opID=e2f330e7)WARNING: NFS: 1226: Invalid volume UUID 5ec33022-c8bfc8c0-df11-000af7a1d9e1

2020-05-19T01:02:26.963Z cpu2:2099217 opID=e2f330e7)LVM: 16770: File system '[NewDSTest, 5ec33022-e96e09c8-64ca-000af7a1d9e1]' (LV 5ec33022-c8bfc8c0-df11-000af7a1d9e1) mounted in 'rw' mode.

2020-05-19T01:02:27.350Z cpu21:2100355)WARNING: NFS: 1226: Invalid volume UUID naa.50025388500adb62:3

2020-05-19T01:02:27.405Z cpu21:2100355)FSS: 6092: No FS driver claimed device 'naa.50025388500adb62:3': No filesystem on the device

2020-05-19T01:02:27.475Z cpu18:2099218 opID=8ac2d861)World: 11943: VC opID esxui-6f03-b719 maps to vmkernel opID 8ac2d861

2020-05-19T01:02:27.475Z cpu18:2099218 opID=8ac2d861)VC: 4616: Device rescan time 214 msec (total number of devices 9)

2020-05-19T01:02:27.475Z cpu18:2099218 opID=8ac2d861)VC: 4619: Filesystem probe time 167 msec (devices probed 8 of 9)

2020-05-19T01:02:27.475Z cpu18:2099218 opID=8ac2d861)VC: 4621: Refresh open volume time 0 msec

2020-05-19T01:02:29.581Z cpu2:2097973)LVM: 16789: One or more LVM devices have been discovered.

2020-05-19T01:04:00.715Z cpu20:2099220 opID=d39e8bf9)World: 11943: VC opID esxui-a5e5-b766 maps to vmkernel opID d39e8bf9

2020-05-19T01:04:00.715Z cpu20:2099220 opID=d39e8bf9)Fil3: 8608: Retry 10 for caller Fil3_CreateFile (status 'Optimistic lock acquired by another host')

2020-05-19T01:05:36.987Z cpu8:2097931)DVFilter: 6054: Checking disconnected filters for timeouts

Re: Unable to remove datastore

Re: Unable to create virtual machines in new datastore--frustrated beyond belief! :/


For those playing along at home--and those who find this in the next decade of Internet decay--the only way to get this to work was using VMFS5.

Why? That is a question to ponder another day. I have no idea. The same SSD had been used as a VMFS-6 datastore in an earlier installation of ESXi 6.7.x -- maybe something changed somewhere on the way to Update 3…who knows.
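If anyone wants the CLI version of the workaround, reformatting the same partition as VMFS-5 looks roughly like this (same SSD device as in the vmkernel.log I posted earlier; double-check the partition number first):

vmkfstools -C vmfs5 -S NewDSTest /vmfs/devices/disks/naa.50025388500adb61:1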

 

Cheers!

Re: Locked files - VMFS 6


Hi Ulli,

 

Can you explain how to extract the VM with Linux?

 

Best Regards,
Thiago

Re: Locked files - VMFS 6


Easiest option:

Connect to the datastore with sshfs in read-only mode, then use ddrescue against the flat.vmdks.

If that does not work, see if you can get the location of the fragments with vmkfstools -p 0 flat.vmdk.

If that does not work, try to get the location of the fragments by analysing the VMFS metadata.

If that does not work - find the first fragment with scalpel and hope that the flat.vmdks are allocated in one piece
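A rough sketch of the first option, run from a Linux box (hostname, datastore and VM names are placeholders; you need sshfs and GNU ddrescue installed):

mkdir -p /mnt/esx

sshfs -o ro root@esxi-host:/vmfs/volumes/datastore1 /mnt/esx

ddrescue -v /mnt/esx/myvm/myvm-flat.vmdk /recovery/myvm-flat.vmdk /recovery/myvm.map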

iSCSI vs NFS


Hi

 

VMware has not released a new version of this paper since 2012 - https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/storage_protocol_comparison-white-paper.pdf 

 

I went through some old posts and got the sense from all of them that NFS is better, for the reasons below:

 

1. Easy Setup

2. Easy to expand

3. UNMAP is an advantage on iSCSI (see the example command after this list).

4. VMFS is quite fragile if you use thin-provisioned VMDKs. A single power failure can render a VMFS volume unrecoverable.

5. NFS datastores immediately show the benefits of storage efficiency (deduplication, compression, thin provisioning) from both the NetApp and vSphere perspectives.

6. NetApp specific: The NetApp NFS Plug-In for VMware is a plug-in for ESXi hosts that allows them to use VAAI features with NFS datastores on ONTAP.

7. NetApp specific: NFS has autogrow.

8. When using NFS datastores, space is reclaimed immediately when a VM is deleted

9. Performance is almost identical
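Regarding point 3: on a block/iSCSI datastore with VMFS-5 you reclaim the space manually, roughly like this (the datastore name is a placeholder; VMFS-6 runs unmap automatically in the background):

esxcli storage vmfs unmap --volume-label=MyiSCSIDatastore --reclaim-unit=200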

 

Please let me know if I missed anything, and share your comments.

 

 

 

Thanks

Re: iSCSI vs NFS


Moderator: Thread moved to the vSphere Storage area.


Not able to increase LUN


Hi,

 

I have this strange situation and I hope someone can help.

 

I have expanded a LUN and rescanned the storage. The adapter picks up the new size, but when I try to increase the datastore I get empty.

 

 

I don't know where to start looking to troubleshoot this issue.

 

Thanks a bunch

Re: Not able to increase LUN


... when I try to increase the datastore I get empty

Please post some screenshots, which may help to better understand the exact issue.

 

André

Re: Not able to increase LUN


Hi,

 

Are you trying to expand the datastore from the vCenter web client?

If yes, then please try connecting to the host UI (the ESXi host's own web client) and attempt the same thing.

 

Additionally, since the ESXi host adapter did detect the new size, you can alternatively try to use growfs to increase the datastore size via the command line.

see KB:VMware Knowledge Base
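If the UI route keeps coming up short, the command-line sequence usually looks roughly like this (the device ID and sector numbers are placeholders -- take the real values from the partedUtil output, and be careful, as resize takes absolute sectors):

partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxx

partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxx

partedUtil resize /vmfs/devices/disks/naa.xxxxxxxx 1 <startSector> <newEndSector>

vmkfstools --growfs /vmfs/devices/disks/naa.xxxxxxxx:1 /vmfs/devices/disks/naa.xxxxxxxx:1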

 

Hope this resolves the issue.

hot-extend shared vVol disk in MSCS


Hello,

 

We are evaluating replacing the RDMs used as shared disks in our MSCS deployments with vVols.

 

We are facing a blocking operational issue with this deployment, as hot-extend of the shared vVol disks does not work. When trying to extend a vVol-based clustered disk, the extension fails:

Operation failed!

Task name: Reconfigure virtual machine

Target: xxx

Status: The disk extend operation failed: The virtual disk requires a feature not supported by this program

 

The only way to grow the disk is to power off both nodes of the cluster! That's not really compatible with the type of services that need failover cluster / MSCS deployments.

 

Searching for more info, we found that hot-extend of shared disks is officially not supported. Even though it's not supported, it works perfectly with RDMs. Is there a way to make it work with vVols, even if unsupported, without powering off both nodes (powering off one node would be acceptable)?

 

Thanks in advance.

Re: Having issues getting vVOLs to work with 3PAR


Alright, dug around some more and here's the deal: the drivers you need on the server side for some models of CNAs in HP servers to support VVOLs with vSphere 6 have not yet been included in the HP custom images that are available for vSphere 6. However, they are included in the stock ESXi image that you can download directly from VMware. So if you use the VMware ESXi image instead, you should be able to use VVOLs with 3PAR with no issues.

Re: Hosts losing access to NFS share

Unable to increase SAN datastore in ESXi 6.7


Hello Everyone,

We have a storage array with only one pool and one volume. Previously it was mapped to a single vSphere 6.5 host (host A) by SAS cable, and only one VM (VM 01) is stored on this storage.

Now we have two 6.7 hosts in one cluster, host B and host C. The storage is now also mapped to the two new hosts by SAS cable as a SAN datastore, and VM 01 was migrated to host C from vCenter.

Now, on the storage side, we increased the capacity of the volume. From host C we can see that the total capacity of this SAN device is correct, but we can't increase the capacity of this SAN datastore: if we select "Add extent", it shows no device with free space; if we select "Expand", the partition capacity does not change.
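For what it's worth, the size host C actually sees and the current partition table can be double-checked from the CLI with something like this (the naa ID below is just a placeholder for our LUN):

esxcli storage core adapter rescan --all

esxcli storage core device list -d naa.xxxxxxxx | grep -i size

partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxx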

 

[Attached screenshot: IncreaseDatastore.jpg]


Re: iSCSI vs NFS


Based on your attached document, there are many other factors that show iSCSI is better than NFS in some cases, like the following list:

1. VMware PSA and its load-balancing features are available for iSCSI, FC & FCoE, not for NFS.

2. iSCSI supports CHAP for authentication, which improves security (quick example at the end of this reply).

3. The Raw Device Mapping (RDM) feature is not supported on NFS, but iSCSI supports it.

4. Boot from SAN is possible via iSCSI, not via NFS.

But regardless of comparing storage protocols, in many situations you can obtain the benefits of both iSCSI and NFS. Ease of implementation and configuration is a good characteristic of NFS, but for most of the advanced features there are many shortcomings, especially in the lower NFS versions.
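For point 2, configuring CHAP on the software iSCSI adapter from the CLI looks roughly like this (adapter name, CHAP name and secret are placeholders; check esxcli iscsi adapter auth chap set --help on your build for the exact options):

esxcli iscsi adapter auth chap set --adapter=vmhba65 --direction=uni --level=required --authname=chap_user --secret=chap_secret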

Mark as Flash Disk?


Hello all, I have a few questions about this feature.

1. What does "Mark as Flash Disk" actually do?

2. Is there an advantage to marking an SSD LUN that is falsely recognized as HDD as a Flash Disk in a non-vSAN cluster? In this case, the LUN is presented from a SAN backed by a pool of physical SSD drives.

3. In a scenario where a presented LUN comes from a tiered SAN pool, i.e. a mix of HDD and SSD drives where the software-defined SAN moves hot/cold blocks up and down to/from SSD and HDD, would it be recommended to mark that LUN as a Flash Disk or not?

Storage vMotion vs LUN resize


vSphere 6.7 (storage vMotion set to manual)

 

I inherited an environment where all the LUNs (NetApp SAN) were sized at 6TB.

 

The environment is growing, at least a few (2-5 of 100) VMs are constantly growing and I find myself resizing their (usually Windows Server 2012 R2) disks twice a year.

 

I have a few VMs that are 4TB, so I'm starting to get storage warnings on a few datastores.

 

I've done a few storage vMotions on smaller VMs and the performance seems pretty terrible: some of my applications actually react to the SAN latency.  Sigh...

 

I guess the answer is obvious?  Do I:

1. Create a few bigger LUNs (10TB) and move those bigger VMs to it?

2. Just resize my existing LUNs to 10TB, and then I don't need to move any disks?

Re: Storage vMotion vs LUN resize


Hi marcoshaw, hope you are doing fine.

 

I would go with option 1 since there is less risk involved.
Are you using thin provisioned disks?

 

Warm regards

Re: Storage vMotion vs LUN resize


Most of my disks are thick, but this is how I inherited things.

 

I'm not sure about #1... When I do an svMotion, my applications are impacted, and moving such a large VM could take a few hours and cause some application issues.

 

I think I've determined that when my SAN has more than 5k write IOPS, the SP/CPU is almost at 100%.  This seems to cause latency accessing the disks, thus my applications begin to react negatively.
