Channel: VMware Communities: Message List - vSphere™ Storage

Cannot reclaim space on VMFS store


Hey, hoping someone can point me in the right direction.

 

I recently took over an ESXi instance, and have run into space issues on the box.

 

Physical Server: HP Proliant DL380p Gen 8

OS: ESXi 6.5 U2 Sept 2018 – Last Pre-Gen9 custom image

License: VMware vSphere 6 Enterprise Plus

 

I have a datastore, which is 2.73TB in size, and is VMFS5. Thin Provisioning is supported.

All the VMs are thin provisioned, but ESXi is not reclaiming space from deleted VMs, deleted files inside the VMs, or deleted snapshots.

 

Checking the disk information:

 

[root@esxi01:~] esxcli storage core device list -d naa.600508b1001caf2ad46535555b3e0206

naa.600508b1001caf2ad46535555b3e0206

   Display Name: Local HP Disk (naa.600508b1001caf2ad46535555b3e0206)

   Has Settable Display Name: true

   Size: 2861511

   Device Type: Direct-Access

   Multipath Plugin: NMP

   Devfs Path: /vmfs/devices/disks/naa.600508b1001caf2ad46535555b3e0206

   Vendor: HP     

   Model: LOGICAL VOLUME 

   Revision: 5.42

   SCSI Level: 5

   Is Pseudo: false

   Status: on

   Is RDM Capable: true

   Is Local: true

   Is Removable: false

   Is SSD: false

   Is VVOL PE: false

   Is Offline: false

   Is Perennially Reserved: false

   Queue Full Sample Size: 0

   Queue Full Threshold: 0

   Thin Provisioning Status: unknown

   Attached Filters:

   VAAI Status: unsupported

   Other UIDs: vml.0200020000600508b1001caf2ad46535555b3e02064c4f47494341

   Is Shared Clusterwide: false

   Is Local SAS Device: true

   Is SAS: true

   Is USB: false

   Is Boot USB Device: false

   Is Boot Device: false

   Device Max Queue Depth: 1024

   No of outstanding IOs with competing worlds: 32

   Drive Type: unknown

   RAID Level: unknown

   Number of Physical Drives: unknown

   Protection Enabled: false

   PI Activated: false

   PI Type: 0

   PI Protection Mask: NO PROTECTION

   Supported Guard Types: NO GUARD SUPPORT

   DIX Enabled: false

   DIX Guard Type: NO GUARD SUPPORT

   Emulated DIX/DIF Enabled: false

 

[root@esxi01:~] esxcli storage core device vaai status get -d naa.600508b1001caf2ad46535555b3e0206

naa.600508b1001caf2ad46535555b3e0206

   VAAI Plugin Name:

   ATS Status: unsupported

   Clone Status: unsupported

   Zero Status: unsupported

   Delete Status: unsupported

 

And I've checked the related configuration options in ESXi:

Key: DataMover.HardwareAcceleratedInit
Description: Enable hardware accelerated VMFS data initialization (requires compliant hardware)
Value: 1 | Default: 1 | Overridden: No

Key: DataMover.HardwareAcceleratedMove
Description: Enable hardware accelerated VMFS data movement (requires compliant hardware)
Value: 1 | Default: 1 | Overridden: No

Key: DataMover.MaxHeapSize
Description: Maximum size of the heap in MB used for data movement
Value: 64 | Default: 64 | Overridden: No

Key: VMFS3.HardwareAcceleratedLocking
Description: Enable hardware accelerated VMFS locking (requires compliant hardware). Please see http://kb.vmware.com/kb/2094604 before disabling this option
Value: 1 | Default: 1 | Overridden: No

 

Trying a manual unmap also doesn't work:

[root@esxi01:~] vmkfstools -y /vmfs/volumes/datastore2/

Volume '/vmfs/volumes/datastore2/' spans device 'naa.600508b1001caf2ad46535555b3e0206:1' that does not support unmap.

Devices backing volume /vmfs/volumes/datastore2/ do not support UNMAP.
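For what it's worth, on ESXi 5.5 and later the vmkfstools -y method is deprecated in favour of "esxcli storage vmfs unmap". It will fail the same way here, since the device reports Delete Status: unsupported, but it is the command to test against VAAI-capable storage. A minimal sketch, where build_unmap_cmd is a hypothetical helper that just assembles the invocation:

```shell
# Sketch only: "esxcli storage vmfs unmap" is the supported reclaim
# command on ESXi 5.5+. build_unmap_cmd is a hypothetical helper that
# assembles the invocation; run the printed command on the ESXi host.
build_unmap_cmd() {
    label="$1"          # datastore label, e.g. datastore2
    blocks="${2:-200}"  # VMFS blocks reclaimed per iteration (-n)
    echo "esxcli storage vmfs unmap -l $label -n $blocks"
}

build_unmap_cmd datastore2
# prints: esxcli storage vmfs unmap -l datastore2 -n 200
```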

 

So really I'm at a loss as to where the problem is. Is it the physical disks, or the LUN?
Is there any way to enable this unmap command without having to rebuild the entire datastore?

 

Can anyone point me in the right direction?

 

Thanks!

 

 

 

Re: Cannot reclaim space on VMFS store


Moderator: Moved to vSphere Storage

Re: Cannot reclaim space on VMFS store


I believe VAAI (specifically the Delete primitive) needs to be supported in order to use UNMAP.

Re: Cannot reclaim space on VMFS store


I have found lots of conflicting information online: some of it says VMFS5 supports this, and some says it does not.

 

I'm going out on a limb and saying it doesn't.

 

I did find another workaround, but it's manual. A quick bash script will make it less tedious.

 

Using vmkfstools -K (punch zero) on the VMDK files of the largest VMs will recover space on the datastore.
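A minimal sketch of that script, assuming the owning VMs are powered off (vmkfstools -K requires it) and that free space inside each guest has already been zeroed (e.g. sdelete -z on Windows), since punch zero only deallocates blocks that read as zero. The punchzero_all helper name and the DRY_RUN convention are my own:

```shell
#!/bin/sh
# Sketch: run "vmkfstools -K" (punch zero) over every thin VMDK on a
# datastore. Assumes the owning VMs are powered off. With dry=1 it only
# prints the commands so they can be reviewed first.
punchzero_all() {
    ds="$1"        # e.g. /vmfs/volumes/datastore2
    dry="${2:-1}"  # 1 = print only, 0 = actually punch zeroes
    find "$ds" -name '*-flat.vmdk' | while read -r flat; do
        # vmkfstools takes the descriptor .vmdk, not the -flat extent
        desc="${flat%-flat.vmdk}.vmdk"
        if [ "$dry" = "1" ]; then
            echo vmkfstools -K "$desc"
        else
            vmkfstools -K "$desc"
        fi
    done
}

# Review first, then run for real (on the ESXi host):
#   punchzero_all /vmfs/volumes/datastore2      # dry run
#   punchzero_all /vmfs/volumes/datastore2 0    # punch zeroes
```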

ESXi 6.5 storage share presentation and vVols


Hi all

I would like to ask you for some official documentation regarding VMware best practices for mounting and presenting volume shares, and also vVols, from a NetApp storage array to an ESXi 6.5 solution designed for a private cloud.

Thank you,

Rafael

Re: ESXi 6.5 storage share presentation and vVols


Hi

My question is really what the best way is to mount and present the volumes from the NetApp (without HCI) to the vSphere datastore, considering that the datastore is on a separate vSphere network.

Thanks


Re: ESXi 6.5 storage share presentation and vVols


Moderator: Moved to vSphere Storage

VMFS Problem


Hi, I have a vSphere 6.5 host with 1 VM. After a crash in the VM, I rebooted the machine, and now my datastore doesn't show up any more. Can someone help me restore the VMFS partition? Here are the logs:

 

vmkernel:

2019-12-20T17:50:35.156Z cpu5:2097881)NMP: nmp_ThrottleLogForDevice:3802: Cmd 0x28 (0x459a40bcac00, 2098547) to dev "naa.5000c500b702bf3b" on path "vmhba0:C0:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Va

2019-12-20T17:50:35.156Z cpu5:2097881)ScsiDeviceIO: 3449: Cmd(0x459a40bcac00) 0x28, CmdSN 0x1 from world 2098547 to dev "naa.5000c500b702bf3b" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x3 0x1

2019-12-20T17:50:35.749Z cpu3:2097419)WARNING: NFS: 1227: Invalid volume UUID mpx.vmhba1:C0:T4:L0

2019-12-20T17:50:35.749Z cpu1:2097419)FSS: 6092: No FS driver claimed device 'mpx.vmhba1:C0:T4:L0': No filesystem on the device

2019-12-20T17:50:35.802Z cpu8:2097412)WARNING: NFS: 1227: Invalid volume UUID naa.5000c500b702bf3b:3

2019-12-20T17:50:35.820Z cpu8:2097412)FSS: 6092: No FS driver claimed device 'naa.5000c500b702bf3b:3': No filesystem on the device

______________________________________________________________________________________________________________________________________________

[root@vmware04:/dev/disks] fdisk -l /vmfs/devices/disks/naa.5000c500b702bf3b

 

*

* The fdisk command is deprecated: fdisk does not handle GPT partitions.  Please use partedUtil

*

 

Found valid GPT with protective MBR; using GPT

 

 

Disk /vmfs/devices/disks/naa.5000c500b702bf3b: 1172123568 sectors, 2981M

Logical sector size: 512

Disk identifier (GUID): cb59f4eb-4a6a-4ff2-8a3b-e0a95a604c22

Partition table holds up to 128 entries

First usable sector is 34, last usable sector is 1172123534

 

 

Number  Start (sector)    End (sector)  Size Name

     1              64            8191 4064K

     2         7086080        15472639 4095M

     3        15472640      1170997214  550G

     4            8224          520191  249M

     5          520224         1032191  249M

     6         1032224         1257471  109M

     7         1257504         1843199  285M

     8         1843200         7086079 2560M

___________________________________________________________________________________________________________________________

[root@vmware04:/dev/disks] partedUtil getptbl /vmfs/devices/disks/naa.5000c500b702bf3b

gpt

72961 255 63 1172123568

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

4 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

5 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

7 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

8 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

2 7086080 15472639 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

3 15472640 1170997214 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

________________________________________________________________________________________________________________________

Mount Point                                        Volume Name  UUID                                 Mounted  Type        Size        Free

-------------------------------------------------  -----------  -----------------------------------  -------  ----  ----------  ----------

/vmfs/volumes/d764097a-908b92d4-180b-7d7633d7d443               d764097a-908b92d4-180b-7d7633d7d443     true  vfat   261853184   106246144

/vmfs/volumes/5da1138f-8c37d4c0-f8a6-842b2b688783               5da1138f-8c37d4c0-f8a6-842b2b688783     true  vfat  4293591040  4273799168

/vmfs/volumes/6132ef79-569f72e4-1515-4428b45bee3e               6132ef79-569f72e4-1515-4428b45bee3e     true  vfat   261853184   261840896

/vmfs/volumes/5da11380-395afa60-2963-842b2b688783               5da11380-395afa60-2963-842b2b688783     true  vfat   299712512   117432320

___________________________________________________________________________________________________________________________

[root@vmware04:/dev/disks] esxcli storage core device smart get -d naa.5000c500b702bf3b

Parameter                     Value              Threshold  Worst

----------------------------  -----------------  ---------  -----

Health Status                 IMPENDING FAILURE  N/A        N/A

Media Wearout Indicator       N/A                N/A        N/A

Write Error Count             1228               N/A        N/A

Read Error Count              20226778           N/A        N/A

Power-on Hours                N/A                N/A        N/A

Power Cycle Count             N/A                N/A        N/A

Reallocated Sector Count      N/A                N/A        N/A

Raw Read Error Rate           N/A                N/A        N/A

Drive Temperature             46                 N/A        N/A

Driver Rated Max Temperature  N/A                N/A        N/A

Write Sectors TOT Count       N/A                N/A        N/A

Read Sectors TOT Count        N/A                N/A        N/A

Initial Bad Block Count       N/A                N/A        N/A

Re: VMFS Problem


I tried this and I receive this error:

 

Running VMFS Checker version 2.1 in default mode

Initializing LVM metadata, Basic Checks will be done

         ERROR: IO failed: Input/output error

         ERROR: IO failed: Input/output error

Initializing LVM metadata..|

LVM magic not found at expected Offset,

It might take long time to search in rest of the disk.

 

 

VMware ESX Question:

Do you want to continue (Y/N)?

 

 

0) _Yes

1) _No

 

 

Select a number from 0-1: 0

 

 

         ERROR: IO failed: Input/output error

         ERROR: Failed to Initialize LVM Metadata

   VOMA failed to check device : IO error

 

 

Total Errors Found:           0

   Kindly Consult VMware Support for further assistance

Re: VMFS Problem


Moderator: Moved to vSphere Storage

Re: VMFS Problem


The disk mounted as read-only; I tried writing something to the VMFS and received these errors. Is there some kind of fsck in the VMware console?
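The closest thing ESXi has to fsck is VOMA, the tool that produced the check output earlier in this thread. A minimal sketch, using the naa.5000c500b702bf3b:3 VMFS partition from the partition table above; note that VOMA in 6.5 is essentially check-only for VMFS, and the datastore must not be in use. voma_check_cmd is a hypothetical helper that just assembles the invocation:

```shell
# Sketch only: VOMA (vSphere On-disk Metadata Analyzer) checks VMFS
# metadata. voma_check_cmd assembles the check-mode invocation for a
# given device:partition; run the printed command on the ESXi host.
voma_check_cmd() {
    dev="$1"   # e.g. naa.5000c500b702bf3b:3 (the vmfs partition)
    echo "voma -m vmfs -f check -d /vmfs/devices/disks/$dev"
}

voma_check_cmd naa.5000c500b702bf3b:3
```

Given the SMART output above (Health Status: IMPENDING FAILURE, large read error count), the I/O errors most likely mean the disk itself is failing, so copy off whatever data you can first.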

Re: VMware Disk Provisioning with SSD Flash Storage


The Exchange Server writes its data, including new items such as email, to the transaction log files before committing them to the information stores. Each mailbox database modification/page is first written to a memory cache, and after a while the Exchange server commits the cache contents to the related information stores (operations like application-aware backups will trigger this too). So for better Exchange server performance, a good practice is to use SSDs as a flash-based datastore that holds only a dedicated VMDK for storing the transaction logs.

SD clone


Hi, I'm trying to upgrade from VMware 5 to VMware 6.7.

My servers boot ESXi from an SD card, and I think it's a good idea to back it up or clone it before the upgrade.

 

I found this article on the Internet: https://www.virten.net/2014/12/clone-esxi-installations-on-sd-cards-or-usb-flash-drives/

 

but when I tried to clone the SD card I got this error:

dd: /dev/disks/mpx.vmhba32:C0:T0:L0: Function not implemented

 

Another way would be to remove the SD card from the server and clone it with Windows 10 software, but I don't know if that is a good way to do it.

 

Could anyone help me understand what is happening with the dd command?

 

Kind regards


Re: SD clone


Hello,

dd reads block by block and writes it to another device.

You can take your card out, put it into a Linux machine and run dd from there, or apparently this free app https://sourceforge.net/projects/win32diskimager/ can do it for you in Windows (not tested).
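A minimal sketch of that Linux-side clone; /dev/sdX and the image file name are placeholders, so verify the actual device with lsblk before running anything:

```shell
# Sketch: image an ESXi SD card from a Linux machine.
# /dev/sdX is a placeholder; check with lsblk which device is the card.
clone_sd() {
    src="$1"   # e.g. /dev/sdX (the whole card, not a partition)
    img="$2"   # output image file
    # conv=noerror,sync keeps going past read errors, padding with zeroes
    dd if="$src" of="$img" bs=1M conv=noerror,sync
}

# Backup:  clone_sd /dev/sdX esxi-sd-backup.img
# Restore: dd if=esxi-sd-backup.img of=/dev/sdX bs=1M
```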

 

Good luck

R. Mitura

Re: SD clone


Maybe not the answer to your question, but did you consider to use another SD card for the new version?

Depending on the complexity of your setup, this might be the best option anyway, and lets ESXi 6.7 create its additional partitions (compared to ESXi 5.x).

This way you don't need to backup/restore the current SD card, and you can always just insert it again if something doesn't work as expected with the new setup.

 

André

Re: SD clone


Hello, thank you for your answers.

 

Of course, I will change to a new SD card, but I'm curious to know why dd returns this error.

 

dd if=/dev/disks/mpx.vmhba32:C0:T0:L0 of=/vmfs/volumes/55893041-69c53340-717b-ecb1d7893c64/Isos/backup_host_26/server_backup_26.img

dd: /dev/disks/mpx.vmhba32:C0:T0:L0: Function not implemented

 

Kind regards

Re: VMFS Problem


If you are really sure the VMFS datastore is read-only, please take a look at this thread in the community
