VMware Communities: Message List - vSphere™ Storage

Using Windows Storage Server 2012 as iSCSI Target in Production Environment


My searches in this forum haven't yielded discussions on this.  Please forgive me if I missed previous posts.

Would you use Windows Storage Server 2012 R2 as your (only) iSCSI target / shared storage for your 3 hosts in a production system?  We are looking at a new system with 3 Lenovo RD550 servers, each with 16 cores and 128 GB RAM, and 1 Windows Storage Server 2012 R2 box with lots of SSD drives, connecting it all with dual 10 GbE for the iSCSI network.

The cost savings over even an inexpensive EMC array are pretty substantial.  My concern is that Windows servers need care and feeding, while storage systems need to just run (and run, and run, and...).  What do you think?

Thanks


Re: VMFS lock mechanism


Hi there,

 

What vendor and model is your storage?

 

You can set the SATP claim rule if you wish and also the PSP, but it might be worth first reviewing the best practices for your particular array to make sure it's all hooked up to give the best performance and resilience.
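For example, a custom claim rule can be added from the ESXi command line (a minimal sketch; the vendor and model strings are placeholders you would take from esxcli storage core device list, and VMW_PSP_RR is only an example PSP):

esxcli storage nmp satp rule add --satp VMW_SATP_ALUA --vendor "VendorX" --model "ModelY" --psp VMW_PSP_RR --description "custom array rule"

New rules only apply to devices claimed after the rule exists, so a reboot (or an unclaim/reclaim of the paths) is typically needed before existing devices pick it up.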

How can I shrink a thin provisioned vmdk file?


Hi,

 

I want to shrink a thin provisioned vmdk file. Within the affected virtual machine's guest OS there are many GBs free, but the thin provisioned disk has grown to nearly the same size as a thick provisioned disk.

According to VMware KB 2004155, a Storage vMotion to another datastore does not shrink the vmdk file.

I have a vSphere 6 environment with VMFS5 datastores only, so (I think) only a block size of 1 MB is possible. Maybe this KB article is a little bit confusing.
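For reference, the file block size of a datastore can be checked from the host's shell (a sketch; the path is a placeholder):

vmkfstools -Ph /vmfs/volumes/<datastore-name>

The output reports the VMFS version, capacity, and file block size.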

 

Maybe somebody has a good idea how I can shrink a thin provisioned disk?

 

Thanks and Best Regards,

 

Andre

Re: How can I shrink a thin provisioned vmdk file?


Due to the unified block size of 1MB, Storage vMotion will not help. However, the KB article you mentioned also contains the steps to reclaim zeroed blocks from the thin provisioned disk using the vmkfstools command line utility.
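A minimal sketch of that procedure, assuming a Windows guest and an illustrative path: first zero out the free space inside the guest (e.g. with sdelete), then power off the VM and run on the host:

vmkfstools -K /vmfs/volumes/<datastore>/<vmname>/<vmname>.vmdk

The -K (punchzero) option deallocates grains that contain only zeroes, which shrinks the thin provisioned file.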

 

André

Re: How can I shrink a thin provisioned vmdk file?


Hi André,

 

But is there no option to reclaim zeroed blocks from the thin provisioned disk without downtime for the affected VM?

 

Regards,

 

Andre

Re: How can I shrink a thin provisioned vmdk file?


That's unfortunately true. What I could think of - I didn't try this myself yet - is to Storage vMotion the VM twice (if there's sufficient free space on another datastore). In the first step, select thick provisioned as the target format; for migrating the VM back to its original datastore, select thin provisioned.

 

André

Re: How can I shrink a thin provisioned vmdk file?


Hi André,

 

We have already tested that, because my hope was also to shrink the file that way.

But unfortunately nothing happened during either Storage vMotion...

 

Regards,

 

Andre

Re: How can I shrink a thin provisioned vmdk file?


Please don't mind me asking, but did you select the target virtual disk format in the wizard, and did you zero out unused blocks within the guest OS using e.g. sdelete as mentioned in the KB article?

What I am thinking of is:

  1. migrate the VM to another datastore, selecting "Thick Provisioned" as the target format
  2. run sdelete within the guest OS to zero out unused blocks (see the sketch below)
  3. migrate the VM back to the original datastore, selecting "Thin Provisioned" in the wizard
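A sketch of step 2, assuming a Windows guest with Sysinternals sdelete available (the flag differs between sdelete versions; newer builds use -z to zero free space, older ones used -c):

sdelete.exe -z C:

Run it once for each drive letter that lives on the vmdk you want to shrink, then do the second migration.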

 

André


An unmanaged I/O workload is detected on a SIOC-enabled datastore


The below mentioned event got triggered on an ESXi host. Please advise on this:

 

An unmanaged I/O workload is detected on a SIOC-enabled datastore

Re: An unmanaged I/O workload is detected on a SIOC-enabled datastore


Moderator note: Technical question, so moved to a relevant forum area - vSphere Storage

Re: VMFS lock mechanism


Dear martinriley,

 

The storage is a low-end, shared two-node iSCSI array provided by Infotrend; it does not support ALUA, only a floating VIP to handle failover.

Yes, I can set the SATP claim rule, and I think it should be VMW_SATP_DEFAULT_AP; however, it does not work.
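In case it helps the discussion, the SATP and PSP that actually claimed the device can be checked like this (a sketch; the naa identifier is a placeholder for your LUN):

esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx

The output shows the Storage Array Type and Path Selection Policy currently in effect, which should confirm whether the claim rule was applied.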

Re: Using Windows Storage Server 2012 as iSCSI Target in Production Environment


Stick with a supported platform from the HCL; running production workloads on unsupported systems is asking for trouble.

New SSD Not Showing in vCenter or ESXi


Hello All,

 

I'm fairly new to VMware/ESXi.  I just added one SSD to each of my three hosts, but it is not showing anywhere in vCenter.  I rescanned all three hosts to discover new storage/VMFS volumes, and nothing appeared.  I also ran various commands (esxcli storage core path list, esxcli storage core device list, esxcli storage vmfs extent list, esxcli storage filesystem list), and can only see the two HDDs that already existed.

 

The drives we installed are Seagate ST4000DX001 (hybrid SSHDs).  In addition, we are using ESXi 6.0U2.  I appreciate any help on resolving this so that we can enable VSAN.  Thanks!
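For reference, a rescan and device check from the host's shell looks roughly like this (a sketch):

esxcli storage core adapter rescan --all
esxcli storage core device list | grep -i 'Display Name'

If the new drive does not appear in the device list at all, the storage controller may be hiding it until it is initialized (e.g. as a RAID volume) in the BIOS.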

 

-Octavia


Obscure behaviour of a snapshot - looking for an explanation


I am working on a recovery case that looks really, really strange.

 

When I was called everything looked like a normal case of the "my datastore suddenly lost all VMs" issue.
And indeed - looking at the datastore using ESXi itself - all that was visible were the 6 hidden .*.sf files and the one .sdd.sf directory.
I am used to that, so I followed my usual approach and looked at the datastore with vmfs-tools (using a good build, not the outdated version that comes with Ubuntu).
Same result - only the .sf metadata files were visible.
Next I made a VMFS header dump and went home for a more detailed analysis.
Inside the header dump I then found 3 vmx files and 6 vmx backups, 1 vmsd, 2 vmxf, 3 descriptors for base disks, 3 earlier versions of those descriptors, and one descriptor for a snapshot.
From those files I deduced that the content of the datastore must have looked like this:
a 2k3-VM with a 200gb disk + one snapshot
a 2k8 R2 VM with a 200gb vmdk
a 2k8 R2 VM with a 150gb vmdk
Next I searched the file descriptor section but only found the hidden .sf files.
At this stage a first solid prognosis of the chances for a successful recovery can be made.
In this case the result was:
3 VMs were on the datastore: all small descriptor files were found
2 thick flat files missing, one thin flat file missing and one delta of unknown size missing.
Without the entries in the file descriptor section, no easy automatic recovery is possible.
The customer then asked for the 2k3 VM with the thin basedisk and the snapshot.
I told him that I would try but that the result would be unpredictable.
After a search for the inodes I then found 4 large orphaned files with a high fragmentation rate.
And to my surprise I really could extract the 4 large files.
Even more surprising - the thin vmdk could be extracted with 120000 dd commands, the delta needed 71000 dd commands, and the two thick vmdks needed 51000 and 7100 dd commands.
I had not expected any good results for the thin vmdk - but when I tested it, it survived the first boot and did not even require a checkdisk. The snapshot was accepted as valid when I attached it to the thin vmdk. The two thick vmdks also looked good.
So until this point everything worked as expected - and actually much better than expected.
The really strange results that made me raise this question appeared when I started to check the thin flat with the snapshot.
- the basedisk alone looked absolutely healthy - a typical 2k3 server was bootable without errors - the state was from 2010 ! - so the snapshot must have been in use for 6 years.
- after I attached the snapshot, the 2k3 server was no longer bootable, and after checking with a LiveCD I found that the NTFS partition was completely damaged - the MFT was completely missing.
This result was surprisingly bad - so I hexdumped the delta and found that it included an MFT.
Surprised by this result I tried the same snapshot attached to a new, fake basedisk that I created with
dd if=/dev/zero bs=1M count=1 of=fake-basedisk-flat.vmdk
dd if=/dev/zero bs=1M count=1 of=fake-basedisk-flat.vmdk seek=$((200*1024)) conv=notrunc
I then quick-formatted this fake basedisk with NTFS.
Basically, I tried to attach a delta that I considered damaged to a new basedisk with an empty but fresh NTFS filesystem.
I expected a similar result as with the original basedisk, but what I saw next completely took me by surprise.
Instead of a damaged NTFS partition, a complete 2k3 system appeared, and the files had timestamps ranging from 2010 up to June 2016.
A quick check of the health state of the filesystem came up with 180000 good files and 27000 good folders versus 5 bad files and 20 bad folders.
The damaged files were located in "program files\vmware\vmware tools\guest sdk\" and "Windows\Pchealth\helpctr\".
Another big surprise was the timestamp of the $MFT, which was from 2010!
The big questions that now come up are:
Why are the results of the 2 attempts to read one delta so very different:
total corruption when I use delta + original flat
surprisingly good when I use delta + new fake basedisk

 

Does it make sense to carve out a thin vmdk with 120000 dd commands that have neither any line referencing /dev/zero nor any gaps?
- do I have to assume that the assumption of having a thin vmdk is wrong?
- do I have to assume that my 120000 dd commands have errors?
- can the line ddb.thinProvisioned = "1" in a descriptor file be regarded as unreliable when checking whether the state of a vmdk is thick or thin?

 

Has anybody ever tried to manually extract data from a delta.vmdk with a series of dd-commands and a list of offsets ?
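(For context, each dd command in such a series looks roughly like this - a sketch with made-up numbers; the real skip/seek values come from the delta's grain directory and grain tables, counted in 512-byte sectors:

dd if=delta.vmdk of=rebuilt-flat.vmdk bs=512 skip=123456 seek=654321 count=128 conv=notrunc

skip is the grain's position inside the delta file, seek is the guest-disk LBA it maps to, and count=128 matches the default 64 KB grain size.)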

Does anybody know reliable checks to verify whether an unnamed delta.vmdk belongs to another unnamed flat.vmdk?

I am not sure if I managed to explain the case well enough - if you have any theory that could explain the strange behaviour, please let me know.

 

Thanks
Ulli

 

Maybe the inode details are useful?
This is the delta:

03809804-InodeIDdec=58759172
03809804-InodeIDhex=03809804
03809804-InodeID2=  15
03809804-InodeNLink=   1
03809804-InodeType=   3
03809804-InodeType=file
03809804-InodeFlags=   0
03809804-InodeSize=3490072576
03809804-InodeBlkSize=1048576
03809804-InodeBlkCount=199937
03809804-InodeMT="Wed Aug 10 17:29:41 CEST 2016"
03809804-InodeCT="Tue Jun 21 23:52:18 CEST 2016"
03809804-InodeAT="Wed Aug 10 17:29:41 CEST 2016"
03809804-InodeUid=00000000
03809804-InodeGid=00000180
03809804-InodeMode=00000003
03809804-InodeZLA=00000000
03809804-InodeTBZ=00000000
03809804-InodeCow=00030d01
03809804-InodeBlocks=00000000
03809804-InodeRDM_id=00000000
03809804-InodeContent=00000000

This is for the flat.vmdk:

04c09804-InodeIDdec=79730692
04c09804-InodeIDhex=04c09804
04c09804-InodeID2=  20
04c09804-InodeNLink=   1
04c09804-InodeType=   3
04c09804-InodeType=file
04c09804-InodeFlags=   0
04c09804-InodeSize=   0
04c09804-InodeBlkSize=1048576
04c09804-InodeBlkCount=204800
04c09804-InodeMT="Wed Jun 22 07:43:03 CEST 2016"
04c09804-InodeCT="Wed Jun 22 07:43:03 CEST 2016"
04c09804-InodeAT="Wed Jun 22 08:39:42 CEST 2016"
04c09804-InodeUid=00000000
04c09804-InodeGid=00000180
04c09804-InodeMode=00000003
04c09804-InodeZLA=00000000
04c09804-InodeTBZ=00000000
04c09804-InodeCow=00032000
04c09804-InodeBlocks=00000000
04c09804-InodeRDM_id=00000000
04c09804-InodeContent=00000000

Re: New SSD Not Showing in vCenter or ESXi


As you have not yet created a VMFS volume on the new drive, it will not automatically appear as a datastore.  Can you try adding new storage and following the wizard?
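If you'd rather do it from the command line, the rough equivalent looks like this (a sketch; the naa identifier, end sector, and datastore name are placeholders, and the long GUID is the standard VMFS partition type):

partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxx gpt "1 2048 <endSector> AA31E02A400F11DB9590000C2911D1B8 0"
vmkfstools -C vmfs5 -S NewDatastore /vmfs/devices/disks/naa.xxxxxxxx:1

partedUtil getUsableSectors on the same device gives the value to use for <endSector>.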

Fresh install of ESXi on VCenter managed host with existing VMFS datastore via iSCSI


I want to do a fresh install of ESXi on a host which is already part of an existing cluster in vSphere.  This host is connected to a VNXe 3XXX via iSCSI/Unisphere and has access to several datastores through vCenter; the host access has been configured using the VMware wizard in Unisphere.  When I start the installation, I get 3 choices:

  1. Force Migrate ESXi, preserve VMFS datastore
  2. Install ESXi & preserve the VMFS datastore
  3. Install ESXi & overwrite the VMFS datastore

What I'm not sure about is: if I choose option 3 to overwrite the VMFS datastore, is that only going to overwrite the data and IQNs assigned to this particular host?  I want to make sure I don't overwrite anything except what relates to this particular host.  Would it be better for me to disconnect the host from the SAN before I do the installation?  But then how do I make sure I overwrite any files associated with this host that may be on the SAN?  Would I first "remove" the host from vSphere and then go into Unisphere and remove the host so that there is no longer any host access?

 

[Screenshot attached: install choices screen.PNG]

 

Thanks for any input.

 


Re: Fresh install of ESXi on VCenter managed host with existing VMFS datastore via iSCSI


My recommendation: if you're not sure, disconnect the host from the SAN before starting the upgrade.

 

Here are some steps to prepare for the upgrade from the following KB article: Best practices to install or upgrade to VMware ESXi 6.0 (2109712) | VMware KB

 

To prepare your system for the upgrade:

  1. Check if the version of ESXi or ESX you are currently running is supported for migration or upgrade. For more information, see the Supported Upgrades to ESXi 6.0 section in the vSphere Upgrade Guide.
  2. Check the VMware Compatibility Guide to ensure that your host hardware is tested and certified as compatible with the new version of ESXi. Check for system compatibility, I/O compatibility (network and HBA cards), and storage compatibility.

    Note: It is not recommended to upgrade a host with hardware that is not certified for use with ESXi 6.0. If your host model is not on the VMware Compatibility Guide, VMware recommends you contact your hardware vendor and check if they plan to support your hardware devices on ESXi 6.0.

  3. Ensure that sufficient disk space is available on the host for the upgrade or migration. VMware recommends a minimum of 50 MB free disk space on the installation disk of the host you are upgrading.
  4. If you use remote management software to interact with your hosts, ensure that the software is supported and the firmware version is sufficient. For more information, see the Supported Remote Management Server Models and Firmware Versions section in the vSphere Upgrade Guide.
  5. If a Fibre Channel SAN is connected to the host, detach the fibre connections before continuing with the upgrade or migration. Do not disable HBA cards in the BIOS.
  6. Ensure you have sufficient access to VMware product licenses to assign a vSphere 6.0 license to the hosts post upgrade. After the upgrade, you can use evaluation mode for 60 days. For more information, see the Applying Licenses After Upgrading to ESXi 6.0 section in the vSphere Upgrade Guide.
  7. Back up the host before performing an upgrade. If the upgrade fails, you can restore the host.

 

The KB talks about a Fibre Channel SAN, but the same applies to an iSCSI SAN.
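As for step 7, one way to back up the host configuration before the upgrade is the on-host command (a sketch; the command prints a URL from which the resulting configBundle.tgz can be downloaded):

vim-cmd hostsvc/firmware/backup_config

The saved bundle can later be restored with vim-cmd hostsvc/firmware/restore_config while the host is in maintenance mode.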

Re: New SSD Not Showing in vCenter or ESXi


Hi Vfk,

 

Thanks for your reply.  I ended up having to go into the BIOS and initialize the drive as RAID 0.  All is well now.
