Hi continuum,
I have attached the compressed file. Please have a look and let me know if you come up with anything. And thanks for your suggestion, really appreciate that.
Hi all, I keep unchecking 'Keep VMDKs together' on my main cluster, but it randomly turns itself back on, and then I get space alerts everywhere when it puts them all together again.
It looks like a known issue from the below, but the fix doesn't work for me (set in the web client instead of local). Has anyone else experienced this and know of a fix?
I created some custom anti-affinity rules and it even deleted those without a trace left..
vCenter Server 6.0.0 Enterprise, build 3339083
ESXi 6.0.0, build 3620759
Hi
sorry, I don't have much time at the moment - I suggest that you call me via Skype in a few hours.
I think I found
Ergo VM Server-flat.vmdk
Server 2008 R2 x64bit-flat.vmdk
Server 2008 R2 x64bit_1-flat.vmdk
with descriptor files.
The Ergo VM seems to have a snapshot which may be recoverable.
All 3 vmdks are highly fragmented: 120,000 fragments, 80,000 fragments and 55,000 fragments, so carving them out is not trivial - but possible.
Two 200 GB vmdks and one of 150 GB.
There may be one more vmdk of about 200 GB, but that's inconclusive - ask me later.
Ulli
To answer my own question:
After some testing I can confirm that this is possible with some workarounds and limitations.
To clarify: Subject was VSS/VDS for HPE MSA 1040/2040.
Hello,
We have an HA storage array (two nodes, multiple LUNs exported over iSCSI as datastores to a 3-node ESXi 6.0.0 cluster) which was detected by the ESXi SATP as VMW_SATP_DEFAULT_AA, but it is actually not an Active/Active array; its failover is based on a floating VIP.
However, when one of the storage nodes triggers a failover, the remaining ESXi hosts become non-responsive; vmkernel.log shows reservation conflicts, and the ESXi hosts stop responding after a while:
1. How do we change the SATP configuration, or does this have to be implemented in the storage software? (See the sketch after this list.)
2. We found from the storage side that when one storage node is powered off and the LUN fails over to the second node, the RESERVE/RELEASE commands do not get matched.
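If the array really should not be claimed as Active/Active, one way to override the default is an NMP SATP claim rule from the ESXi shell. This is only a rough sketch: the device ID, vendor and model strings below are placeholders (take the real values from the device list output), and whether VMW_SATP_ALUA is the right SATP depends on what your array actually reports.

# Show which SATP currently claims the LUN (device ID is a placeholder)
esxcli storage nmp device list -d naa.600c0ff00000000000000000

# Add a claim rule so LUNs with this vendor/model get a different SATP
# (vendor/model are placeholders - copy them from the device list output)
esxcli storage nmp satp rule add --satp=VMW_SATP_ALUA --vendor="VENDOR" --model="MODEL" --description="Claim HA array LUNs with ALUA SATP"

# The rule is applied when the devices are reclaimed (a reboot is the simple way)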
virtualserver4, may I know how you solved it in the end?
Hi,
we are now looking into the security implications of deleting a datastore: is the information still left intact, or is everything deleted? After a datastore is deleted, do all the bits get converted to 0s, or is the data left as it was before?
If anyone knows what happens with the data and could help, that would be very helpful.
Thanks,
Laurynas
ESXi is quite careless about this: a fresh VMFS format only destroys the basic structures in the metadata area. It does not care about any stale data that still populates the area used for vmdks.
With a root shell, an attacker can actively import that left-over garbage into active VMs.
Any self-respecting administrator will make sure to completely wipe the datastore before reformatting it. The side effect of this is very welcome if you ever have to carve out an important vmdk with dd.
The time you save during creation of new lazy-zeroed vmdks is nothing once you add the extra pain of the additional checkdisk you have to run while recovering such a vmdk. So if security is a concern, wipe every new datastore with zeros.
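Two possible ways to do such a wipe, sketched with placeholder names and sizes - adjust them to your environment and double-check the device name before running anything destructive:

# Option 1: from the ESXi shell, zero the free space of a (still mounted) datastore
# by creating an eagerzeroedthick vmdk - eager zeroing writes zeros to every block.
# The size is a placeholder; pick something close to the free space, then delete it.
vmkfstools -c 500G -d eagerzeroedthick /vmfs/volumes/old-datastore/zerofill.vmdk
rm /vmfs/volumes/old-datastore/zerofill*.vmdk

# Option 2: from a Linux live system, zero the complete backing device before
# you reformat it (sdX is a placeholder - verify it, this destroys everything).
dd if=/dev/zero of=/dev/sdX bs=1M status=progress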
The original post is from 2014 - at that time it was not a popular problem.
Today it is different and happens way more often.
I don't know how the OP worked around it ... I nowadays have 3 quite obscure procedures that make sense.
The nasty thing with this particular problem: typically everybody considers the issue to be something hardware-related.
So VMware support usually gets away with showing you some vmkernel logs. The support case you may have opened gets relabelled as a recovery case, which by definition VMware support can walk away from, and once the problem "flat.vmdks become unreadable with I/O errors" can be blamed on an undefined hardware issue, the chance that the end user gets help with the recovery goes down the drain.
My personal statistic for what I call the ESXi diva-mode problem is not 95% hardware / 5% ESXi-VMFS problem, but more like 60% ESXi-VMFS issue and only 40% hardware-related.
Anyway - all cases that I managed to resolve in the last few weeks were done by using a Linux-LiveCD-VM and carefully avoiding everything that could make ESXi think that a normal host from the cluster tries to read the files with I/O errors.
When possible, read the files with the I/O errors not from /vmfs/volumes but rather from Linux, in read-only mode, via /mnt/esxi/dev/disks/<device>.
Instead of copying a flat.vmdk in one go, I recommend copying a set of pieces that you specify as a long list of dd commands.
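A rough sketch of what such a piece-wise copy can look like - the source path, target path, piece size and piece count are all placeholders, and conv=noerror,sync keeps the offsets aligned by padding unreadable blocks with zeros:

# Copy the source in 1 GiB pieces so a read error only costs one piece,
# not the whole transfer. All names and sizes are placeholders.
SRC=/mnt/esxi/dev/disks/naa.xxxxxxxx      # source device as seen from the Linux VM
DST=/recovery/server-flat.vmdk            # target file on healthy storage
BS=1M
PIECE=1024                                # 1024 x 1M = 1 GiB per piece

for i in $(seq 0 199); do                 # 200 pieces = 200 GiB, adjust to the vmdk size
  dd if="$SRC" of="$DST" bs=$BS count=$PIECE \
     skip=$((i*PIECE)) seek=$((i*PIECE)) conv=noerror,sync,notrunc
done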
When that also fails use a vmfs-fuse build - but not the outdated one from Ubuntu and other repositories.
Sorry - I see that this does not really help you - but at the moment I am not aware of a straightforward procedure that works in the majority of cases.
When I find one I will write instructions ....
Questions - see my signature
Ulli
I'm trying to get some advice on how to best set up an NFS server to use with ESXi as a datastore. I took a stab at it with CentOS 7, but the performance is abysmal. I'm hoping someone can point out some optimization that I've overlooked, but I'm open to trying another free OS as well.
I have an old Dell PowerEdge T310 with a SAS 6i/r hard drive controller. I have two 2 TB hard drives and two 1 TB hard drives. Due to the limitations of the SAS 6i/r controller, I have left the drives independent and went with software RAID 1 + LVM to get 3 TB of usable space like this:
# mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdd /dev/sde
# mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdf /dev/sdg
# vgcreate vg0 /dev/md0 /dev/md1
# lvcreate -l 100%VG -n lv0 vg0
Then I formatted the new LVM partition with XFS:
# mkfs.xfs /dev/vg0/lv0
I mounted this at /var/nfs and exported it with the following options:
# cat /etc/exports
/var/nfs 192.168.10.3(rw,no_root_squash,sync)
I was able to add this to my ESXi host using the vSphere Client as a new datastore called nfs01.
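For reference, the same mount can also be done from the ESXi shell; the host, share and label below are simply the ones from this thread:

# Mount the NFS export as a datastore (equivalent to adding it in the vSphere Client)
esxcli storage nfs add --host=192.168.10.20 --share=/var/nfs --volume-name=nfs01
# Verify it shows up
esxcli storage nfs list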
I then edited my VM through the vCenter web interface, adding a new 2.73 TB disk.
The guest OS is Windows Server 2012. Through the Disk Management interface, I initialized the disk as GPT and created a new volume. This took several minutes. Then I tried quick-formatting the volume with NTFS; I cancelled this after about 4 hours. I then shrunk the volume to 100 MB and formatted that instead. That succeeded after several minutes, but just creating a blank text document on this drive takes about 8 seconds.
The NFS server is plugged into the same gigabit switch as the ESXi server. Here are the ping times:
~ # vmkping nfs.qc.local
PING nfs.qc.local (192.168.10.20): 56 data bytes
64 bytes from 192.168.10.20: icmp_seq=0 ttl=64 time=0.269 ms
64 bytes from 192.168.10.20: icmp_seq=1 ttl=64 time=0.407 ms
64 bytes from 192.168.10.20: icmp_seq=2 ttl=64 time=0.347 ms
I ran an I/O benchmark tool and got these results: (screenshot on Imgur)
At the same time, vCenter showed this performance data for the datastore: (screenshot on Imgur)
I noticed that some I/O operations done locally on the NFS server are also slow. For example I can run "touch x" and it completes instantly, but if I run "echo 'Hello World' > x" it can take anywhere from 0 to 8 seconds to complete.
This is my first attempt at using NFS (my two ESXi hosts use local storage) so I'm not sure if any of this is normal.
Hi,
I have attached data dump of another LUN as per our discussion. Please have a look.
Thanks
I figured out what was causing my issue: I didn't initialize the software RAID with the --assume-clean option, so my arrays were resyncing the whole time.
My new virtual disk is now performing as expected, although I'd still be interested in hearing people's opinions on optimizing the setup.
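For anyone hitting the same symptom, a minimal sketch using the device names from my earlier commands (whether --assume-clean is safe depends on the disks really being blank):

# First check whether the arrays are still resyncing - the initial resync
# competes with normal I/O and explains the terrible write latency
cat /proc/mdstat

# When creating a mirror on blank disks, skip the initial resync
mdadm --create /dev/md0 --run --level=1 --raid-devices=2 --assume-clean /dev/sdd /dev/sde
mdadm --create /dev/md1 --run --level=1 --raid-devices=2 --assume-clean /dev/sdf /dev/sdg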
I may be able to extract CN-MEL-BB-WinShare_1-flat.vmdk (4 TB)
and CN-MEL-BB-WinShare-flat.vmdk
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=973d99a0
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"
# Extent description
RW 31457280 VMFS "CN-MEL-BB-WinShare-flat.vmdk"
# The Disk Data Base
#DDB
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "1958"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "f12aad0d484b07c7e403bac2973d99a0"
ddb.toolsVersion = "9344"
ddb.uuid = "60 00 C2 9f 15 4f ed 6b-22 a5 df bc 04 fc b9 4d"
ddb.virtualHWVersion = "8"
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=232ce3d9
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"
# Extent description
RW 8388608000 VMFS "CN-MEL-BB-WinShare_1-flat.vmdk"
# The Disk Data Base
#DDB
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "522166"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "7ff6ee3f0705632c582244a4232ce3d9"
ddb.toolsVersion = "9344"
ddb.uuid = "60 00 C2 95 b9 57 74 c6-77 ad e6 c6 b5 70 70 83"
ddb.virtualHWVersion = "8"
rcporto, Thanks for the article.
So if I'd like to set up NIC teaming for the Ethernet connection, which gives better throughput: NFS or iSCSI?
I've used the following loads of times and it works
vmkfstools -i /vmfs/volumes/datastore/RDSSVR/RDSSVR.vmdk -d thin /vmfs/volumes/RD1000/RDSSVR/RDSSVR.vmdk
That works on SAN or local disk.
Have you checked in the datastore browser to see how much disk space the new vmdk takes?
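From the ESXi shell you can compare provisioned versus actually consumed space; the paths below just reuse the ones from the vmkfstools example above:

# Provisioned size of the clone (what the guest sees)
ls -lh /vmfs/volumes/RD1000/RDSSVR/RDSSVR-flat.vmdk
# Blocks actually allocated on the datastore (thin-provisioned usage)
du -h /vmfs/volumes/RD1000/RDSSVR/RDSSVR-flat.vmdk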
Hi AlbertWT,
Create LUNs with RAID 10, map them to targets on the storage side, and access them using iSCSI. It will give you excellent throughput.
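If you go the iSCSI route, a rough sketch of the host-side setup from the ESXi shell - the vmhba name, vmkernel port and portal address are placeholders, so check them against your own environment:

# Enable the software iSCSI initiator (skip this if you use a hardware HBA)
esxcli iscsi software set --enabled=true

# Bind a dedicated vmkernel port to the iSCSI adapter (names are placeholders,
# see "esxcli iscsi adapter list" and your vmkernel port list)
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1

# Point the initiator at the array's portal (address is a placeholder)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.50

# Rescan so the new LUNs appear
esxcli storage core adapter rescan --adapter=vmhba64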
Hi,
Reboot the array and ESXi... I was having the same problem, but it was resolved by rebooting.
There is a way to recover the file as it was just before the command was executed.
Sorry - we do not have enough information to help you.
In your original post I only noticed that your vmdk filenames don't follow the regular conventions - this can be a serious problem, or just carelessness when you posted the problem.
Please provide more details.