Channel: VMware Communities: Message List - vSphere™ Storage
Viewing all 5826 articles

vmkfstools usage help for white space reclaiming of NetApp storage


Hi all,

 

I hope you can help with a couple of questions below. We want to reclaim some white space from our filers, but I've received conflicting bits of information from other sources, so I'd like some clarification:

 

1 - Everything I read says the tool can release xx% of your free space. If I think I can reclaim 25% of my total storage of 4 TB, does running the command with -y 25 reclaim 1 TB, or will it reclaim 256 GB (i.e. 25% of the 1 TB of free space)? Can this reclamation process be repeated until I get close (not all the way, I understand) to reclaiming all of the white space?

2 - I have taken on board the warnings against trying to reclaim 100% of free space, because you can fill up your storage during the process. Can I tell vmkfstools to use an external HDD/storage area to run the reclamation, e.g. an old server with several large HDDs all formatted to the required protocol?

As for everything else recommended or advised, we are ready to go. We're just all a bit unsure of how often this can be done, how close to the full size we can go, and, very importantly, the last point above: can we use an external source to do the work of the white-space reclamation, i.e. like a page-file swap system?
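To illustrate the arithmetic in question 1: as I understand the tool, `vmkfstools -y N` works against the datastore's free space, not its total capacity. A quick sketch with made-up numbers (4 TB total, 1 TB free):

```shell
# Hypothetical figures: a 4 TB datastore with 1 TB of free space.
TOTAL_GB=4096
FREE_GB=1024
PCT=25                               # the N you would pass to `vmkfstools -y N`
# The tool reclaims a percentage of FREE space, not of total capacity:
RECLAIM_GB=$((FREE_GB * PCT / 100))
echo "$RECLAIM_GB GB reclaimed"      # 256 GB, not 1024 GB
```

So with 1 TB free, `-y 25` would target roughly 256 GB, and the process can be repeated; each pass simply works against whatever free space remains at that point.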

 

Many thanks for any help you can provide,


Daiman


Too Many VMDK files


Hello All,

 

I have a server with an odd issue that I need some assistance with. This is a security camera server that keeps locking up due to a hard drive issue. We have a data drive assigned as the F: drive, and from time to time the server gets to a point where it locks up. While troubleshooting, I found that the F: drive could not be accessed. The quick fix was to remove that drive and add another one; I placed the new drive in another datastore. The problem now is that there is a flat file and four or five delta files for a disk that no longer exists. There is also a VMDK file with an odd name, ...camera-ctk.vmdk, and there is no hard drive with that name. Can these be removed with no issues, or is there a specific way to go about getting rid of them?

Re: Too Many VMDK files


If the flat and delta files are no longer needed, you can certainly remove them. However, if you are not 100% sure, it might be a good idea to move the files to a new/temporary directory to see whether everything continues to work as expected, and then - at a later time - delete them. The -ctk.vmdk file is a file that's used for Changed Block Tracking (CBT). One of these files exists for each virtual disk where CBT is enabled. You can delete this file if the associated virtual disk doesn't exist anymore.
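The "park before delete" idea above can be sketched as follows; the directory and file names are placeholders, and a scratch directory stands in for `/vmfs/volumes/<datastore>` so the commands are safe to run anywhere:

```shell
# Stand-in for the datastore directory; on a host you would work under
# /vmfs/volumes/<datastore>/<vm-folder> instead.
DS=$(mktemp -d)
touch "$DS/camera-ctk.vmdk"          # stand-in for the leftover CBT file

# Park the suspect files in a temporary directory instead of deleting them:
mkdir "$DS/orphaned"
mv "$DS/camera-ctk.vmdk" "$DS/orphaned/"

# Run the VM for a while; if everything still works, remove the parked files:
# rm -r "$DS/orphaned"
ls "$DS/orphaned"
```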

 

André

SAN Fabric for VMware


I'm on a contract now, and the group I'm working for is looking at a new SAN fabric design. There are many considerations for this proposed design that have left me with some unanswered questions, so I could use some expert knowledge.

 

There are many hops between the vSphere hosts, which are HP blades, and the storage. The data will leave the blade and travel through the following path:

 

1) FCP out of the chassis via a Brocade SAN module

2) into a patch panel at the top of the rack

3) out of a patch panel in another rack to a Cisco 5k

4) FCP becomes FCoE out of this 5k and travels into another patch panel

5) FCoE out of the patch panel into a Cisco 7k and then into a patch panel

6) FCoE out of the patch panel into another 5k, where it reverts back to FCP and enters a patch panel

7) FCP out of the patch panel into the storage array

 

I am concerned with the number of hops, even if some of them are just patch panels without live electronics.

I am also concerned with the FCP-to-FCoE conversion and back to FCP.

 

All this movement for a VM to see its storage....

 

Is this design going to kill the performance of the new storage? I don't need recommendations on a new design; I need to understand whether the number of hops and the FCP-to-FCoE conversion is a bad move, and why.

Re: SAN Fabric for VMware


There will be a slight lag (in microseconds), since the number of hops is higher in this design.

 

Can't we eliminate these two hops, with a 5k shared between the storage patch panel and the server patch panels?

 

5) FCoE out of patch panel into cisco 7k and then into patch panel

6) FCoE out of patch panel into another 5k where it then reverts back to FCP and enters patch panel

Re: SAN Fabric for VMware


Yes, we can do whatever we need to do. Good to know that patch-panel hops are negligible. The multiple hops are a result of the layout in the DC; I have already brought this up, and as of about an hour ago there is a plan in the works to run some fiber as needed.

 

Still wondering about the FC-to-FCoE conversion and back again, as well as the fact that we have Brocade SAN modules that will interface with Cisco 5ks. I'm sure it will work, but I'm wondering about potential complications.

ESXi 5.5 + Dell Equallogic low read IOPS


I have performance issues on my infrastructure. I have a Dell EqualLogic 6100 array connected via SAN (1 Gb/s) to my ESXi server. The EQL is filled with SATA 7200 rpm HDDs.

I created a new LUN with one test machine. When I ran an IOPS benchmark on this machine, I got 300 IOPS for reads and 1400 IOPS for writes. These values are stable across every test.

When I ran the same test on a physical server connected to the same EQL, I got 750 IOPS for reads and 1600 IOPS for writes.

I ran these tests while the array was otherwise idle (20-30 IOPS across the whole array).

 

I tested with a 512-byte block size. If I switch the benchmark to 4K, I get only 200 IOPS for reads.

 

I tried fixed-path MPIO, Round Robin (with the limit set to 1 IOPS instead of the default 1000), and thin and thick VMDK drives, but everything stayed the same. I also checked the switch port configuration, which was set up according to best practice.

Latency from ESXi to EQL is very low.

When I looked at SAN HQ, I saw the same numbers as in the benchmark test.

 

My question is: why does the virtual machine have such low read IOPS performance?

Re: ESXi 5.5 + Dell Equallogic low read IOPS


Hi,

 

did you try installing the Multipathing Extension Module (MEM) from Dell?

 

It's available at eqlsupport.dell.com under Downloads -> VMware Integration.

 

It has to be installed on the ESXi hosts and will give you a new path selection policy, which increases performance quite a bit compared to the default PSPs (at least in EqualLogic groups with multiple members).
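A sketch of the install-and-verify steps; the bundle filename is hypothetical (use whatever you downloaded from eqlsupport.dell.com), and the commands are only printed here since they need a live ESXi host:

```shell
# Hypothetical offline-bundle path; substitute the file from Dell's download page.
BUNDLE="/tmp/dell-eql-mem-bundle.zip"
# Step 1: install the bundle on the ESXi host (a reboot is usually required):
INSTALL_CMD="esxcli software vib install -d $BUNDLE"
# Step 2: afterwards, confirm the new path selection policy shows up:
VERIFY_CMD="esxcli storage nmp psp list"
printf '%s\n%s\n' "$INSTALL_CMD" "$VERIFY_CMD"
```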

 

 

Tim


Re: ESXi 5.5 + Dell Equallogic low read IOPS


I haven't tried installing the MEM extension yet. I expected the highest throughput with the VMware Round Robin path after changing the default settings. The physical Windows server (2012 R2) uses a fixed path.

The difference between the physical and virtual machine is very big. I found some tests comparing RR and MEM, but the difference there is not that big.

Runtime Name associated with vmk


Is there a way to tell whether a runtime name such as vmhba33:C0:T1:L3 is assigned to a particular kernel port? We are connecting to a NetApp VIF IP address, and I have 4 VMkernel ports, so I am seeing 4 paths. I want to disable a few paths, but I don't know how to tell which runtime name is associated with which kernel port. Does a reboot or a rescan possibly change which path is assigned which channel number?
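One way to at least enumerate the runtime names per device is to filter `esxcli storage core path list`; the fragment below uses a hypothetical sample of that output so the filter can be tried anywhere. (For iSCSI port binding, `esxcli iscsi session connection list` is the command that should show which vmknic each session actually uses.)

```shell
# Hypothetical fragment of `esxcli storage core path list` output; on a real
# host, pipe the live command through the same awk filter instead.
PATHS=$(mktemp)
cat > "$PATHS" <<'EOF'
   Runtime Name: vmhba33:C0:T1:L3
   Device: naa.60a98000aabbccdd
   Adapter: vmhba33
   Runtime Name: vmhba33:C1:T1:L3
   Device: naa.60a98000aabbccdd
   Adapter: vmhba33
EOF
# Extract just the runtime names (the channel number C0/C1/... differs per path):
awk -F': ' '/Runtime Name/ {print $2}' "$PATHS"
```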

verifying q depth of HBA


Following the instructions in KB 1267 (VMware KB: Changing the queue depth for QLogic, Emulex, and Brocade HBAs), I used the listed command strings to change the LUN queue depth of the HBA, along with the string to verify the changes.

 

When attempting to verify the recent changes to the queue depth using the command, I am getting nothing but blank values in the output.

 

Here is a cut-and-paste from the KB:

 

Run this command to confirm that your changes have been applied:

 

# esxcli system module parameters list -m driver

 

Where driver is your QLogic, Emulex, or Brocade adapter driver module, such as lpfc820, qla2xxx, or bfa.

 

The output appears similar to:

 

Name                        Type  Value  Description
--------------------------  ----  -----  --------------------------------------------------
.....
ql2xmaxqdepth               int   64     Maximum queue depth to report for target devices.

 

HOWEVER - I'm not getting values returned. The example in the KB shows 64, but when I run the command on my hosts the value is blank.

 

Is this a new bug? Does anyone know an alternate way to retrieve the values?

Re: verifying q depth of HBA


 

Most likely the reason you don't see any output is that you are using inbox drivers.

 

Run this command and check the value in "Device queue depth = 0x40", which is a hex value:

 

/usr/lib/vmware/vmkmgmt_keyval/vmkmgmt_keyval -a
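Since the value printed by the command above is in hex, it helps to convert it before comparing with what you configured; e.g. the 0x40 mentioned above is 64 decimal:

```shell
# Shell arithmetic accepts C-style hex constants, so conversion is a one-liner:
DEPTH_HEX=0x40                 # value as reported, e.g. "Device queue depth = 0x40"
DEPTH_DEC=$((DEPTH_HEX))
echo "$DEPTH_DEC"              # 64
```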

 

VMFS-3.46 file system

Re: Runtime Name associated with vmk


Are you doing some sort of network switch maintenance, or reconfiguring network ports on the storage array? We need more information on where exactly the outage is going to be in order to give sensible advice.

Re: Runtime Name associated with vmk


Run esxcfg-mpath -b - this will show the active/dead paths to the storage.


Re: VMFS-3.46 file system


Yes, VMFS 3.46 does support the VAAI primitives.

 

My upgraded VMFS-5 does not have a 1MB block size. Why?


Upgraded VMFS-5 partitions will retain the partition characteristics of the original VMFS-3 datastore, including file block-size, sub-block size of 64K, etc. To take full advantage of all the benefits of VMFS-5, migrate the virtual machines to another datastore(s), delete the existing datastore, and re-create it using VMFS-5.

Note: Increasing the size of an upgraded VMFS datastore beyond 2TB changes the partition type from MBR to GPT. However, all other features/characteristics continue to remain same.

In short: if upgrading is your only option, you won't get all the advantages of the new filesystem.

VMware KB: Frequently Asked Questions for vStorage APIs for Array Integration

Re: VMFS-3.46 file system


Thank you Narendra.

Do you know from which version onwards VAAI has been supported?

Also, if you look at this KB (VMware KB: Frequently Asked Questions for vStorage APIs for Array Integration), the upgraded VMFS-5 table says it will support ATS but fall back to SCSI-2.

I am confused by this statement. Does it mean ATS will be completely unavailable for an upgraded file system, and that it goes back to the SCSI-2 reserve and release commands (a.k.a. SCSI-2 reservations) that were used for reserving and releasing volumes?

Re: VMFS-3.46 file system


VAAI support started with vSphere 4.1, and VMFS 3.46 was the first version to support it.

 

1.    Acquire on-disk locks

2.    Upgrade an optimistic lock to an exclusive/physical lock.


In vSphere 4.0, VMFS-3 used SCSI reservations for establishing this critical section as there was no VAAI support in that release. In vSphere 4.1, on a VAAI-enabled array, VMFS-3 used ATS only for operations (1) and (2) above, and ONLY when disk lock acquisitions were un-contended. VMFS-3 fell back to using SCSI reservations if there was a mid-air collision when acquiring an on-disk lock using ATS.

Above quote has been taken from VMFS Locking Uncovered - VMware vSphere Blog - VMware Blogs

 

Here, the meaning of "fall back to SCSI reservations" is that on an upgraded VMFS-5 datastore, the system will try to make use of ATS, but if contention is high, ATS might be unsuccessful and the system will fall back to SCSI reservations. The same is true for VMFS 3.46 or higher. However, if the datastore is a freshly created VMFS-5, then only ATS is used with VAAI-aware array systems, no matter how high the contention.
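To check what a given volume is actually doing, `vmkfstools -Ph -v1 <volume>` reports the VMFS version and, on freshly created VMFS-5 volumes backed by VAAI arrays, an ATS-only mode flag. The path below is a placeholder, and the command is only printed here since it needs a live host:

```shell
# Hypothetical datastore path; substitute your own volume.
DS_PATH="/vmfs/volumes/datastore1"
CHECK_CMD="vmkfstools -Ph -v1 $DS_PATH"
# On an ATS-only volume the output includes a line such as "Mode: public ATS-only".
echo "$CHECK_CMD"
```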

Re: VMFS-3.46 file system


Thank you Narendra. Very useful info.

Re: DataStore 2 VVOL Migration Problem


We recently had the same problem with around 40 VMs (in an environment of ~190 VMs) as we transitioned to VVOLs.

 

A lot of them were P2V'd or came from Xen systems years ago.

 

I discovered that the hard disks assigned to the VMs were often not aligned to GB boundaries.

 

For example, one system I migrated today (which had previously failed with the error you are reporting) had a hard disk size of 18.1378173828125 GB.

 

I modified it to 20 GB, then repeated the migration and it worked... I have since repeated this procedure for a dozen more VMs.

 

For the record, we are using an EqualLogic iSCSI 10GbE SAN.
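A quick way to see why 18.1378173828125 GB fails a GB-boundary check while 20 GB passes; this treats the UI's "GB" as GiB, and the byte counts below follow from that assumption:

```shell
GIB=$((1024 * 1024 * 1024))
ODD_BYTES=19475333120          # 18.1378173828125 GiB, as in the example above
ROUND_BYTES=$((20 * GIB))      # the size after resizing to 20 GiB
echo $((ODD_BYTES % GIB))      # non-zero remainder: not GB-aligned
echo $((ROUND_BYTES % GIB))    # 0: aligned, so the migration can proceed
```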


