VMware Communities: Message List - vSphere™ Storage

Re: RAID card for Mac Pro 5.1 internal hard drives.


hey - any update on this? Have you found one?


Maximum LUN size. Maximum VMFS volume size.


Hi everyone!

 

Do you have an example of a LUN bigger than 16TB connected to your vSphere? Maybe 32TB? Do you have a VMFS 5 or 6 volume bigger than 16TB?

 

According to the information I have found, the VMFS volume size limit is 62TB. Unfortunately, I cannot find confirmation that a storage system can present a LUN larger than 16TB to vSphere, which I would need in order to create such a large VMFS volume.

There is a known 16TB LUN size limitation from NetApp. But what about other vendors?

 

Thanks in advance.

Re: Maximum LUN size. Maximum VMFS volume size.


Unfortunately, I cannot find confirmation that a storage system can present a LUN larger than 16TB to vSphere ...

There are indeed still storage systems with a 16TB maximum LUN size. I assume that NetApp's 16TB maximum is due to a limitation of the file system on which they store the LUNs.

Anyway, storage vendors usually list the maximum LUN size for supported operating systems in their documentation, and even less expensive systems support LUN sizes of 256TB or more.
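If it helps, the size ESXi actually sees for a presented LUN, and the device backing a VMFS datastore, can be checked from the ESXi shell. A minimal sketch; the naa ID and the datastore name are placeholders:

   # Device details as seen by ESXi - the "Size:" line is reported in MB
   esxcli storage core device list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

   # Which device and partition back each VMFS datastore
   esxcli storage vmfs extent list

   # Capacity and free space of the mounted VMFS volume
   df -h /vmfs/volumes/BigDatastore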

 

André

HP P2000 G3 SAS Multipath Configuration with Vmware HOSTS


Hi, we are having a weird problem with our HP SAN setup and our VMware hosts. We have the SAN connected via two controllers (A and B) to each VMware host.

 

Controller A port 1 goes to vmware hostA

Controller B port 1 goes to vmware hostA

 

Controller A port 2 goes to vmware hostB

Controller B port 2 goes to vmware hostB

 

Controller A port 3 goes to vmware hostC

Controller B port 3 goes to vmware hostC

 

When we fail over one controller on the SAN (by rebooting the controller), the VMware host completely loses its connection to our storage instead of failing over to the path on the other controller, and we can only get the storage back by rebooting the VMware host, which causes downtime.

 

Our VMware configuration is shown in the attached screenshots.

 

thanks in advance and kind regards

 

Output of esxcfg-mpath --list:

 

~ # esxcfg-mpath --list

usb.vmhba32-usb.0:0-mpx.vmhba32:C0:T0:L0

   Runtime Name: vmhba32:C0:T0:L0

   Device: mpx.vmhba32:C0:T0:L0

   Device Display Name: Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)

   Adapter: vmhba32 Channel: 0 Target: 0 LUN: 0

   Adapter Identifier: usb.vmhba32

   Target Identifier: usb.0:0

   Plugin: NMP

   State: active

   Transport: usb

 

sata.vmhba0-sata.0:0-mpx.vmhba0:C0:T0:L0

   Runtime Name: vmhba0:C0:T0:L0

   Device: mpx.vmhba0:C0:T0:L0

   Device Display Name: Local hp CD-ROM (mpx.vmhba0:C0:T0:L0)

   Adapter: vmhba0 Channel: 0 Target: 0 LUN: 0

   Adapter Identifier: sata.vmhba0

   Target Identifier: sata.0:0

   Plugin: NMP

   State: active

   Transport: sata

 

sas.500605b0045cce00-sas.500c0ff135bcc000-naa.600c0ff00013989d0000000000000000

   Runtime Name: vmhba2:C0:T1:L0

   Device: naa.600c0ff00013989d0000000000000000

   Device Display Name: HP Serial Attached SCSI Enclosure Svc Dev (naa.600c0ff00013989d0000000000000000)

   Adapter: vmhba2 Channel: 0 Target: 1 LUN: 0

   Adapter Identifier: sas.500605b0045cce00

   Target Identifier: sas.500c0ff135bcc000

   Plugin: NMP

   State: active

   Transport: sas

   Adapter Transport Details: 500605b0045cce00

   Target Transport Details: 500c0ff135bcc000

 

sas.500605b0045cce00-sas.500c0ff135bcc000-naa.600c0ff00013989dc2164e4f01000000

   Runtime Name: vmhba2:C0:T1:L1

   Device: naa.600c0ff00013989dc2164e4f01000000

   Device Display Name: HP Serial Attached SCSI Disk (naa.600c0ff00013989dc2164e4f01000000)

   Adapter: vmhba2 Channel: 0 Target: 1 LUN: 1

   Adapter Identifier: sas.500605b0045cce00

   Target Identifier: sas.500c0ff135bcc000

   Plugin: NMP

   State: active

   Transport: sas

   Adapter Transport Details: 500605b0045cce00

   Target Transport Details: 500c0ff135bcc000

 

sas.500605b0045cce00-sas.500c0ff135bcc400-naa.600c0ff0001395430000000000000000

   Runtime Name: vmhba2:C0:T0:L0

   Device: naa.600c0ff0001395430000000000000000

   Device Display Name: HP Serial Attached SCSI Enclosure Svc Dev (naa.600c0ff0001395430000000000000000)

   Adapter: vmhba2 Channel: 0 Target: 0 LUN: 0

   Adapter Identifier: sas.500605b0045cce00

   Target Identifier: sas.500c0ff135bcc400

   Plugin: NMP

   State: active

   Transport: sas

   Adapter Transport Details: 500605b0045cce00

   Target Transport Details: 500c0ff135bcc400

 

sas.500605b0045cce00-sas.500c0ff135bcc400-naa.600c0ff00013989dc2164e4f01000000

   Runtime Name: vmhba2:C0:T0:L1

   Device: naa.600c0ff00013989dc2164e4f01000000

   Device Display Name: HP Serial Attached SCSI Disk (naa.600c0ff00013989dc2164e4f01000000)

   Adapter: vmhba2 Channel: 0 Target: 0 LUN: 1

   Adapter Identifier: sas.500605b0045cce00

   Target Identifier: sas.500c0ff135bcc400

   Plugin: NMP

   State: active

   Transport: sas

   Adapter Transport Details: 500605b0045cce00

   Target Transport Details: 500c0ff135bcc400

 

~ #
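Not a fix, but one thing worth checking from the ESXi shell is how the NMP has claimed the shared P2000 LUN - which SATP and path selection policy are in use, and whether both paths show up as active. A minimal sketch against the shared disk from the output above; the policy change at the end is only an illustration, not a recommendation (check HPE's documented settings for the P2000 G3 first):

   # SATP / PSP and the working paths for the shared P2000 disk
   esxcli storage nmp device list -d naa.600c0ff00013989dc2164e4f01000000

   # All paths ESXi sees for that device, including their states
   esxcli storage core path list -d naa.600c0ff00013989dc2164e4f01000000

   # Illustration only: switch the device to a different path selection policy
   esxcli storage nmp device set -d naa.600c0ff00013989dc2164e4f01000000 -P VMW_PSP_RR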

Re: HP P2000 G3 SAS Multipath Configuration with Vmware HOSTS


Moderator: Moved to vSphere Storage

Storage recommendations for small 3 host vcenter environment


Hi, I recently took over a VMware environment that was only half set up properly and need some direction.

 

There are 3 servers currently:

Host 1 - single-proc Supermicro, runs the VCSA (6.7)

Host 2 - dual-proc Supermicro, runs several VMs locally (6.5)

Host 3 - dual-proc Supermicro, runs several VMs locally (6.7)

 

Total storage is about 1TB across all of the VMs combined.

 

All of the running VMs are in production and need to be up 100% of the time (DCs, Puppet, dev, Veeam, etc.).

 

Budget = $15k give or take.

 

Options:

1 - Should I upgrade the local storage and RAM on each ESXi host and call it a day?

2 - Should I purchase shared DAS (iSCSI) storage? If so, which one? Vendor websites make it annoying to get to the meat and potatoes (prices) without going through the sales team and days of waiting.

3 - Since total storage use is low (don't ever see it going above 5 TB), should I opt for all-flash DAS?

4 - What do you guys recommend?

 

Ideally, I'd like to get to a point where all hosts are on the same ESXi version, vCenter can do vMotion and manage everything, and all VMs are running on fast, local flash storage, so that an ESXi host can fail and its VMs get migrated to a different host. (This will also help me use Update Manager and get things updated when needed.)

 

Thoughts?

 

I'm open to all good ideas.

 

Thanks.

Re: Storage recommendations for small 3 host vcenter environment


Some additional information.

 

We are running 18 VMs.

We have 10Gb networking in place.

The hosts each have dual-port 10Gb cards.

Re: Storage recommendations for small 3 host vcenter environment


What is the load on the servers?

Why are you looking to spend money on a working system? What are you trying to fix?


Re: Storage recommendations for small 3 host vcenter environment


Load on each system is roughly 10% CPU and 70% RAM. (64GB RAM each; we will be upgrading to 128GB each soon.)

 

The goal is to be able to survive a single host failure and have the essential infrastructure VMs continue to run.

 

A short time after I started, one of these VM hosts locked up / purple-screened, and that's when I started to dig into how it's set up. There is no real redundancy or failover right now. I'm just trying to get essential services to continue to run in case a host fails.

Re: [vCenter 6.7u3] Analysing SIOC Activity Events with Log Insight


Moderator: Moved to vRealize Log Insight

Re: Storage recommendations for small 3 host vcenter environment


Moderator: Moved to vSphere Storage

Re: Storage recommendations for small 3 host vcenter environment


Hello RadiatedBirds, my suggestion would be to purchase DAS storage. A friend of mine has a lab with the same setup, and he uses a Dell MD3200.

 

The MD3200 has dual controllers with 8 direct 6Gb SAS ports, so you can connect up to 4 hosts redundantly. Best of all, you get the massive I/O of direct-attached storage.

 

hope it helps

Re: [vCenter 6.7u3] Analysing SIOC Activity Events with Log Insight


Why was the thread moved? Storage I/O Control is a vSphere / vSphere Storage feature; how is this relevant to Log Insight?

[vCenter 6.7u3] Analysing SIOC Activity Events


Hello,

I want to look up when Storage I/O Control started to throttle IOPS on hosts in an ESXi cluster.

For that I analyse the storageRM log file, and I think I found the correct events; here is an example:

2020-01-08T12:14:24Z hostfqdn.local storageRM[2100418]: Throttling anomaly VOB for naa.id: 59, 0.203814

 

Can someone please tell me what the two values that appear right after the naa ID represent? At first I thought the first value is the configured maximum queue depth (DQLEN); however, it sometimes reaches values that are much lower (example: 3, 0.00112019) than what is shown in the performance metrics in the vSphere Client, and sometimes the value is much higher than the possible queue depth of 64 on the adapter (example: 168, 0.203217) - so maybe this is the execution throttle / queue depth? (see: http://qgt.qlogic.com/Hidden/support/Current%20Answer%20Attachments/VMware.pdf )

Does SIOC set a larger queue depth per LUN or host than the default maximum of 64?
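For comparison, the per-device limits that ESXi itself reports can be read from the shell. A minimal sketch; the naa ID is a placeholder:

   # Among other fields, this shows "Device Max Queue Depth" and
   # "No of outstanding IOs with competing worlds" for the device
   esxcli storage core device list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx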

 

edit 2:

I am trying to compare the following values:

- the ones from the storageRM log, like here: 2020-01-08T12:14:24Z hostfqdn.local storageRM[2100418]: Throttling anomaly VOB for naa.id: 59, 0.203814

- esxtop on the selected host hostfqdn.local > disk device view > the DQLEN value for the disk device

- vSphere Client > select the datastore > Performance > Hosts > Max Queue Depth per Host > the real-time value for host hostfqdn.local

The values do not match (one way to collect the first two side by side is sketched below).
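A minimal way to collect the first two sources at roughly the same moment, assuming the storageRM messages end up in /var/log/storagerm.log on the host (adjust the path if they only exist on the remote syslog / Log Insight side):

   # Recent SIOC throttling events on this host
   grep "Throttling anomaly" /var/log/storagerm.log | tail -n 20

   # One esxtop batch sample with all counters, to capture DQLEN for the
   # disk devices at the same point in time
   esxtop -b -a -n 1 > /tmp/esxtop-sample.csv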

 

edit: Changed the topic so that the thread is not moved again.

Re: Storage recommendations for small 3 host vcenter environment


You can also attach multiple MD12XX units to the MD32XX and expand storage as you need to.


Mirroring two PCIe NVMe Drives using ESXi 6.5 on DL380 G10


I have a new DL380 G10 with two PCIe NVMe drives.  I have set up a Windows 2016 VM as a test, given it equal storage from each NVMe and set the two disks to be Dynamic and then set up Disk Mirroring in Windows on all partitions on disks 0 and 1.  I see two boot options when I start the VM - 1. Windows Server 2016 and 2. Windows Server 2016 - secondary plex - so I think the Windows mirroring is successful.

 

My aspiration is that if one NVMe drive fails I'll be able to easily recover and run this VM and others to be set up from the remaining good drive.

 

I've found the VMware document "Set Up Dynamic Disk Mirroring" for SAN LUNs, a slightly different scenario than mine, and I see that it says to add a couple of advanced options pertaining to the SCSI controller: returnNoConnectDuringAPD and returnBusyOnNoConnectStatus.
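For reference, in that document the two options are added as per-SCSI-controller advanced parameters, so they end up in the .vmx roughly like this (scsi0 is just an example controller number; whether an equivalent setting exists or is honoured for an NVMe controller is exactly the open question here):

   scsi0.returnNoConnectDuringAPD = "TRUE"
   scsi0.returnBusyOnNoConnectStatus = "FALSE"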

 

In my case do I need to do this for the NVMe controller I added to the settings for this VM?

 

I also browsed the datastores NVMe1 and NVMe2 (the not-so-colorful names I have given the two NVMe drives). The ESXi metadata files - vswp, nvram, vmx, logs, etc. - only live in the folder for the VM on NVMe1. The VM's folder on NVMe2 only has the vmdk file.

 

If NVMe1 were to fail, can I recover with just the vmdk file on NVMe2? If not, which files other than the vmdk file are critical, and is there a way to keep them in sync between NVMe1 and NVMe2, or would the occasional static copy do?
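Not a definitive answer, but for the "occasional static copy" idea, one low-tech approach would be to copy the small ancillary files to the second datastore and, if the first datastore is ever lost, register the copied .vmx and re-point it at the surviving VMDK. A sketch only, using the datastore names from this post and a made-up VM folder name:

   # Copy the small metadata files from NVMe1 to the VM folder on NVMe2
   cp /vmfs/volumes/NVMe1/MyVM/MyVM.vmx   /vmfs/volumes/NVMe2/MyVM/
   cp /vmfs/volumes/NVMe1/MyVM/MyVM.nvram /vmfs/volumes/NVMe2/MyVM/

   # If NVMe1 is lost, register the copied .vmx on NVMe2, then edit the VM
   # so its disk points at the surviving VMDK
   vim-cmd solo/registervm /vmfs/volumes/NVMe2/MyVM/MyVM.vmx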

 

The team who will eventually use this server don't really care and say they can easily rebuild VMs if a drive fails and they might actually prefer to have the second NVMe drive available for more VMs.

 

But it offends my IT sensibilities not to try and set up some sort of RAID on this and be able to recover more quickly should a drive fail.

Re: Mirroring two PCIe NVMe Drives using ESXi 6.5 on DL380 G10


Moderator: Moved to vSphere Storage

Re: Mirroring two PCIe NVMe Drives using ESXi 6.5 on DL380 G10


I haven't received any replies, so I tried removing each of the drives one at a time to see if I could start my Windows VM. No luck. Whether I pulled NVMe0, which holds all the ancillary VM files on the host, or NVMe1, which holds only the mirrored VMDK file, I was out of luck.

 

I guess it's time to consult VMware Support.

Re: VMs Crashing on NFS Storage ... ESXi Kernel Log flooded with messages


IamTHEvilONE

Sorry to resurrect an old thread, but I am curious if you ever discovered the cause of this issue or found a resolution?  I am seeing the 'exact' same issue with NFS 4.1 datastores mounting from a Tegile.

Re: Snapshots on storage


Good afternoon!

I talked with a Dell specialist, and he said that they recommend taking snapshots on the SC3020, since snapshots can speed up storage performance.
1. I wanted to clarify: if I create snapshots for each LUN as described above and keep them for only 24 hours, will that be right? (2 LUNs are for storing backups, 2 LUNs for storing non-critical virtual machines.)
2. I have 4 LUNs on the Dell, all of them formatted with VMFS. At the moment 3 of the LUNs are in the red zone. Well, that's logical, as the virtual disks on those LUNs were created as Thick Provision Lazy Zeroed.
(Example: a 3TB LUN with one 2.9TB Thick Provision Lazy Zeroed disk.) But the Thick Provision Lazy Zeroed disk itself is not fully used inside the guest system. What should I do, or is this normal practice?
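To see the ESXi-side view of how much of each datastore is actually allocated versus free (independent of what the array marks as red), a minimal sketch with the datastore path as a placeholder:

   # Capacity and free space of one VMFS volume as ESXi sees it
   vmkfstools -Ph /vmfs/volumes/Datastore-LUN1

   # The same overview for all mounted volumes
   df -h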
