Hello mates,
I know this is an old post, but I have to add something from my experience and knowledge to this and other virtualization storage and performance discussions.
1. almost every virtual environment is highly critical, sometimes even the ones in labs and at budget-constrained customers or gov (especially in recent years)...
2. there are a lot of places where we can save some money ($$$/€€€)... but always remember that the most important thing for successful virtualization is stable, fast and redundant storage. Why storage above all? Just try to disable the storage and then try to do anything... :P So storage is the most important piece. Even slow-but-stable storage will be a problem, because during critical hours it will be overloaded and perform poorly, and that destroys not only the storage performance but the performance of the vm's themselves, because VMware stalls CPU time for a vm whose storage I/O takes too long to respond. I've seen it tens of times with storage that was too slow, badly configured or badly sized; it's normal.
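To get a rough feel for how a candidate datastore behaves before you put vm's on it, a quick uncached dd run is a minimal sketch (the file path and sizes below are just examples; a real random-I/O benchmark tool like fio tells you much more, since vm workloads are mostly random):

```shell
# Minimal sequential smoke test of a storage mount (path and sizes are examples).
# This only shows sequential throughput; VM workloads are mostly random I/O,
# so treat it as a sanity check, not a benchmark.
TESTFILE=/tmp/storage_smoke.bin

# Write 64 MB and force it to disk before dd reports its rate, so the number
# reflects the storage and not just the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync

# Read it back (note: this read may still be served from cache).
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

If the numbers already look bad on an idle system, they will only get worse under the concurrent load of many vm's.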
There are a lot of different storage vendors, hw and sw, including open source and "enterprise class" products based on open source (like Nexenta or OpenFiler).
For about 3-4 years I have been testing a lot of cheap storage solutions for test environments, labs and cheap storage for small, budget-constrained gov. Hmm, what can I say? Here is what you should NOT use (as of about 2013.06):
- do NOT use Nexenta sw, even the Enterprise Edition, especially with cheap hdd/ssd. It's based on ZFS, which is really very hard to configure well and is tuned for real enterprise-class hw (and specific hw). It is really not designed for heavy random reads and simultaneous writes to the storage on anything other than specific ssd or the fastest sas disks (15k). It adds another layer that, although it can speed up some operations through the ARC and L2ARC caches, also slows your storage down (latency) when the storage is heavily loaded... :( Yes, long tests show it clearly. ZFS is a wonderful solution for real enterprise gear like ssd drives and some of the fastest hdd drives, but in my opinion it's not a good solution for today's world (where we have, for example, hw raid). Maybe in 5 years it will also be tuned for slightly cheaper hw, so it may become a solution for "cheap enterprise" like gov, but not now. It's too complicated, there are too many things you can tune, too little documentation, and it's too restrictive about not using hw raid; if you use hw raid it will work, but really slower, and then there are even more problems with how to tune it. Although it's very advanced, in my opinion it doesn't fit the requirements of virtualization solutions and just adds another place where you will have trouble... and you know that troubled storage = totally critical problem for a virtual environment. Additionally, ZFS is a good solution for passing benchmarks, it's really optimized for passing benchmarks, but the real virtual world is far from benchmarks, and that is a problem
- also do NOT use cheap or free (in most cases) iSCSI/NFS targets/sw for virtualization storage, because storage is the most important part of a virtualization platform; when your storage crashes, your whole virtualization environment, with a lot of hw servers and very many vm's, crashes with it... it's something every admin needs to know and repeat every day like a mantra
- do NOT use VMware VSA. Why? Because it's an "artificially limited toy"; I don't need to say anything more. It's just a very limited and restricted toy, not storage. You're probably better off with Debian plus some iSCSI target (maybe not IET), something a bit more stable
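If you do end up on ZFS despite the caveats above, at least check whether the ARC cache is actually helping before blaming the disks. A minimal sketch, assuming the standard ZFS-on-Linux kstat path (on Solaris/illumos the equivalent is `kstat -p -n arcstats`):

```shell
# Check ZFS ARC size and hit/miss counters (ZFS-on-Linux kstat path).
# A low hit ratio under load means the extra cache layer is mostly adding
# latency rather than removing it.
ARCSTATS=/proc/spl/kstat/zfs/arcstats
if [ -r "$ARCSTATS" ]; then
    grep -E '^(size|hits|misses) ' "$ARCSTATS"
else
    echo "no ZFS ARC stats on this box"
fi
```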
So what to use?? Hmm, as other mates said, it's not easy to find good, stable, fast and cheap storage for virtualization...
but options like these may be considered:
- Open-E DSS - a stable, interesting, not-too-expensive storage with good support
- DataCore SanSymphony - enterprise-class sw iSCSI and FC target for Windows, but VERY expensive
- StorMagic SvSAN - interesting, not expensive, but also not very fast storage (like most VSAs); but if used with a hw raid controller passed through via VMDP to the VSA, it may be fast enough (if that config is supported; I'm not sure it is)
- Nimble Storage (hw) - in my opinion a really interesting hw solution based on SSD (midrange) that is really fast (good for VDI, for example) and stable, as far as I know
- if you have 4-6k usd more, you may buy stable but not-so-fast storage and invest in an intelligent cache card like the OCZ VXL (or LXL for Linux) that dramatically speeds up storage performance
- SOHO/SMB hw solutions like Drobo, based on BeyondRAID (it's an intelligent mirror of raid5, so roughly raid 51); it will not exactly be fast and stable, but it has some support and features that may be worth considering together with a cache card
- SOHO boxes like QNAP, Synology, and maybe other cheap hw; they will be slow and may be unstable under heavy or even medium load, but used with some cache card they may be acceptable
- SMB NimbusData solutions - not very expensive (as far as I know), aimed at the mid market; a stable but not-so-fast solution that, used with a cache card, would be ideal for medium or even heavy load
- and here we have some unsupported solutions based on Linux... hmm, these solutions are in most cases unsupported or come with limited support (like RedHat or SuSe), but they may offer really fast and stable (with exceptions, of course) storage; you can also use cache cards on most Linux systems, like the OCZ LXL
- and in some cases supported solutions based on Oracle (Sun) Solaris - it's an enterprise-class system, and a really complicated one for storage, with the best zfs implementation on the market; of course it's managed from the cli and needs really expensive hw to be supported
- Aberdeen AberNAS - I don't know this solution, but it's a hw solution based on Enterprise Linux, so it's a box with linux, with limited support, on midrange hw; looks ok for a cheap, stable, fast solution (I think)
- personally, for my lab and data, I use Debian with the Linux SCSI Target, serving iSCSI and FC targets at once and working fast and stable (on an Areca card passed via VMDP to the VSA running that Debian), but it's a totally unsupported solution, so consider it only for labs
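For reference, a setup like that can be sketched with the LIO targetcli tool on Debian. This is a configuration sketch only: the block device, IQNs and initiator name below are made-up examples, and the whole thing is as unsupported as the rest of the bullet above.

```shell
# Hypothetical LIO iSCSI target config on Debian (device, IQN and initiator
# names are examples only -- adjust to your own hw and hosts).

# Expose a raw block device as a backstore
targetcli /backstores/block create name=vmstore dev=/dev/sdb

# Create the iSCSI target (a default tpg1 portal group is created with it)
targetcli /iscsi create iqn.2013-06.lab.local:vmstore

# Map the backstore as a LUN under that target
targetcli /iscsi/iqn.2013-06.lab.local:vmstore/tpg1/luns \
    create /backstores/block/vmstore

# Allow one ESXi initiator to log in
targetcli /iscsi/iqn.2013-06.lab.local:vmstore/tpg1/acls \
    create iqn.1998-01.com.vmware:esxi01

# Persist the configuration across reboots
targetcli saveconfig
```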
p.s. - I haven't talked about HA/FT of arrays/controllers; I don't have time today, but for a production environment you ALWAYS HAVE TO use a solution with 2 controllers or a synchronous mirror. Alternatively a fast async mirror could do (if you don't have any db on the VI, but that's rather unlikely), but then you have to accept losing some data when the primary storage crashes, which in 90% of cases means a restore from backup... :(
Mates, I hope I've added some ideas to the general search for a cheap storage solution for our virtual infrastructure, and to the general knowledge about storage for virtualization.
kind regards
NTShad0w