VMware Communities: vSphere™ Storage

Using local disk storage; each guest OS its own (physical) disk?


Hello,

 

I'm trying to work out how to deploy my first VMware host. At this point I'm stuck on how to organize the storage. I've been reading several documents, but they only confuse me more. Some pointers in the right direction would be great. I'm talking about a home/hobby environment, so I'm definitely not looking for a high-performance solution.

 

Currently I have a 24/7 Linux-based server that acts as a file server, backup server and e-mail server, and it hosts several websites. For mass storage I have a 4-disk RAID 5 array with LVM. The idea is that I can expand the storage by creating a second RAID 5 array and adding it to the existing LVM volume group. The Linux OS sits on a 2-disk mirror. Both RAIDs are software-based; I chose this for hardware independence in case of a hardware failure. The performance is good enough, but during simultaneous disk I/O it sometimes becomes a bottleneck. It would be nice if I could improve this.
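
Roughly what I mean by that expansion, with the device, volume group and logical volume names just placeholders and an ext4 filesystem assumed:

# create the second 4-disk software RAID 5 array
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sde /dev/sdf /dev/sdg /dev/sdh
# hand the new array to LVM and grow the existing volume group and logical volume
pvcreate /dev/md1
vgextend vg_data /dev/md1
lvextend -l +100%FREE /dev/vg_data/lv_storage
# grow the filesystem (ext4 here; XFS would use xfs_growfs instead)
resize2fs /dev/vg_data/lv_storage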

 

I have a PVR system for TV recording and a workstation that I use for photo/video editing, office tasks, etc. Both are Windows 7 machines and have Samba access to all the drives on the Linux server, as does my laptop via VPN. I also have a mini PC with Windows 7 that runs 24/7 with home automation software.

 

I'd like to host all 4 systems on the VMware host. The “old” Linux server will be upgraded with a new motherboard with two 8-core Opteron CPUs and converted into the VMware host. This host can fit 6 SSDs and 16 SATA drives. 8 of the SATA drives will be used for the 2x4-disk RAID 5 mass storage. The new motherboard comes with an 8-port hardware RAID controller. I'm considering using it to improve disk performance, but then I'd be in trouble in case of a controller failure.
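
Before deciding on the hardware RAID, I plan to check from the ESXi shell (assuming I enable shell/SSH access) how the controller actually presents the disks to ESXi, as individual devices or as one big logical volume:

# list the storage adapters and the devices they expose
esxcli storage core adapter list
esxcli storage core device list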

 

What would be the best way to organize the storage? I thought about keeping the mass storage as it is and kind of “moving” it along with the Linux server to the VMware host (I am planning to do a fresh installation of each guest OS). I also thought that each guest OS should get its own physical (SSD) drive, but I'm learning this is not best practice? Should I instead create one big datastore and create virtual disks on it for each guest OS and for the mass storage?
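
To make sure I understand the two options, this is roughly how I picture them from the ESXi shell, with the datastore, VM and device names just examples:

# option A: map a whole physical disk to one guest as a raw device mapping
vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/datastore1/linuxvm/linuxvm_rdm.vmdk
# option B: one big VMFS datastore with a virtual disk per guest
vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/linuxvm/linuxvm_data.vmdk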

 

The guest OS with the home automation software is idle basically 99% of the time. The PVR is probably somewhere in that range too, though it can generate some heavy disk I/O at times. So I'm thinking that I could perhaps use image files for these OSes? I'm also planning to have a development Linux guest, which could run from an image as well.
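
If image files are the way to go for these mostly idle guests, I assume thin-provisioned virtual disks would keep the real space usage down; I picture checking that with something like this (paths are just examples):

# provisioned size vs. blocks actually allocated on the datastore
ls -lh /vmfs/volumes/ssd_datastore/homeauto/homeauto-flat.vmdk
du -h /vmfs/volumes/ssd_datastore/homeauto/homeauto-flat.vmdk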

 

Where can I best install the hypervisor? Does this affect the overall (disk) performance a lot? One thing I'd like to keep in mind is to build a somewhat quiet and energy-friendly server; I prefer not to have 16 SATA disks spinning 24/7 when it is not necessary. So my idea was to use SSDs for the continuously running disks and SATA disks for mass storage and backups. (By the way, the backups are also mirrored to an offsite location.)

 

Thanks for any help and info!

 

Robbert

