Post by MinisterofDOOM (https://forums.nicoclub.com/ministerofdoom-u16506.html) » Sun Jan 15, 2017 9:50 pm
Nice setup! Sounds like a lot of fun.
I've revised and expanded my setup quite a bit since this post. I can't get my ISP's pair-bonded DSL modem to act in proper bridge mode, meaning the benefits I would have gotten from a pfSense router (my original plan) are nullified. I picked up a Netgear AC1900 for a little better coverage across my tiny house. So that freed up some horsepower on my R810.
I have also obtained a few more (very dated) servers. I've got a Dell R300 (1U) with a desktop mobo running a Core 2 Quad with 4GB RAM, an ancient HP W520 (also 1U) with a quad core Xeon and 4GB RAM, and a Dell 2950 (2U) with a pair of 2.7GHz dual-core Xeons and 16GB RAM.
The 2950 has two 500GB SAS drives alongside four 15kRPM 2TB SAS drives, the latter of which I've deployed in a 6.5TB RAID 5 array that's serving as a network fileshare (with fewer and fewer production Linux boxen in my house, Windows fileshares are a little more straightforward if less fun and technical).
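For reference, the nominal usable space of a RAID 5 set is (n−1) × drive size, since one drive's worth of capacity goes to distributed parity; the exact figure your controller or OS reports will differ a bit depending on decimal-TB vs. binary-TiB accounting and any extra members. A quick sketch of the arithmetic:

```python
def raid5_usable_tb(num_drives: int, drive_tb: float) -> float:
    """Nominal usable capacity of a RAID 5 array: one drive's worth
    of space is consumed by distributed parity."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_tb

# Four 2TB drives -> 6TB nominal usable space
print(raid5_usable_tb(4, 2.0))
```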
The R810 is currently hosting an evaluation of Windows Server 2016 so I can spend some time learning it before we upgrade our machines from 2012 R2 at work. I'm pleased that MS improved GUI functionality while changing very little about the general server management experience compared to 2012 R2.
However, once the evaluation runs out, I'm probably going to massively repurpose this machine. Since it's a 2U chassis, it can hold a proper modern graphics card. I already have three gaming-capable machines in my office, and a Steam Link I use in the living room to stream from one of those boxes, but I think I'd like to add a GTX 980 or thereabouts to the R810 and build a moderate-level machine to provide silent, remote hosting for the Steam Link. This way we can enjoy 4-way multiplayer on separate machines, or allow for couch multiplayer from a dedicated machine, all without any fan/cooling noise in the living room.
The 2950 currently has a direct (non-ESXi) install of Windows 10 Pro for fileshare hosting. Even though this machine has plenty of horsepower to host a few additional VMs, I've dedicated virtually all its available storage to one purpose, so there's little point to virtualizing.
The R300 is pretty underpowered and only 1U, so its uses are limited. Right now it's hosting ESXi and is my "experiment with various distros and see what I can break" machine. Its tiny RAM count means I usually don't leave more than one VM running at once, but at least the ESXi environment makes it very straightforward to deploy, remove, power up, and shut down a variety of machines quickly.
I haven't used the HP for anything yet. I got it with a 250GB 7200RPM SATA HDD, and have tossed an additional 500GB 7200 HDD into it. It has no more horsepower than the R300 and no more RAM, so it'll be a bit restricted in use as well. I may install Kodi or another media server on it, and point that at the network share on the 2950 for storage. I'm also looking at setting up some cameras around the yard, and this machine might be a good one to run the DVR on.
As to your question regarding my choice of ESXi:
Honestly, I chose it because we use it extensively at work, and I wanted to get familiar with it in a safe environment.
I've heard a lot of great things about Linux KVM, though. I'm sure I'll end up playing with it on one of my boxes eventually (HyperV as well).
There are some things I really like about ESXi, though.
Firstly, for home use, the Free license is more than sufficient and has no core or socket count restrictions. It does lack support for some fancy things like vMotion (cluster-wide load balancing and automatic VM migration host-to-host) and vShield (hypervisor-level antivirus covering all hosted VMs), and of course it doesn't come with enterprise-level support. But it has everything you need to host and maintain VMs in the VMware ecosystem.
Secondly, due to its ubiquity, it is very widely supported. OVA/OVF files are readily available for a number of OSes, making deployment even easier than installing an OS on bare metal. These files come preconfigured and basically pre-installed, and just need to be loaded and booted up to run. Obviously this isn't appealing for all circumstances, but it has its benefits. One of my favorites is my ability to deploy a clean-slate install of any OS and then save an OVA. From there, I can re-deploy a fresh, ready-to-run copy of that OS in seconds without needing to install from ISO or build from source. It's really nice for scenarios where I want to see what I can break just for fun, because recovering from even big mistakes is just a matter of a few clicks. I don't use OVA for Windows since it sees frequent large-filesize updates, but most *NIX distros work just fine since their updates are generally lightweight and cumulative anyway.
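That export-then-redeploy workflow can also be driven from the command line with VMware's ovftool, in addition to the vSphere client. A sketch, with the hostname, credentials, and VM names as placeholders rather than anything from my actual setup:

```shell
# Export a configured, clean-slate VM to an OVA "golden image"
# (host and VM names are examples only)
ovftool vi://root@esxi-host/DebianClean debian-clean.ova

# Later, redeploy a fresh, ready-to-run copy from that OVA in one step
ovftool --name=DebianScratch --datastore=datastore1 \
    debian-clean.ova vi://root@esxi-host/
```

These commands need a live ESXi host to talk to, so they're a command fragment rather than something runnable standalone.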
Thirdly, it's SPECTACULARLY easy to use. You insert the disk (or flash drive), tell it to install, configure your IP for remote access, and that is it. It does everything for you. It includes the ability to mount drives on the client machine as local to the host, which means I can pull down an ISO on my desktop, log into vSphere, and then deploy that local ISO across the network to the machine. It also means you can use network drives for this, which is one of the main uses of my big RAID array. All my commonly-used ISOs and OVAs are on that drive, and I can get to it from any of my PCs, which all have vSphere. Then I just spin up a new VM, mount the ISO across the network, and start installing.
It also makes managing hardware (both virtualized and physical) easy. It includes a virtualized switching system that can handle switching and routing between physical or virtual (or both) NICs on hosted VMs. That would have been the key to making pfSense work. You can alter memory, CPU, and storage resources on-the-fly for VMs (though some guest OSes don't handle the changes well without a reboot--but that's not ESXi's fault). It's easy to get a view of what your machines are doing and how healthy they are. It even claims to support memory hot-swapping, though I have never tried it. My work environment is too downtime-sensitive to risk learning the hard way, and at home I've never had a need to replace a memory DIMM on a running host yet.
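For the virtual switching piece, a standard vSwitch and port group can be created from the ESXi shell with esxcli; the switch, uplink, and port group names here are made up for illustration:

```shell
# Create a new standard virtual switch (run in the ESXi shell)
esxcli network vswitch standard add --vswitch-name=vSwitch1

# Attach a physical NIC as an uplink
esxcli network vswitch standard uplink add \
    --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Add a port group that guest VMs (e.g. a pfSense box's LAN side)
# can attach their virtual NICs to
esxcli network vswitch standard portgroup add \
    --vswitch-name=vSwitch1 --portgroup-name=LAN
```

Like the ovftool example, this only runs on an actual ESXi host, so treat it as a configuration sketch.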
I find that ESXi's remote desktop/virtual KVM is quite nice as well, particularly once VMware Tools are installed. They provide really clean integration of keyboard and mouse functionality with the guest OS, and in a lot of ways I prefer it over even Windows' native RD client.
I'm not sure how it compares to Linux KVM in any of these regards, though, as I've only used ESXi.