My Homelab

In my opinion, if you want to learn or sit exams, nothing beats practising on physical equipment. Online material is great, but you really do need to get properly hands-on to fully understand the technology areas you’re interested in studying and certifying for.

My lab has grown somewhat in the past few months. Ever seen the movie Inception? Well, that’s what happened to me. An idea planted in my head around a year ago has transformed into a fully fledged, racked homelab with more power than some kit I’ve worked on professionally (I kid you not!), yet it remains energy efficient and compact.

Enough about movies, let’s get to it. My previous posts on a 2-node vSAN cluster and deploying NSX briefly mention the equipment I’m using, but I’ve never gone into much detail. This post has been accelerated somewhat by a post of mine on r/homelab gaining some traction, which led to Brian from StorageReview asking me to send in some words for his site. Well, that escalated quickly!

Before you go out and start putting things in your cart on Amazon, have a think about what you want to achieve and how much you have to spend. If you’re not careful, you’ll be well into 4 figures and wondering how you got there. Any IT project starts with customer requirements, and a homelab should be no different. If you have no end goal, you may find the goalposts moving as you go, costs increasing, or things not working as intended. My requirements were pretty simple (if expensive); I wanted the following:

  • Minimum of 6 cores per node
  • Minimum 64GB RAM
  • 10Gbit NICs
  • Small, portable and energy efficient
  • Scalable
  • Capable of running vSphere SDDC technologies such as vSAN, NSX, vRO etc…
  • …with enough capacity left over for MS and Cisco labs

This may seem a little overkill, especially for a homelab, but I do have my own reasons which I won’t delve into too deeply; in a nutshell, I wanted something reasonably powerful that I can transport around easily. There are other options, of course, such as a nested setup where you take a single host, ‘nest’ ESXi VMs on it and build a vCenter environment inside that nest. There are also the ever-popular Intel NUCs. Everyone is unique, so get what works for you.
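
For a sense of scale, here’s the quick maths on what that spec adds up to (three nodes, per the shopping list further down; keeping roughly one host’s worth of RAM free is just the usual rule of thumb for surviving a host failure or maintenance window, not a hard figure from this build):

```python
# Back-of-the-envelope totals for the spec above, across three nodes.
# The "one host's headroom" figure is a general rule of thumb, not a
# number taken from this build.
nodes = 3
cores_per_node = 6
ram_gb_per_node = 64

print(f"Raw: {nodes * cores_per_node} cores / {nodes * ram_gb_per_node} GB RAM")
print(f"With one host's headroom kept free: {(nodes - 1) * ram_gb_per_node} GB RAM")
```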

Firstly, I needed the hosts, and I chose SuperMicro SYS-E200-8D servers. They feature a 6-core Intel Xeon CPU, support 128GB of ECC RAM or 64GB of unregistered RAM, and have two 10 Gbit NICs, two 1 Gbit NICs and an IPMI management interface. They are low power and small (think thin client), so they tick all the boxes. Inside each one I can fit an M.2 (SATA or NVMe) drive as well as a 2.5″ SATA drive, so that’s the vSAN caching and capacity tiers taken care of.
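
Once they’re built, it’s worth sanity-checking that each host actually presents the cores, RAM and 10 Gbit links you paid for. Below is a minimal pyVmomi sketch of how you might do that – the vCenter hostname and credentials are placeholders, not my real details:

```python
# Minimal pyVmomi sketch: list each host's cores, RAM and physical NIC speeds.
# Hostname and credentials below are placeholders for the lab vCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only - self-signed certs
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                [vim.HostSystem], True)
for host in hosts.view:
    hw = host.summary.hardware
    print(f"{host.name}: {hw.numCpuCores} cores, "
          f"{hw.memorySize // (1024 ** 3)} GB RAM")
    for pnic in host.config.network.pnic:
        speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0  # 0 = link down
        print(f"  {pnic.device}: {speed} Mbit")

Disconnect(si)
```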

Network-wise, I’ve chosen the Ubiquiti UniFi US-16-XG switch. I went with it because I already use the UniFi range at home, so it seemed the sensible choice. It provides 4x 10 Gbit RJ45 ports and 12x 10 Gbit SFP+ ports. It doesn’t do L3 switching, so if you need that then perhaps look at another vendor. In my case I’m using NSX and trying to keep traffic going east-west, hence no requirement for L3 switching.

What I will say, however, is that although a production all-flash vSAN really needs 10 Gbit as a minimum, for a lab it’s not essential, so you could get away with a much, much cheaper 1 Gbit switch.

Storage-wise I’m using vSAN, however I would always recommend having a second device to use as a general-purpose datastore (think ISOs, somewhere to Storage vMotion VMs to, and so on). Since I already have a QNAP NAS for regular home duties, I popped in a dual-port 10 Gbit SFP+ card to provide this functionality via iSCSI; there’s a quick API sketch of hooking that up after the shopping list below. Prices for hardware vary country to country, so rather than a bill of materials, here’s the shopping list:

  • 3x SuperMicro SYS-E200-8D
  • 2x SuperMicro mounting shelves
  • 12x 16GB ECC RDIMM DDR4 2133MHz RAM
  • 3x Samsung 970 EVO 250 GB V-NAND M.2 PCI Express Solid State Drive
  • 3x Crucial CT1000MX500SSD1Z 1000 GB 2.5 Inch 7 mm SATA Solid State Drive
  • 1x UniFi US-16-XG 10G 16-Port Managed Aggregation Switch
  • Minimum of 6x Cat6 UTP cables (3x for IPMI, 3x for 10 Gbit NICs)
  • 3x 16GB USB thumb drives (for ESXi)
  • 1x Startech 9U Portable Rack
  • 1x QNAP TVS-671 NAS or alternate mass storage device
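
On the QNAP side you carve out an iSCSI LUN as normal; on the vSphere side, here’s the rough pyVmomi sketch I mentioned above for pointing each host’s software iSCSI initiator at the NAS and rescanning. The NAS IP and vCenter details are placeholders, and software iSCSI needs to be enabled on the hosts first:

```python
# Rough sketch: add the NAS as an iSCSI send target on each host, then rescan.
# Assumes software iSCSI is already enabled; all addresses are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

NAS_ISCSI_IP = "192.168.10.50"  # placeholder address for the QNAP

def add_iscsi_target(host):
    ss = host.configManager.storageSystem
    for hba in ss.storageDeviceInfo.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba):  # the software iSCSI adapter
            target = vim.host.InternetScsiHba.SendTarget(address=NAS_ISCSI_IP,
                                                         port=3260)
            ss.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device,
                                          targets=[target])
    ss.RescanAllHba()  # discover the new paths
    ss.RescanVmfs()    # and any datastores presented over them

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                [vim.HostSystem], True)
for h in hosts.view:
    add_iscsi_target(h)
Disconnect(si)
```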

This is what it looked like before I racked it all – quite untidy, but it did the job.

SuperMicro sell a nice (albeit expensive) rack mount kit for the servers. It is well made and easy to put together. Excuse the mess and the poor picture, but you get the idea of how the servers sit on the rack kit. The external PSU sits at the back, held firmly in place by some brackets.

Apologies for the picture quality… this is the finished article. I will endeavour to get some better pictures when the light in the room is more favourable.

The rack is 9U, so here’s the breakdown, from top to bottom:

  • Cable management
  • UniFi switch
  • Patch panel which has RJ45 ports on the back for easy management
  • SuperMicro shelf
  • SuperMicro shelf
  • Empty… the QNAP is not rack mountable, and I certainly don’t want to spend the money on one, as the above was not cheap (albeit bought over the course of about a year)

The floor of the rack holds the NAS as well as a small UniFi PoE switch. The rack lives upstairs and I wanted PoE so I can put a wireless access point on the ceiling, which this switch will eventually power; it also has the servers’ IPMI connections attached to it. I did have a shelf for the bottommost U, however the top of the QNAP NAS protrudes over the bottommost SuperMicro shelf – rather annoying!

I don’t have Visio at home, so my elite MS Paint skills have come into use here for a quick physical network diagram. While the diagram wouldn’t win any awards in design documentation (that said, I’ve seen worse!), it does the job of explaining how everything is connected.

I may revisit this blog and cover the logical side, but with multiple logical networks within NSX and the lack of Visio, it may prove difficult.

In an ideal world I would have multiple NICs and some form of redundancy for the port groups and vmkernel interfaces I’m running (management, vSAN, vMotion for DRS, iSCSI, logical networks, etc.), but for home use a single 10 Gbit NIC does the job well enough.
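
If you ever want to double-check which services have ended up tagged on which vmkernel interface, a quick pyVmomi sketch along these lines will print the mapping per host (same placeholder connection details as before):

```python
# Print which vmkernel interfaces carry which services (management, vMotion,
# vSAN, etc.) on each host. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                [vim.HostSystem], True)
for host in hosts.view:
    print(host.name)
    for cfg in host.config.virtualNicManagerInfo.netConfig:
        # selectedVnic holds the keys of the vmks this service is enabled on
        vmks = [v.device for v in cfg.candidateVnic if v.key in cfg.selectedVnic]
        if vmks:
            print(f"  {cfg.nicType}: {', '.join(vmks)}")
Disconnect(si)
```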

Backups are hugely important, even in a lab. Since I quite often break this lab, both deliberately and accidentally, having backups allows me to get up and running again quickly. I have Veeam installed as a VM, and it backs up to a CIFS share presented by the QNAP NAS. That way, if the vCenter environment dies or I lose the vSAN datastore, I can recover rather than rebuild from scratch, which can become somewhat tedious… note to self: develop a way to automate installing all of this, but that’s something for the future. Throughput-wise I’m seeing between 250 and 300 MB/s on backups, which I would imagine is the maximum write capability of the NAS (4x 6TB WD Red in RAID 5 plus SSD caching).
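
For a rough idea of what that throughput means for a backup window, the maths is simple enough – the 500 GB job size below is purely illustrative, not a figure from my lab:

```python
# Back-of-the-envelope backup window at the observed 250-300 MB/s.
# The 500 GB job size is illustrative, not a real figure from this lab.
def backup_minutes(size_gb, throughput_mb_s):
    return size_gb * 1024 / throughput_mb_s / 60

for rate in (250, 300):
    print(f"500 GB at {rate} MB/s -> about {backup_minutes(500, rate):.0f} minutes")
```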

Up and running, here’s my cluster summary:

While I could remove the memory reservations on the NSX VMs, it’s still clear that being limited to 32GB of RAM per node would be a hindrance; I didn’t want to spend time powering parts of the lab up and down to do different tasks.
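
If you’re curious where the reserved memory is actually going (the NSX controllers and edges being the usual suspects), something like this pyVmomi sketch will total up the per-VM reservations – again with placeholder connection details:

```python
# List VMs with memory reservations and total them up - handy for seeing how
# much of the cluster the NSX components are pinning. Connection details are
# placeholders, as in the earlier snippets.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
vms = content.viewManager.CreateContainerView(content.rootFolder,
                                              [vim.VirtualMachine], True)
total_mb = 0
for vm in vms.view:
    cfg = vm.config
    if cfg and cfg.memoryAllocation and cfg.memoryAllocation.reservation:
        print(f"{vm.name}: {cfg.memoryAllocation.reservation} MB reserved")
        total_mb += cfg.memoryAllocation.reservation
print(f"Total reserved: {total_mb} MB")
Disconnect(si)
```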

I hope this post proves useful to someone; please feel free to post any questions or feedback.