
Home Lab – 2016 / Current

At the end of 2015, it was time to replace my old home lab with something new. The old lab was based upon two HPE ProLiant N54L MicroServers with local storage running VMware vSphere 5.5. It didn't have enough memory and the hosts were getting old (approximately three years), which was starting to limit my possibilities.

After a lot of googling and reading reviews and blogs, I came up with the following approach. It is a very safe, traditional approach with proven technology: a two-host ESXi cluster with an NFS storage device. The compute is based upon two HPE ProLiant servers, both running VMware ESXi 6.0 (currently VMware ESXi 6.5, as of 2017). They are managed by a VMware vCenter Server running on top of the two hosts. The storage is based upon a Synology DS1515+ with four Samsung SSDs. I have created two volumes, each using RAID 1, for redundancy and for the ability to easily enlarge the disk space by buying just two new SSDs instead of three or four. Having two datastores also satisfies vSphere HA, which uses datastore heartbeats to identify problems with an ESXi host.
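From the ESXi side, mounting the two NFS volumes comes down to two esxcli commands per host. A minimal sketch (the NAS IP address, export paths and datastore labels below are illustrative placeholders, not my actual values):

```shell
# Mount the two Synology NFS exports as datastores on each ESXi host.
# NAS IP, export paths and datastore labels are assumed example values.
esxcli storage nfs add --host=192.168.1.50 --share=/volume1/datastore01 --volume-name=SYN-DS01
esxcli storage nfs add --host=192.168.1.50 --share=/volume2/datastore02 --volume-name=SYN-DS02

# Verify both mounts; two datastores also give vSphere HA the
# datastore heartbeats it uses for host isolation detection.
esxcli storage nfs list
```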

Overview – Compute and Storage:

Hypervisor 01 (HPE ProLiant ML10 v2)
  CPU: Intel® Pentium® Processor G3240
  Memory: 32 GB DDR3 ECC (4x 8 GB UDIMM)
  Storage: SATADOM: Delock 54655 16 GB
  Storage controller: HPE Dynamic Smart Array B120i Controller (enabled in AHCI passthrough)
  Network cards: HPE Ethernet 1Gb 2-port 332i Adapter (onboard)
  Expansion cards: HPE Ethernet 1Gb 4-port 331T Adapter

Hypervisor 02 (HPE ProLiant ML10 v2)
  CPU: Intel® Pentium® Processor G3240
  Memory: 32 GB DDR3 ECC (4x 8 GB UDIMM)
  Storage: USB/SDHC: Kingston 16 GB
  Storage controller: HPE Dynamic Smart Array B120i Controller (disabled)
  Network cards: HPE Ethernet 1Gb 2-port 332i Adapter (onboard)
  Expansion cards: HPE Ethernet 1Gb 4-port 331T Adapter

Storage Device (Synology DS1515+)
  CPU: Intel Atom® Processor C2538
  Memory: 6 GB DDR3 (2 GB + 4 GB)
  Storage: 4x Samsung SSD 850 EVO 500 GB
  Storage controller: Onboard, 2x RAID 1 volume (2x SSD each)
  Network cards: 4x RJ-45 1 GbE LAN port
  Expansion cards: -

Overview – Network:

Because this lab environment has multiple hosts with multiple NICs, a switch upgrade was required. I bought an HP 1810-24G v2 Switch (J9803A): a fully managed 24-port rack switch that supports LACP and VLANs and has no noisy cooling fan.

Overview – Power:

In case of a power failure or lightning strike, the equipment needs to be protected. I bought an APC Smart-UPS 750VA USB RM 1U 230V, a small rack-mountable UPS.

Overview – Rack:

Since 2008, all my home labs have been housed in a 19″ network rack, but that rack was limited in depth. Therefore, it was time to buy a modern server rack. After some searching on a secondhand website, I found an HP 10622 G2 (22U) server rack, the smallest rack in the HP G2 series. It has enough space, can house full-depth rack servers, and offers many other nice features such as wheels, power distribution units and easily removable bezels.

Product images:

APC Smart-UPS 750VA USB RM 1U 230V
HP 1810-24G v2 Switch (J9803A)
HP 10622 G2 (22U)
HPE ProLiant ML10 v2
Synology DS1515+

Page Update 2018:

In 2018, it was time to make some changes to the lab setup. The lab environment is still performing as it should, so there is no reason to replace the main components. The main reasons for changing the current design and some components were:

  • The Synology DS1515+ was experiencing health problems. This appeared to be related to the well-known Intel Atom C2000 defect: when the problem occurs, the Synology NAS dies completely and is unrecoverable.
  • I redesigned the storage protocol. I changed the protocol from NFS to iSCSI because of the following reasons:
    • VMFS6 is able to automatically reclaim storage (UNMAP). Because all my virtual machines are thin-provisioned, this reduced datastore usage by 30%.
    • I can now have network redundancy for the Synology NAS across multiple switches instead of a single one. This was made possible by the switch upgrade listed below: two LACP bonds, each with two 1-gigabit NICs.
    • I have far more metrics at my disposal for monitoring iSCSI storage. Compared to NFS, the monitoring advantages of iSCSI are huge: read & write IOPS, read & write latency, read & write I/O size, queue depth and network latency information, all gathered via SNMP monitoring.
    • No additional ESXi installable bundle is required for leveraging VMware VAAI.
    • In my opinion, it looks like Synology is pushing customers toward the iSCSI storage protocol.
  • The HP 1810-24G v2 Switch (J9803A) has been replaced by two Netgear GS724T v4 switches. I just love those Netgear switches: low power usage, layer 3 capabilities and a lifetime warranty.
  • One of the ESXi hosts had two SD card failures in a short time, so that host was upgraded from SD card to SATADOM (a video can be found on my YouTube channel).
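The NFS-to-iSCSI switchover can be sketched with the standard esxcli workflow. A minimal sketch per host (the adapter name, target address and datastore label are assumed example values, not my actual configuration):

```shell
# Enable the software iSCSI adapter and point it at the Synology target.
# vmhba64, the target IP:port and the datastore label are illustrative.
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.50:3260
esxcli storage core adapter rescan --adapter=vmhba64

# VMFS6 reclaims dead space automatically on thin-provisioned VMs;
# reclamation can also be triggered by hand on a datastore:
esxcli storage vmfs unmap --volume-label=SYN-ISCSI01
```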

Lab Design 2018:

I have also added the physical design diagrams to this page to explain the implementation.