Background
At the end of 2015, it was time to replace my old home lab with something new. The previous lab was based upon two HPE ProLiant N54L MicroServers with local storage running VMware vSphere 5.5 (not listed on this blog). The old home lab didn't have enough memory and the hosts were getting old (approximately three years), which was starting to limit my possibilities.
After a lot of googling and reading reviews and blogs, I settled on a very safe, traditional approach with proven technology: a two-host ESXi cluster with an NFS storage device. The compute layer is based upon two HPE ProLiant servers, both running VMware ESXi 6.0 (upgraded to VMware ESXi 6.5 in 2017). They are managed by a VMware vCenter Server running on top of the two hosts. The storage is based upon a Synology DS1515+ with four Samsung SSDs, configured as two RAID 1 volumes. Two volumes give me redundancy and make it easy to enlarge the disk space by buying just two new SSDs instead of three or four. Having two datastores also satisfies vSphere HA, which uses them for datastore heartbeating to identify problems with an ESXi host.
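Mounting the two NFS exports as datastores comes down to a couple of esxcli calls per host. Here is a minimal sketch; the NAS hostname, export paths, and datastore labels are hypothetical:

```
# Run once per ESXi host; NAS address, shares, and labels below are examples.
esxcli storage nfs add --host=synology.lab.local --share=/volume1/nfs-ds01 --volume-name=nfs-ds01
esxcli storage nfs add --host=synology.lab.local --share=/volume2/nfs-ds02 --volume-name=nfs-ds02

# Confirm both datastores are mounted and accessible.
esxcli storage nfs list
```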
Compute and Storage
|  | ESXi host 1 | ESXi host 2 | Storage |
| --- | --- | --- | --- |
| Manufacturer | HPE | HPE | Synology |
| Model | ProLiant ML10 v2 | ProLiant ML10 v2 | DS1618+ |
| CPU | Intel® Core™ i3-4170 Processor | Intel® Core™ i3-4170 Processor | Intel Atom® Processor C3538 |
| Memory | 32 GB DDR3 ECC (4x 8 GB UDIMM) | 32 GB DDR3 ECC (4x 8 GB UDIMM) | 32 GB DDR4 |
| Storage | SATA DOM: Delock 54655 16 GB | SATA DOM: Delock 54655 16 GB | 2x Samsung SSD 850 EVO 500 GB, 2x Samsung SSD 860 EVO 500 GB, 2x Samsung SSD 850 EVO 500 GB |
| Storage controller | HPE Dynamic Smart Array B120i Controller (enabled in AHCI passthrough) | HPE Dynamic Smart Array B120i Controller (enabled in AHCI passthrough) | Onboard – 3x RAID 1 (2x SSD each) |
| Network cards | HPE Ethernet 1Gb 2-port 332i Adapter (onboard) | HPE Ethernet 1Gb 2-port 332i Adapter (onboard) | 4x RJ-45 1 GbE LAN ports |
| Expansion cards | HPE Ethernet 1Gb 4-port 331T Adapter | HPE Ethernet 1Gb 4-port 331T Adapter | – |
Network
Because this lab environment has multiple hosts with multiple NICs, a switch upgrade was required. I bought an HP 1810-24G v2 Switch (J9803A): a fully managed 24-port rack switch that supports LACP and VLANs and has no noisy cooling fan.
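With VLANs defined on the switch, the matching configuration on each host is just a tagged port group on the standard vSwitch. A minimal sketch, where the port group name and VLAN ID are hypothetical:

```
# Create a port group on vSwitch0 and tag it with VLAN 20 (name and ID are examples).
esxcli network vswitch standard portgroup add --portgroup-name=Lab-VLAN20 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=Lab-VLAN20 --vlan-id=20
```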
Power
In case of a power failure or a lightning strike, the equipment needs protected power. I bought an APC Smart-UPS 750VA USB RM 1U 230V: a small, rack-mountable UPS.
Rack
Since 2008, all my home labs have been housed in a 19″ network rack, but that rack was limited in depth. It was time to buy a modern server rack. After some searching on a secondhand website, I found an HP 10622 G2 (22U) server rack, the smallest model in the HP G2 rack series. It has enough space and can house full-depth rack servers, and it comes with other nice features like wheels, power distribution units, and easily removable bezels.
Product images
Updates
2018
In 2018, it was time to make some changes to the lab setup. The lab environment was still performing as it should, so there was no reason to replace the main components. The main reasons for changing the current design and some components were:
- The Synology DS1515+ was experiencing health problems. It turned out to be the well-known Intel Atom C2000 issue: when it strikes, the Synology NAS dies completely and cannot be recovered.
- I redesigned the storage protocol, moving from NFS to iSCSI, for the following reasons:
- VMFS6 can automatically reclaim storage (UNMAP). Because all my virtual machines are thin-provisioned, this reduced datastore usage by 30% (see the sketch after this list).
- I can have network redundancy for the Synology NAS across multiple switches instead of a single one. This became possible with the switch upgrade listed below: two LACP bonds, each with two 1 Gigabit NICs.
- I have far more metrics at my disposal for monitoring the iSCSI storage. Compared to NFS, iSCSI exposes read & write IOPS, read & write latency, read & write I/O size, queue depth, and network latency information, all gathered through SNMP monitoring.
- No additional ESXi installable bundles are required for leveraging VMware VAAI.
- In my opinion, it looks like Synology is pushing its customers toward the iSCSI storage protocol.
- The HP 1810-24G v2 Switch (J9803A) has been replaced by two NetGear GS724T v4 switches. I just love those NetGear switches: low power usage, Layer 3 capabilities, and a lifetime warranty.
- One of the ESXi hosts had two SD card failures in a short time, so that host was upgraded from SD card to SATA DOM (a video can be found on my YouTube channel).
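To make the UNMAP, VAAI, and SNMP points above concrete, here is a minimal sketch of the corresponding checks; the datastore label and SNMP community string are hypothetical:

```
# On an ESXi host: check and set automatic space reclamation on a VMFS6 datastore.
esxcli storage vmfs reclaim config get --volume-label=iscsi-ds01
esxcli storage vmfs reclaim config set --volume-label=iscsi-ds01 --reclaim-priority=low

# On an ESXi host: list which VAAI primitives each storage device supports.
esxcli storage core device vaai status get

# From a monitoring station: query the NAS metrics over SNMP (net-snmp tools).
snmpwalk -v 2c -c public synology.lab.local
```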
I have also added the physical design diagrams to this page to explain the implementation in more detail:
2019
In 2019, I changed a couple of things in my home lab setup, mainly because of a lack of CPU power and some hardware failures:
- In both machines, I replaced the Intel® Pentium® Processor G3240 CPUs with Intel® Core™ i3-4170 CPUs. The performance increase was very noticeable for the virtual machines, while the power consumption stayed the same.
- The SD cards kept failing, so I replaced them with a Delock SATA DOM in both ESXi hosts. A blog post about the SATA DOM can be found here.
- My Synology DS1515+ fell victim to the notorious Intel Atom C2000 bug (twice) and was replaced by a Synology DS1618+.
- Upgraded VMware vSphere 6.5 to VMware vSphere 6.7 (a sketch of the host upgrade command follows below).
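For the ESXi side of that upgrade, a host without vSphere Update Manager can be moved to 6.7 with an image profile update from the VMware online depot. A minimal sketch; the profile name is an example and should be matched to the exact build you want:

```
# On each ESXi host, in maintenance mode, with the httpClient firewall rule enabled:
esxcli network firewall ruleset set --enabled=true --ruleset-id=httpClient

# Example profile name; list available ones with 'esxcli software sources profile list --depot=<url>'.
esxcli software profile update \
  --depot=https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  --profile=ESXi-6.7.0-8169922-standard
reboot
```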
That wraps up the update for 2019 so far.
2020
It was time to replace the home lab with some new hardware. The storage is reused in the next iteration, but the hosts are gone. Looking back over the last couple of years, the HPE ML10 v2 machines were pretty awesome servers, with very low power usage and no big hardware failures or stability issues.