Over the last few weeks I have been busy upgrading and rebuilding my home lab. Now I want to share with you in detail what it looks like.
This blog post will cover the following topics:
- Hardware Overview
- Network Overview
- Operating System Overview
- Installation and Configuration
- Future Additions
The basic idea behind the lab is to have a separate basic infrastructure on one host that is capable of managing one or more Hyper-V hosts in addition to one or more ESXi hosts.
The first system in my lab was built with these components:
- Lian Li PC-D666WRX Big-Tower
- Gigabyte Z97M-D3H, Intel Z97 Mainboard – Socket 1150
- Intel Core i7-4770K 3.5 GHz (Haswell) Socket 1150
- 2x Corsair Vengeance LP Series Black DDR3-1600, CL9 – 16 GB Kit
- Samsung SH-224DB 5.25-inch SATA DVD-Burner
- Corsair RM Series RM650 – 650 Watt
- Cooler Master V8 CPU-Cooler – Vapor Chamber
- Western Digital Red, SATA 6G, IntelliPower, 3.5-inch – 2 TB
- InLine 2x Gigabit Network-Card, PCIe x1 incl. Low Profile
- 2x Samsung 850 EVO Series 2.5-inch SSD, SATA 6G – 120 GB
The monitor is a Samsung S32D850T 31.5-inch LED monitor. It is really huge, and the best feature is that you can display two picture sources side by side on it. It has a resolution of 2560 x 1440 pixels (WQHD). In my opinion it is the best monitor I've ever had.
The mouse is a Mad Catz R.A.T. 7 gaming mouse with 6400 dpi. I love it because you can swap the ergonomic parts so it fits perfectly in my hand. I wanted a wired mouse because batteries always die at the most inconvenient moments.
The keyboard is a SteelSeries 6Gv2. I love the feedback of the keys because they feel like the old server keyboards from 15 years ago.
Some details about the tower
The tower supports two systems under one cover. I find this pretty cool because it looks much tidier than using two separate cases in the room. You can install E-ATX, ATX, and Micro-ATX mainboards on one side and Micro-ATX and Mini-ITX on the other.
Front removed with a view into the HDD racks.
There are 6 system fans included in the tower, and I use three of them with the first system. Each of the HDD racks supports three 3.5″ drives and two 2.5″ drives. Additionally, you can put two 2.5″ drives on the floor of each side. This is what it looks like from the back:
At the top is room for another 6 optional 120mm fans and there are four USB 3.0 Ports with HD Audio per system available.
Take a look inside. You can see that I have enough room for further expansion of the system.
The system is connected to the DiskStation via two 1 Gb ports.
Synology DiskStation DS2415+
The Diskstation is equipped with the following drives:
- 6x Seagate Desktop HDD, SATA 6G, 7200 RPM, 3.5-inch – 3 TB
- 2x Samsung 850 PRO Series SSD, SATA 6G – 256 GB
The two SSDs are used for read-and-write caching. The HDDs are configured in RAID 6. The remaining four slots hold four Seagate Barracuda 2 TB HDDs for source files and documents, configured in RAID 5.
This is the back of the DiskStation. Two ports are used for the load-balanced iSCSI connections to the Hyper-V host. One is connected to the corporate network through the Cisco Meraki box, and the other one is connected to my home network.
All components are also connected to a Cisco Meraki Z1 with at least one Ethernet port each. This way I can access my lab from the office and vice versa through a private VPN connection. If you haven't heard about Cisco Meraki cloud-managed network components, you can find more information here.
Operating System Overview
The base installation of the Windows Server 2012 R2 Hyper-V host is done on the two 120 GB Samsung SSDs in a RAID 1 configuration. I was using VMware Fusion before switching to Hyper-V. The main reason I switched is the memory handling. In VMware Fusion you assign a fixed amount of memory to each VM, e.g. 2048 MB RAM. In Hyper-V you can start a machine with 512 MB of startup RAM and tell it not to use more than a given maximum. This leaves much more RAM available in my lab. I have 32 GB RAM in my first machine and I can easily run 20 VMs.
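As a rough sketch, this is how Dynamic Memory can be configured per VM via PowerShell on the Hyper-V host (the VM name and memory sizes below are placeholders, not my actual values):

```powershell
# Enable Dynamic Memory on a lab VM: start small, let Hyper-V
# grow the allocation on demand up to the maximum.
Set-VM -VMName "LAB-DC01" `
    -DynamicMemory `
    -MemoryStartupBytes 512MB `
    -MemoryMinimumBytes 512MB `
    -MemoryMaximumBytes 4GB
```

With limits like these, idle VMs fall back toward the minimum, which is exactly why so many machines fit into 32 GB of host RAM.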
Installation and Configuration
It took me some time to figure out how to configure iSCSI with MPIO and the network card settings to get a performance that is acceptable for me.
iSCSI and MPIO
How iSCSI and MPIO should be configured can be found here: https://www.synology.com/de-de/knowledgebase/tutorials/552
My first mistake was not placing the Synology adapters and the adapters of my Hyper-V host in different subnets. This forum post helped me with the solution: http://forum.synology.com/wiki/index.php/How_to_use_the_iSCSI_Target_Service_on_the_Synology_DiskStation
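The host-side setup roughly follows these steps (a sketch only; the target IQN and the portal addresses are placeholders, and the two portals sit in the two separate subnets mentioned above):

```powershell
# Install MPIO and let the Microsoft DSM claim iSCSI paths
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# One target portal per subnet, matching the two Synology interfaces
New-IscsiTargetPortal -TargetPortalAddress 10.0.1.10
New-IscsiTargetPortal -TargetPortalAddress 10.0.2.10

# Connect the same target once per path, with multipath enabled
Connect-IscsiTarget -NodeAddress "iqn.2000-01.com.synology:DS2415.Target-1" `
    -TargetPortalAddress 10.0.1.10 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress "iqn.2000-01.com.synology:DS2415.Target-1" `
    -TargetPortalAddress 10.0.2.10 -IsMultipathEnabled $true -IsPersistent $true
```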
Receive Side Scaling and TCP Offload
Performance was bad during the first tests. Disabling Receive Side Scaling and TCP offload resolved the performance issues. You can find more information on that topic here: https://social.technet.microsoft.com/Forums/en-US/6754b837-f96d-42c4-b22f-292756c83ea4/iscsi-connections-at-startup?forum=winserverfiles
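For reference, this is roughly what the change looks like in PowerShell (the adapter names are placeholders for the two iSCSI NICs):

```powershell
# Disable Receive Side Scaling on the iSCSI adapters
Disable-NetAdapterRss -Name "iSCSI1"
Disable-NetAdapterRss -Name "iSCSI2"

# Disable TCP Chimney offload globally on the host
Set-NetOffloadGlobalSetting -Chimney Disabled
```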
Issues with XenApp 7.6 and XenDesktop 7.6 VDA installations
Because I was using VMware in my lab before, this error message was new to me.
I created all the VMs as Generation 2 machines. This led to the problem that the installation of a VDA failed, because Secure Boot is enabled by default in this configuration (more information about Secure Boot can be found here: http://blogs.technet.com/b/jhoward/archive/2013/11/01/hyper-v-generation-2-virtual-machines-part-6.aspx). After disabling Secure Boot for the VDA VMs, the installation worked as expected. You can change the setting via PowerShell:
Set-VMFirmware -VMName "<VDA VM name>" -EnableSecureBoot Off
For more information about the problem visit also: http://support.citrix.com/article/CTX137731
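You can verify the change afterwards (again, the VM name here is just a placeholder):

```powershell
# Show the current Secure Boot state of the VM's firmware
Get-VMFirmware -VMName "<VDA VM name>" | Select-Object VMName, SecureBoot
```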
What about Performance?
I ran an Iometer test based on a blog post Jim Moyle wrote on the Atlantis Computing website, so you can easily compare your home-lab performance to mine. You can find the blog post here: https://community.atlantiscomputing.com/blog/Atlantis/August-2013/How-to-use-Iometer-to-Simulate-a-Desktop-Workload.aspx
I think the numbers are not that bad.
Jump Desktop is my preferred app to access remote desktops. I created connections to all lab servers. See for yourself:
I will add one Raspberry Pi 2 as an admin client with Windows 10 and all administrative tools installed on it.
The second half of the Lian Li housing will be used for the following configuration (or similar):
- 4x Corsair Vengeance LPX Series DDR4-2400, CL14 – 32 GB Kit
- InLine 2x Gigabit Network-Card, PCIe x2 incl. Low Profile
- 2x Samsung 850 EVO Series 2.5-inch SSD, SATA 6G – 120 GB
- 2x Intel Xeon E5-2808 V3 1.8 GHz (Haswell-EP) Socket 2011-v3 – boxed
- Intel DB32800CW2, Intel C612 Mainboard – Dual Socket 2011-v3
- 2x be quiet! Dark Rock Pro 3 CPU-Cooler
Synology DiskStation DS2415+
You might ask yourself why I didn't choose a DX1215 expansion unit for the DS2415+. This is because of the missing network ports on the expansion unit; I want to connect the Hyper-V host – like the first system – via load-balanced 1 Gb ports. Some disks need to be in there as well:
- 4x Western Digital Red Pro, SATA 6G, 7,200 RPM, 3.5-inch – 2 TB
- 2x Samsung 850 PRO Series SSD, SATA 6G – 256 GB
I am also thinking about adding three Intel NUCs in a similar case, as Ruben did in his home lab. If you haven't seen Ruben's home lab, you should take a look. It really looks awesome.
I hope this was interesting for you and that it gives you some ideas for your own home lab.