
Homelab (Sysadmin & Networking)

I have always been interested in personal upskilling and learning new technologies, so I decided to build my own home lab outside of work hours. Through this homelab, I improved existing skills such as Linux, and learnt new technologies along the way: Docker and containerisation, virtualisation (hypervisors), domain controllers and Active Directory, failover nodes, self-hosted services, and more.

In this section, I will be demonstrating the configuration of my home lab and what I was able to achieve through this.

Networking Side

As part of my homelab, I invested in UniFi equipment. Not only does this let me dive deeper into networking outside of my work environment, it also lets me test out ideas at home before deploying them at work, which reduces production downtime if things go wrong.

My homelab networking configuration is shown below:

Use Cases

  • UniFi USG Pro 3: This is a router and security gateway built into one. It's compact and fanless, which is perfect for my lab since space is an issue for me.

  • UniFi USW Flex Mini: This is a 4-port switch that I use to connect my devices, and since it's managed, I can tinker with VLANs and more advanced features.

  • Raspberry Pi 3B+: This is used as my local DNS resolver and VPN server (more on this below).

  • UniFi Cloud Key: This is used as a controller for my UniFi equipment. It also allows me to manage my home network remotely.

  • UniFi PoE Injector: Since my switch does not support PoE, I needed a PoE injector to power my access point (more on this below).

  • UniFi UAP AC PRO: This serves as the wireless access point for my house.


Virtualisation Side

As part of my homelab, I invested in equipment to host my virtualisation software. Through this, I was able to self-host services for my household to take advantage of, and to explore new technologies through self-learning.

My virtualisation configuration is shown below.

Use Cases

I had the intention of keeping my home lab running 24/7, so power consumption was a big factor when purchasing equipment. Since my use cases did not require much processing power and I did not need the expandability offered by larger computers, I opted for a small form factor PC, which fits my home space perfectly.

  • Acer Veriton N4630g: This is a tiny PC with an i5-4460T quad-core processor. I upgraded the RAM from 8GB to 16GB with spare parts I had lying around, and added a 128GB SanDisk SSD (the SSD is where Proxmox is installed).

  • Western Digital Black 2.5" 500GB HDD: This is the drive I will use to store my virtual machines and containers. 

  • Connectivity: This computer is hard-wired to my switch.

Proxmox - Reason for choosing: There are many hypervisor platforms out there, the most popular being VMware ESXi and Proxmox. It is also common for users to install Windows Server bare metal and enable the Hyper-V role. My reasoning for choosing Proxmox is that, since it's based on Debian, I feel familiar navigating around the terminal and troubleshooting. I also enjoy the user interface and the fact that it can run on low-end hardware (even on an Intel Atom!). VMware ESXi is a more mature platform meant for production use, but unfortunately my hardware does not meet its minimum requirements.

Reason for choosing a bare-metal installation rather than a virtual machine: This may confuse some of you. A bare-metal installation is where you install the virtualisation software directly on the computer, turning it into a virtual machine server. Software such as Proxmox and VMware ESXi is installed bare metal. It utilises the entire disk, and there is no option to dual boot with other operating systems (as far as I know).


A hosted virtual machine installation, such as VMware Workstation or Oracle VirtualBox, is installed on top of an already existing operating system, so resources such as processor, RAM, and storage are shared with the host.

I chose a bare-metal hypervisor installation because it allows me to fully utilise the hardware I have available, without a full-blown operating system such as Windows consuming a chunk of it. I also like that with bare metal I access the server through the browser via an IP address, rather than through a remote desktop as with a hosted installation.


In this section, I will be demonstrating the actions I took to achieve the desired outcomes of my homelab projects.

Networking and Routing: The first thing I did was create a network for my homelab services to reside in. This ensures that I don't interfere with other devices on my network while experimenting. I then connected my hypervisor server to port 4 on my switch and routed my new network through this port profile.

Proxmox - Volumes

Proxmox Setup - Volumes: As stated in my overview of the Acer mini PC, I have two drives: one dedicated purely to the Proxmox operating system, and a 500GB HDD dedicated to virtual machine storage. Here's how I laid this out logically.

So let's break this down. hdd-img is a mount point to my 500GB HDD; all my VMs and containers are stored via this mount point. local is where I store ISO files for VMs and container templates, locally within the machine. local-lvm is a virtual volume which I have not set up yet; in the future, I can expand this volume and utilise it for additional projects. nfsstorage is a mount point to my OpenMediaVault storage solution. This is mainly used by my Plex server, but it will also be used for Proxmox backups once I upgrade the drive to a larger capacity.
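This layout corresponds to entries in Proxmox's storage configuration. A minimal sketch of what /etc/pve/storage.cfg could look like for this setup — the paths, thinpool/volume-group names, server placeholder, and export path are illustrative assumptions, not my exact values:

```
dir: local
        path /var/lib/vz
        content iso,vztmpl

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

dir: hdd-img
        path /mnt/pve/hdd-img
        content images,rootdir

nfs: nfsstorage
        path /mnt/pve/nfsstorage
        server <omv-ip>
        export /export/media
        content backup
```

Running pvesm status on the host lists these storages along with their type and usage.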

DNS Server

Raspberry Pi Services - Pi-Hole/VPN: I use a Raspberry Pi as the primary DNS server for my entire network through Pi-Hole, a popular DNS-based ad-blocking service which also caches DNS responses to improve website loading times. I also utilise PiVPN with DuckDNS (a free dynamic DNS service), which uses OpenVPN to allow secure access to my home network from anywhere.

Unbound: In an attempt to improve the privacy of my network, I set up my own recursive DNS server alongside Pi-Hole. Unbound is my own DNS server that goes out and "finds" the IP address of any domain I am looking for. By querying it myself rather than asking Google, I leave less of a trace of my data for others to exploit. To simplify: your browsing history can, at least partially, be known by whichever company controls the external DNS server you use, since they can, in theory, log every external DNS request your machines make. Unbound takes that middleman out of the equation, turning the Pi-Hole host itself into one of those servers (but only for requests inside your network, as far as I know), by talking directly to the root DNS servers and caching the results.
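Unbound's side of this is one small config file. A sketch of the key options, based on the recommended Pi-Hole + Unbound setup (the path is the drop-in directory the Debian/Raspbian package reads; treat the values as a starting point):

```
# /etc/unbound/unbound.conf.d/pi-hole.conf
server:
    # listen only on localhost; Pi-Hole forwards queries to this port
    interface:
    port: 5335
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    # hardening: require glue records, drop answers with stripped DNSSEC data
    harden-glue: yes
    harden-dnssec-stripped: yes
    # keep UDP answers small to avoid fragmentation issues
    edns-buffer-size: 1232
    # refresh popular cache entries before they expire
    prefetch: yes
```

Pi-Hole's upstream DNS server is then set to — and nothing else — so every query resolves through Unbound.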

Diagrams: traditional Pi-Hole ad-blocking installation, and Pi-Hole ad-blocking installation with Unbound.

Addressing the Raspberry Pi's weak points: Since my Raspberry Pi has all of my network's DNS traffic routed to it, I need to ensure that it is reliable. Even though I did set up a high-availability (HA) cluster (more on this below), I would like to increase the reliability of this device.

Log2Ram: This script redirects the background logs produced by Raspbian and other services to RAM instead of constantly writing them to the SD card, which increases the life of the SD card. A cron job transfers those logs from RAM to the SD card at the end of each day.


RemoteIOT: Since the SD card is one of a Raspberry Pi's weak points, it is beneficial to monitor the SD card's health (even though Log2Ram increases its lifespan, there is a chance it will still fail over time). RemoteIOT is a monitoring service installed on the Raspberry Pi that connects to a web interface providing detailed information about the device. It has many features, such as cloud alerts triggered by an action, but I only use it to monitor SD card health.

Proxmox - LXC Containers

Proxmox - My Virtual Machines/Containers: In this section, I will provide information about the services running on my hypervisor. A list of the services I am currently running is shown below.

103 (Pi-Hole-Backup): This LXC container runs Ubuntu Server and acts as my secondary DNS server in case my Raspberry Pi disconnects, ensuring that if that ever happens, my internet connection is not interrupted. This is also called an HA (high availability) cluster. Additionally, I use a service called Gravity Sync to ensure that changes I make on the primary node are replicated on the secondary node: a cron job checks every hour for new changes on the primary and, if there are any, pulls them across. I then set both instances as custom DNS servers in my network settings.
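The hourly check is just a crontab entry on the secondary node. A sketch — the exact command and path depend on the Gravity Sync version (older releases shipped a gravity-sync.sh script rather than a gravity-sync command):

```
# m  h  dom mon dow  command
  0  *  *   *   *    /usr/local/bin/gravity-sync pull >/dev/null 2>&1
```

The pull direction matters here: the secondary only ever fetches from the primary, so accidental edits on the backup node don't propagate backwards.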


105 (Docker): This LXC container runs a lightweight version of Debian 10 called TurnKey Core. I use this container to learn more about containerisation and self-hosted services, specifically using Docker and Docker Compose through a GUI-based Docker management service. Through this, I have deployed self-hosted services such as Heimdall (an application dashboard), Uptime Kuma (a network monitoring tool), a speed test tracker, and BookStack (a platform for organising and storing information). I have also attached a macvlan network to each Docker container (a network driver that assigns a MAC address to each container's virtual network interface).
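For reference, a macvlan network in Docker Compose looks roughly like this. The subnet, gateway, address, and parent interface below are assumptions for illustration; the Heimdall image name is the commonly used linuxserver.io one:

```yaml
services:
  heimdall:
    image: linuxserver/heimdall
    networks:
      lan:
        ipv4_address:    # the container's own LAN IP
networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                # host NIC the containers attach through
    ipam:
      config:
        - subnet:
          gateway:
```

One quirk of macvlan worth knowing: by default, the host itself cannot reach its containers' macvlan addresses directly, even though the rest of the LAN can.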

Plex Media Server

107 (Plex): This LXC container runs Ubuntu Server. I use it to host my Plex server for viewing old copied DVD files, and to learn about bind-mounting local NFS shares (from OpenMediaVault). I also learnt how to create port forwarding rules to allow access to my Plex server from anywhere without a VPN.
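Bind-mounting host storage into an LXC container in Proxmox is a single pct command on the host. A sketch with assumed paths (107 is the container ID from this list; the source and target directories are examples):

```shell
# expose the NFS-backed media directory to the Plex container as /mnt/media
pct set 107 -mp0 /mnt/pve/nfsstorage,mp=/mnt/media
```

The container then sees the share as an ordinary local directory, without needing its own NFS client.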


100 (DC01): This virtual machine runs Windows Server 2019 and serves as my primary domain controller and Active Directory. I have also implemented a DHCP server, a DNS server, forward and reverse lookup zones, Group Policy Objects, organisational units (OUs), user account creation, security groups, and network drive sharing. In addition, I have set up my own VLAN for my domain controller and client computers to reside in.
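To give a flavour of the Active Directory side, the building blocks above can also be scripted. A sketch using the ActiveDirectory PowerShell module — the domain, OU, user, and group names are made up for illustration:

```powershell
# create an OU, a user inside it, and a security group, then add the user
New-ADOrganizationalUnit -Name "Staff" -Path "DC=homelab,DC=local"
New-ADUser -Name "Jane Doe" -SamAccountName "jdoe" `
    -Path "OU=Staff,DC=homelab,DC=local" -Enabled $true `
    -AccountPassword (Read-Host "Password" -AsSecureString)
New-ADGroup -Name "File-Share-Users" -GroupScope Global `
    -Path "OU=Staff,DC=homelab,DC=local"
Add-ADGroupMember -Identity "File-Share-Users" -Members "jdoe"
```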

The group policies that I have demonstrated in this video are:

  • Hide all installed programs from the Control Panel

  • Map network drive for users

  • Prevent users from changing their wallpaper

Windows 10

101 (Client01) & 102 (Client02): Both of these virtual machines run Windows 10 and serve as clients to my domain controller. I use them to learn how group policies are applied, how to join a domain, and how my domain controller serves DHCP leases to these machines.


104 (DC02-WSUS): This virtual machine runs Windows Server 2019, serves as my secondary domain controller, and is in charge of Windows Server Update Services (WSUS), which allows me to manage and distribute updates through a management console.

Virtual NAS - OMV

106 (OpenMediaVault): This virtual machine runs Ubuntu Server and hosts my OpenMediaVault service, a NAS solution. I have passed through an external hard drive (formatted as EXT4), mounted by UUID. This hosts an SMB share for my family to store their files locally, and an NFS share for my Plex server and, in the future, Proxmox backups. I also have the option to set access permissions (this was not shown in the video as my credentials were already stored in Windows).
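Mounting by UUID means the drive always comes up at the same path regardless of which device node it enumerates as. OpenMediaVault writes this entry itself through its web UI; the sketch below (with an example UUID and mount path) just illustrates the mechanism:

```
# /etc/fstab: mount the passed-through external drive by its UUID;
# nofail lets the system finish booting even if the drive is absent
UUID=0a1b2c3d-1111-2222-3333-444455556666  /srv/media  ext4  defaults,nofail  0  2
```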

108 (DC03-FAILOVER): This virtual machine runs Windows Server 2019 and acts as a failover node for my primary domain controller. It replicates my forest and retains all settings through replication.

Microsoft Exchange Server

110 (EXCH01): This virtual machine runs Windows Server 2019 and acts as a Microsoft Exchange Server for my primary domain. I use this VM to learn how to deploy an Exchange server on-premises, connect it to an existing domain, and administer mailboxes and mail flow.


A document showing how I installed Microsoft Exchange Server using PowerShell can be viewed here.

In the video, I have demonstrated:

  • Creating a mailbox for an existing user in the domain

  • Creating a mailbox for a new user (replicating a new hire) in the domain

  • Creating a distribution group and adding members to it
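These demonstrations map to a handful of Exchange Management Shell cmdlets. A sketch with example names (the accounts and the UPN domain are made up):

```powershell
# mail-enable an existing AD user
Enable-Mailbox -Identity "jdoe"
# create a new AD user with a mailbox in one step (the new-hire case)
New-Mailbox -Name "John Smith" -UserPrincipalName "jsmith@homelab.local" `
    -Password (Read-Host "Password" -AsSecureString)
# create a distribution group and add a member to it
New-DistributionGroup -Name "All-Staff" -Type "Distribution"
Add-DistributionGroupMember -Identity "All-Staff" -Member "jdoe"
```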


One mail flow rule I created rejects a scam email and provides an explanation to the end user.

Another mail flow rule attaches "EXT" to the subject header of an external email to warn users that it was received from outside the organisation.


Final thoughts

What I can improve on moving forward

  • Planning my hardware requirements realistically: Looking back, about halfway through building my homelab I realised how tight I was on RAM. My initial goal was to experiment with Docker, containerisation, and Active Directory, but the project grew well beyond that, with additional VMs for crucial services such as failover and update services, which take more resources than I had planned. Knowing this, I would have picked a computer that could accept more RAM, as my current one maxes out at 16GB.

  • Implementing a redundancy (RAID) setup: A few months ago I had a power cut. It only lasted a few minutes, but it was enough to leave me stumped for hours: I powered on my homelab to be greeted with a boot error.

Luckily, I did not lose any data. It ended up being a mount point I had created long ago and was no longer using... it came back to bite me! This taught me a lesson about the potential for data corruption during unexpected events. I only have one drive holding all my virtual machines, and it usually runs for days on end, so redundancy is definitely on my to-do list.

Issues that I have encountered along the way

  • Group Policy Objects (GPO) not being applied to clients: This issue stumped me for WEEKS. I had a working domain controller, with clients joined to the domain successfully, but Group Policy Objects would not take effect. After browsing through Spiceworks, and even creating an account and making my own post describing the issue, I narrowed it down to the virtual network driver not routing traffic properly. I was using the paravirtualised network driver (the recommended driver for Windows-based guests), but when I switched to the Intel E1000 (old, but guaranteed to work), GPOs started working!

  • Plex videos froze: This was also a confusing one. I eventually tracked it down to an incompatible external hard drive controller. When accessing small numbers of files it would be fine, but during any action that required constant reading from disk, the disk would eventually halt. I swapped in another spare external hard drive controller I had, remounted the drive, and it has been working fine ever since.

  • Group Policy Objects not updating: This issue was an odd one. After applying a new group policy, it would not take effect immediately, until I stumbled across a command that speeds up the policy refresh process on the client's computer:


      gpupdate /force

      and rebooting the PC, group policy took effect!

  • Microsoft Exchange Server not sending emails (emails stuck in Drafts): This issue also confused me, but I quickly realised that it was due to DNS. Windows Server by default uses the loopback address ( as its DNS setting. I manually changed this to the server's own IP, with Google's public DNS ( as the secondary. After a reboot, emails send!
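The change itself is one cmdlet on the server — the interface alias and server IP below are examples; check yours first with Get-DnsClientServerAddress:

```powershell
# point the NIC's DNS at the server's own address, with a public resolver as backup
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" `
    -ServerAddresses ("", "")
```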

What I have learnt

  • Linux: I learnt a lot about how Linux works, how to navigate around the operating system, and how to perform basic operations.

  • Containerisation: I learnt how to deploy small Docker containers using Docker Compose to self-host a service, and how to deploy an LXC container to save resources compared to a VM.

  • Networking: I learnt how to expose a port publicly to access a service, how to create a robust VPN server using affordable hardware, and how to host my own DNS server locally to block ads, in addition to implementing a recursive DNS server to increase privacy. I have also learnt how to create a network and VLAN to route all my self-hosted services through.

  • Windows Active Directory/Domain Controller: I also learnt the basics of Windows Server 2019, Active Directory, failover servers, and domain controllers, and the services they contain.

  • Virtualisation: I have also learnt the basics of how virtualisation and hypervisors work, the importance of redundancy (including HA clusters), how to deploy multiple virtual machines, and how to ensure those virtual machines can communicate with other devices on my network when needed.

  • Microsoft Exchange Server: I learnt the basics of administering an on-premises Exchange server.

This homelab project has certainly been a journey for me. I started off with small ideas but ended up with a configuration much more complicated than I anticipated. I'm grateful for the opportunity to create a homelab like this, as it provides a safe environment for me to break things, learn, and experiment, and eventually to use these new skills in the workplace. I plan to keep exploring Linux and containerisation/virtualisation, as they are my areas of interest. I hope this small post provides some value to you, and maybe gives you some insight into a basic homelab setup or motivation to start your own.

- Ghazi
