Virtual Home Server

As one of those people who loves to be running the latest tech in both my home and professional lives, it’s critical that I build the right infrastructure to achieve that.

At home I recently obtained a 2014-spec Dell server with a fair bit of memory and storage, certainly enough for what was to become the hub of my home operations!

In the last 2 months I have been building the server up, utilising all the latest platforms I can get my hands on: VMware ESXi 6.5, Windows 10/Server 2016, Ubuntu 16.04 and so on.

I now have 8 VMs across 2 datastores, having upgraded all the firmware possible and played around with various settings to balance performance and noise (it’s in the spare room).

Here’s the outcome of that work:

This first image shows my ESXi 6.5 HTML5-based landing page (one of the easiest web admin tools I’ve seen). You’ll note the 128GB RAM, dual 2.9GHz CPUs and 8.5TB of storage – perfect for running media servers as well as test platforms for my crazy ideas!

Drilling down into the VMs I have built you’ll see a mixture of OSes and things I’m testing:

I was clever enough (somehow) to make my FTP server web-facing. It’s where I store all the freebie utility-style programs that I use across many systems, and it means I don’t have to carry a USB stick around all the time!

Plex is the big one, with over 3TB assigned to it for all the media we have at home. We can play it across all our devices, such as the smart TV, Amazon Fire Stick, Xbox One etc.

What I’ve not yet got to grips with is the VM networking side of things. Eventually I’d like to VLAN off some of the VMs to do some sandbox-style testing with various OSes, maybe get back into Linux and re-learn hardening techniques; I just need the time!

Linux Virus Scan – Daily Email Script

Yes, that’s right, Linux can get viruses too; in fact it can harbour Windows-based viruses if the system is used as a web and email server – something I’ve been becoming more and more familiar with. Our systems already use ClamAV, one of the more popular Linux virus scanners, but whilst it scans incoming email, it doesn’t really give me a nice visual output (no GUI). My solution? An automated daily scan with emailed results…

clamscan -r /var/www > /root/scanresults.txt

cat /root/scanresults.txt | mail -s "Scan Results" ""

cat /root/scanresults.txt | grep FOUND | mail -s "Viruses Found" ""

So what does this do?

Well, it scans the web directories recursively for any viruses listed in the virus DB (which is updated twice daily), and puts all the results into a text file. This text file is then read into a mail command and sent to the email address.

However, this isn’t much use on its own, as there are thousands of files and directories; what I really want to know is whether viruses were found. The solution is to grep for the value “FOUND”, which clamscan appends to a file’s result line when a virus is detected in it. This is then read into the same mail command as before, leaving me a nice list of only the infected files!
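To illustrate the filtering step (the file names and signature name below are just examples), clamscan prints one result line per file, and grep keeps only the lines marked FOUND:

```shell
# Simulated clamscan output: one "file: result" line per file scanned
printf '%s\n' \
  "/var/www/index.html: OK" \
  "/var/www/uploads/evil.exe: Win.Trojan.Agent-12345 FOUND" \
  "/var/www/style.css: OK" \
| grep FOUND
# Only the infected file's line survives the grep
```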

I love a nice quick and easy script – I used crontab to run this at 00:05 and 12:05 every day!
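For reference, the schedule looks like this in crontab, assuming the scan-and-mail commands above are saved in a script (the path here is hypothetical):

```shell
# Crontab entry (added via "crontab -e") to run at 00:05 and 12:05 every day,
# assuming the commands above are saved as /root/ (hypothetical path):
#
#   5 0,12 * * *  /root/
#
# Fields: minute (5), hours (0 and 12), then day-of-month, month and
# day-of-week (all "*" = every), followed by the command to run.
```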

How to become a web host almost overnight…

I’m thinking the title of this post may one day be a best-selling e-book written by myself. The word “overnight” is a slight exaggeration; since the second week of August I have been working tirelessly (yes, even through the night and over a weekend!) to migrate a web design/hosting company’s complete hosting solution from the over-hyped cloud to local premises.

Now, this all sounds very exciting, and it is (for me); however, the complexity of moving from a hosted server environment to a more localised, manageable system is enormous. Imagine turning up to a new job, and on day one the boss says: here’s your predecessor’s system, there’s no documentation on how to do your job, just log in and figure out what you’re going to be doing, without much help from those around you as they’re all too busy. Well, that’s a slightly over-hyped version of the situation here.

Inheriting a system without documentation is one thing; gradually, by pooling all the resources you can find, you can map it out into your own terms and understanding, and gain a good basic overview. Then, with more time (if available), you learn all the in-depth aspects of the system and can sometimes, if lucky, rewrite certain elements in your own language and document that. This is, in essence, reverse engineering, and a good skill to have as a system admin.

The second problem, once system knowledge is gained, is how to go about migrating from a hosted environment to one you host yourself. This sounds easy; in short, it’s not! With over 60 customers’ websites and emails hosted, the move is inevitably going to cause some disruption and downtime. That can be managed with good customer service, but what happens if certain things don’t work properly on the new system, or not all services come back after a reboot, or you don’t know the password for the user that runs a specific task? Yes, I have experienced all of these and more. In fact, this whole challenge has stretched my skills and knowledge to the limit, and I’m sure once the new systems are fully live and more customers come on board, I will look back and be grateful for having had this experience.

Migration in this case hasn’t been awful because, even though the servers are dedicated server environments, the actual hosting servers are virtualised within them, and with a little Linux/virtualisation magic (and a weekend window of 2 hours), images were cloned and copied to the new servers. Cloning did mean taking all systems down temporarily, but I was impressed with the speed of cloning 100GB images (under 45 minutes), and with 100Mbps fibre at my end the download of the clones wasn’t too bad either.
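As a rough sanity check on those numbers, even an ideal 100Mbps link needs a bit over two hours to move a 100GB image, so the downloads were always going to be the slow part:

```shell
# Back-of-envelope: time to move a 100GB image over a 100Mbps link (ideal throughput)
bits=$((100 * 1000 * 1000 * 1000 * 8))   # 100 GB in bits (decimal units)
rate=$((100 * 1000 * 1000))              # 100 Mbps in bits per second
seconds=$((bits / rate))
printf '%dh %dm\n' $((seconds / 3600)) $(((seconds % 3600) / 60))   # prints: 2h 13m
```

Real-world transfers add protocol overhead on top, so a couple of hours per clone is the optimistic case.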

So, now I have the clones, what do I do with them? Well, assuming my local servers are configured correctly to host the virtual images, I can simply turn them on, change the IP addresses, do my usual firewall magic (god bless WatchGuard!), and away we go… Could it really be that simple? Well… yes, surprisingly. In my head I thought “turn it on, change the IP addresses, reboot, test”, and quite simply that was the case. So now I have a virtualised hosting environment, running on much more powerful equipment, over which I have full control. Sorted.

Well, almost… What about the websites themselves? I may have the data, and even the emails we host, but what about the pointers to those sites? Where do the nameservers point, and who has control? Bugger! The first panic/minor snag was realising we don’t manage the domains for all the customers; after investigation, we only manage around half (usually where they have both a and a .com, and we set up one but they did the other!). OK, not to worry: should we just do it in one hit and modify all the nameservers? We could, but I don’t like the risk, and I do like control, so we decided to manually switch one domain at a time, allowing us to test each one individually and ensure customer satisfaction stays at the highest levels. This is a slow process, especially where back-end DBs and emails are still being modified all the time, as I have a new system not being updated whilst the old one is (and vice versa), so a little manual intervention is required to copy data down from the “old” to the “new”; but again, this gives me control, so it’s all good.

Now, it’s just a case of going through the list of domains (all 132) one by one, modifying nameservers, and informing customers to do the same where they have the control. I like the personal touch that’s possible with this many customers, but I dread what we would have to plan if we did this again with 10x the numbers (which is always possible)…


Thanks for reading my longest post ever. For more information on this ongoing project and the tools and skills required, please feel free to contact me at:

Nagios Notification Script

Originally posted at (

I decided to write my own script for Nagios to send emails to external addresses when MS Exchange goes down in our organisation (which has been happening quite often lately!).

My script is called by a command created in the Nagios commands.cfg file, as below. The command_line passes the standard Nagios macros, in the order the script expects:
#Exchange notifications
define command{
command_name notify_ex_mail
command_line /etc/nagios3/ "$NOTIFICATIONTYPE$" "$SERVICEDESC$" "$HOSTALIAS$" "$HOSTADDRESS$" "$SERVICESTATE$" "$LONGDATETIME$" "$SERVICEOUTPUT$" "$CONTACTEMAIL$"
}

The “$PARAMETER$” macros are expanded by Nagios and therefore would not make sense outside of a Nagios config file.

My script is here:
#!/bin/bash
## Send mail notification when Nagios detects a problem - manual override of Nagios defaults ##
## Script By Jonathan Ward 26/09/2011 ##
## Parameter list as defined in /etc/nagios3/commands.cfg
## $1 = Notification Type e.g. "PROBLEM"
## $2 = Service Description e.g. "Explorer.exe" OR "SMTP Status"
## $3 = Host Alias e.g. "MyExchangeServer"
## $4 = Host Address e.g. ""
## $5 = Service State e.g. "CRITICAL"
## $6 = Long Date and Time e.g. "Mon Sept 26 16:07:21 BST 2011"
## $7 = Service Output
## $8 = Contact Email
## Set message subject - spaces won't work?
msgsubject='Exchange Issue'
## Set email addresses - separate with spaces, not commas; here the contact email passed in by Nagios ($8)
msgto="$8"
## Set message body
msgbody="Nagios is reporting $1 on $3 \n \nService $2 State is: $5 \n \nTime Reported: $6"
## Create body in file /etc/nagios3/mailbody
#echo -e "$msgbody" > /etc/nagios3/mailbody
## Command to send email with subject and body
#mail -s "$msgsubject" "$msgto" < /etc/nagios3/mailbody # Using external file as body
printf '%b\n' "$msgbody" | mail -s "$msgsubject" "$msgto" # printf '%b' expands the \n escapes portably, avoiding a stray "-e" appearing in emails
## Delete body file for next run
#rm -f /etc/nagios3/mailbody
## Debugging lines go here…
#echo -e "$1 \n$2 \n$3 \n$4 \n$5 \n$6 \n$7 \n$8" > /root/scriptdebug # Copies parameter values on separate lines into /root/scriptdebug
## Usage:
## /etc/nagios3/ "notification type" "service description" "host alias" "host address" "service state" "long date time" "service output" "contact email"
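To sanity-check the message formatting without a mail server, the body construction can be exercised on its own with dummy parameters (the sample values below are illustrative, following the parameter order above; $4, $7 and $8 are not used by the body):

```shell
#!/bin/sh
# Feed the script's positional parameters with sample values
set -- "PROBLEM" "SMTP Status" "MyExchangeServer" "" "CRITICAL" "Mon Sept 26 16:07:21 BST 2011" "output" ""
msgbody="Nagios is reporting $1 on $3 \n \nService $2 State is: $5 \n \nTime Reported: $6"
printf '%b\n' "$msgbody"   # %b expands the \n escapes into real newlines
```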