Q1 FY20 Part 2

Welcome to Part 2 of the end-of-year summary on the career side of things. 2019 ended on a high at work, and the previous post (here) started looking at the new job role and my initial project of migrating systems between domains. That project was effectively a merge of all my previous skills and a way to develop the new skillset required to take the role forwards into 2020 and beyond.

As mentioned in Part 1, the infrastructure project effectively became a DevOps implementation project, and I’ll try to delve into the bits I can discuss in this post. First, there’s the inheritance: over 300 environments, many with specific mods for specific customers, and the remainder running what we call “Extended Solutions” – productized mods, effectively. Then there was all the code itself. Fortunately, the existing dev team has a fantastic grasp of the deep dark secrets of Git, and I have enough of a basic understanding to pick up where others left off. But then came something entirely new to me: the builds of the code. Debug or Release, MSBuild versions, semantic versioning, Git Flow… you get the idea. All very complicated to me at the time, but now it’s in my veins!

Previously the team used an old, unsupported, broken version of Jenkins to build their code, with a definition for each Git branch: some using bat files, some hardcoded in the Jenkins config screen. Basically a mess, with different rules for different people. Well, I like to standardize, so we scrapped the old and brought in the new. Cue the amazing concept that is Continuous Integration. Having some basic experience with this in Azure DevOps (remember this post?), the concept was not new to me; implementing it with Jenkins, however, was a new experience.

I managed to inherit a new blank Jenkins server, and the first thing I did was put it behind a reverse proxy via IIS, secure it with an internal SSL certificate, connect it up to the corporate Active Directory, and restrict access to our team only. Then we got a service account from Corp IT and locked the server down so that only myself and Domain Admins can get into the back end, and now we have a secured build server. Why so secure when it’s all internal? Well, that’s because the CTO office allowed us to have the corporate digital certificate for code signing so long as it only existed in one locked-down place. We can call it via the Jenkins application side, but not extract, manipulate, or otherwise interact with it.

With this new updated (and updateable) Jenkins server, plus a couple of useful plugins (Blue Ocean is a must), we have a fantastic platform to manage and analyze our build process. The main feature we are utilizing is Multibranch Pipelines via a Jenkinsfile. The Jenkinsfile is written in Groovy and is basically a set of instructions that define a build: for example, build this solution, sign it using that certificate, and publish the artifacts so we can download them afterward. The huge advantage for us is that because we build a solution for multiple ERP versions, we can have up to 8 exes output at the end, and we now have one screen to grab them all from, regardless of which Git branch we built. On the subject of Git branches, thanks to the multibranch pipeline functionality, once our Jenkinsfile is merged into master it filters down to all subsequent branches, and Jenkins will detect any new branch pushed back to the origin that includes that Jenkinsfile.

I’ve previously written about VS Code and all the wondrous things it does, but we also discovered a Jenkinsfile checker in the form of https://marketplace.visualstudio.com/items?itemName=janjoerke.jenkins-pipeline-linter-connector. This tool allows us to validate a Jenkinsfile’s syntax against our own Jenkins server, ensuring an accurate build definition every time we adjust one. It’s time-savers like these that have boosted the team’s productivity significantly; I recently tweeted about this improvement, as I took hold of an existing codebase and fully integrated it into our new philosophy within an hour or so!
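To make that concrete, here is a minimal sketch of what a Jenkinsfile along these lines could look like. The solution name, output path, and signing details are illustrative stand-ins, not our actual configuration:

```groovy
// Minimal declarative Jenkinsfile sketch; all names and paths are illustrative.
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                // Build the solution in Release mode with MSBuild.
                bat 'msbuild OurSolution.sln /p:Configuration=Release'
            }
        }
        stage('Sign') {
            steps {
                // Sign the outputs on the build server itself, so the
                // corporate certificate never leaves the locked-down box.
                bat 'signtool sign /a /fd SHA256 /tr http://timestamp.digicert.com /td SHA256 output\\*.exe'
            }
        }
    }

    post {
        success {
            // Publish the exes as build artifacts: one screen to grab
            // them from, whichever branch was built.
            archiveArtifacts artifacts: 'output/**/*.exe', fingerprint: true
        }
    }
}
```

Because the multibranch pipeline scans the repository, any branch carrying this file gets picked up and built under the same rules automatically, which is exactly what replaced the per-branch bat files of old.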

A large part of the battle has been documenting the configuration and the overall process, as well as educating colleagues (primarily developers) about how we are using these concepts and tools. I’ve found the majority of developers know the concepts and have their own experiences with Git and CI/CD, but until you document what the process should be in your exact circumstances, they don’t necessarily see the advantages, understand just how powerful these changes are, or, more importantly, see how they improve consistency and productivity across the team. The improvements are already showing for us, and I fully expect that to continue!


Q1 FY20 Part 1

Back on October 1st 2019 I decided to take a leap of faith and join the dark side, which has resulted in me becoming the UK’s first dedicated QA for our Custom development teams. Effectively, we develop, using our own SDK and lots of other clever tools, the things that customers would love to have but which do not come out of the box. Having visited many customer sites in the last couple of years, I have nothing but appreciation for the quality and depth of work this team produces, and it’s absolutely my pleasure to be a part of it going forwards.

My first task, set on Day 1, was to own the migration of systems from one domain to the other. Those who have followed any of my previous posts in the last 4 years will be aware I went from a small local company to a global ERP vendor overnight (June 1st 2016) by way of an acquisition. Well, imagine moving that small dev team’s environments into a very well protected and governed American corporate ecosystem. It was effectively sat on for 3 years, and corporate policies dictated we migrate and shut down the old!

Deciding where to start was easy… spend a week or so testing out a couple of theories (having done domain migrations previously), and work with internal IT teams to put in the relevant requests and procedures to ensure those theories were robust, scalable, and secure. Three weeks in, hours had been wasted scripting out a copy-and-paste scenario: basically a load of PowerShell scripts to do a Find/Replace-style blitz across 1000s of files, 10 different ERP versions, and 200+ development environments (with databases). Only the one slight snag, even after reworking permissions and roping IT into a 3TB file copy across two unconnected domains… our internally developed environment management tooling, with all its bells and whistles, did not support the new domain and had hardcoded ties to the old domain’s file server. Oops.
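For a flavor of what that scripting looked like, here is a minimal sketch of the Find/Replace approach, with made-up domain names, paths, and file extensions rather than our actual values:

```powershell
# Hypothetical Find/Replace blitz: swap old-domain references for new ones
# across every config file under an environment root. Names are illustrative.
$oldDomain = 'olddomain.local'
$newDomain = 'newdomain.corp'
$root      = 'D:\Environments'

Get-ChildItem -Path $root -Recurse -Include *.config, *.xml, *.sysconfig |
    ForEach-Object {
        $content = Get-Content -Path $_.FullName -Raw
        if ($content -match [regex]::Escape($oldDomain)) {
            # Rewrite the file only when it actually references the old domain.
            $content -replace [regex]::Escape($oldDomain), $newDomain |
                Set-Content -Path $_.FullName
        }
    }
```

Multiply that across 10 ERP versions and 200+ environments and you can see both why the approach looked attractive and why three weeks disappeared so quickly.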

Rethink time… Plan B – the best of the lot. Copying databases is one of those things I literally wrote the manual on for Epicor ERP, so that’s easy. Building Windows servers has been the last 10 years of my life, so again, sorted. That leaves my understanding of the tooling that sits in the middle. Fortunately, my new desk backs on to the lovely chap who wrote that tool (even though he now runs our R&D division), so after a few conversations and about 8 lines of code, he rebuilt it for me to work on the new environments, allowing me to fully document it as it got deployed. Hey presto: a working blank set of servers ready for migrated data was born within a week, including the ability to build any version of ERP 10 using blank, demo, or customer data (depending on whether it’s development or QA work), and the ability to use all the latest features and, more importantly, the latest development tools, by way of Chocolatey!
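The Chocolatey part is worth a quick illustration: once a server exists, pulling down a current development toolset is a handful of commands. This is a sketch using Chocolatey’s documented bootstrap and an example package list, not our actual one:

```powershell
# Install Chocolatey itself (the documented bootstrap), then pull down
# a standard developer toolset. The package list is illustrative.
Set-ExecutionPolicy Bypass -Scope Process -Force
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))

# One line per tool keeps the script easy to diff when the toolset changes.
choco install git -y
choco install vscode -y
choco install notepadplusplus -y
```

Rebuilding a dev box stops being a day of hunting down installers and becomes a script you run while making a cup of tea.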

The next few weeks consisted of identifying what needed to be moved and what we could spin up later on demand. The resulting list was around 120 required environments, mostly because of productised “Extended Solutions”, which need to be built for each version of ERP 10 we support, but also ongoing customer projects, version uplifts, test environments for developers to test their own theories and boost their skills, etc. This was a very slow and involved process – per environment/DB it was not too bad, but in Part 2 (when I write it) I’ll go through how my domain migration project became an environment and process improvement project, featuring Git, Jenkins and CI/CD.

The good news is my domain migration, which we scheduled to be fully complete (i.e. old domain shut down) by 24th December 2019, was in fact completed on 6th December 2019. So, despite the slightly wasted 3 weeks of testing, scripting, and familiarisation, with all parties on board we (sadly) shut down the Dot Net IT domain at 17:30 that evening!