5 Useful SendGrid Tips

I recently started working on a concept for a message list subscription service, to be integrated directly into new product features so that people can add themselves to a list and receive updates about specific features. This led me down the path of using Azure Storage and SendGrid, building two raw prototype apps (in WinForms at this early stage) to handle both the subscription and the sending of data to the subscribers. Writing this on .NET Framework 4.8 with C# in a fairly native way has given me a great way to investigate and utilise the SendGrid platform to send emails to multiple subscribers.

I wanted to share my findings rather than my code at this stage, so here are my top 5 SendGrid tips:

  1. Do review Microsoft’s guide on how to get started with SendGrid via Azure. This simple article gives you a great base on how to set up the back end and start writing C# around it: https://docs.microsoft.com/en-us/azure/sendgrid-dotnet-how-to-send-email
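A minimal sketch of the kind of setup that guide walks you through – the environment variable name, addresses and content below are example values, not my actual configuration:

using System;
using System.Threading.Tasks;
using SendGrid;
using SendGrid.Helpers.Mail;

class Program
{
    static async Task Main()
    {
        // Read the API key from an environment variable rather than hardcoding it.
        var apiKey = Environment.GetEnvironmentVariable("SENDGRID_API_KEY");
        var client = new SendGridClient(apiKey);

        var msg = new SendGridMessage
        {
            From = new EmailAddress("noreply@example.com", "Product Updates"),
            Subject = "Welcome to the feature update list",
            PlainTextContent = "Thanks for subscribing.",
            HtmlContent = "<p>Thanks for subscribing.</p>"
        };
        msg.AddTo(new EmailAddress("subscriber@example.com", "Subscriber"));

        // Send and check the HTTP status code the SendGrid API returns.
        var response = await client.SendEmailAsync(msg);
        Console.WriteLine(response.StatusCode);
    }
}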
  2. Use Unsubscribe Groups – these help you stay compliant with GDPR in Europe by appending Unsubscribe and Manage Preferences links to the bottom of the emails that are sent, e.g.:
[Image: Unsubscribe preferences from SendGrid]
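Attaching a message to an unsubscribe group is a one-liner with the C# helper library – a rough sketch, where the group ID is a placeholder for one created in the SendGrid dashboard:

// Requires System.Collections.Generic for List<int>.
// Associate the message with an unsubscribe group (12345 is a placeholder ID);
// the second argument controls which groups are shown on the preferences page.
msg.SetAsm(12345, new List<int> { 12345 });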
  3. Write both plain text and HTML content for your email. I ended up creating an HTML builder so I could convert email addresses to <a href="mailto:EmailAddress"> and any http(s):// links to <a href="http://link">, which was a great experiment with regular expressions. Note, I found that not all email clients auto-convert, e.g. Outlook will convert an email address but not a URL. I was creating my body text in a Rich Text Box, so it was just one big string!
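As a rough illustration of that idea (the patterns below are simplified examples, not the exact expressions I used), a helper along these lines turns the plain-text body into HTML:

using System;
using System.Text.RegularExpressions;

static string ToHtmlBody(string plainText)
{
    // Wrap anything that looks like an email address in a mailto: link.
    var html = Regex.Replace(plainText,
        @"[\w.+-]+@[\w-]+(\.[\w-]+)+",
        m => $"<a href=\"mailto:{m.Value}\">{m.Value}</a>");

    // Wrap http(s) URLs in anchor tags.
    html = Regex.Replace(html,
        @"https?://[^\s<]+",
        m => $"<a href=\"{m.Value}\">{m.Value}</a>");

    // Preserve the line breaks from the one big Rich Text Box string.
    return html.Replace(Environment.NewLine, "<br/>");
}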
  4. Look into Personalisations (https://sendgrid.com/docs/for-developers/sending-email/personalizations/) – these are crucial if you want to customise your output (the emails themselves). I created variables to use in subject and body lines so I could pass in {customer} and {product}; SendGrid has a substitution capability which comes as part of personalisation. I’ll share a snippet of code here, as I found a solution on StackOverflow (https://stackoverflow.com/a/53292550) but it needed a slight tweak, changing Tos.Add to msg.AddTo, because the To: email address needs to be part of the personalisation – I wasn’t sure if this was down to API changes or not! Another tip: if you want the subject to be the same for everyone, without substitutions, you can use the SetGlobalSubject method.
// Each subscriber gets their own personalisation, so the To address and the
// substitution values are tied together by the same personalisation index.
var personalizationIndex = 0;
foreach (var subscriber in subscriberEntities)
{
    // The To address has to be added to the personalisation itself (msg.AddTo rather than Tos.Add).
    msg.AddTo(new EmailAddress(subscriber.RowKey, subscriber.Name), personalizationIndex);
    msg.AddSubstitution("{product}", product.Description, personalizationIndex);
    msg.AddSubstitution("{customer}", subscriber.Name, personalizationIndex);
    personalizationIndex++;
}
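And, as mentioned, if the subject should be identical for every recipient, a single call along these lines (the subject text is just an example) replaces per-recipient subject substitutions:

// One subject for all recipients – no substitution needed.
msg.SetGlobalSubject("New product update available");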

  5. Create a test email method that uses the same logic as your core one but only sends to one email address. You don’t want to be sending bulk emails out without checking them first!
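A minimal sketch of that idea, assuming a hypothetical BuildMessage() helper that holds the same message-building logic as the real bulk send:

// Hypothetical test method: build the message exactly as the bulk send does,
// but add only a single test recipient before sending anything.
static async Task SendTestEmailAsync(SendGridClient client, string testAddress)
{
    var msg = BuildMessage(); // hypothetical shared helper, same logic as the bulk send
    msg.AddTo(new EmailAddress(testAddress, "Test Recipient"));

    var response = await client.SendEmailAsync(msg);
    Console.WriteLine($"Test send returned {response.StatusCode}");
}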

I hope this new style of post has been an interesting read; I hope to do more very soon!

Q1 FY20 Part 2

Welcome to Part 2 of the end of year summary on the career side of things. 2019 ended on a high at work, and the previous post (here) started looking at the new job role and my initial project of migrating systems between domains. This project was effectively a merge of all my previous skills and a way to develop my new skillset required to take the role forwards into 2020 and beyond.

It was mentioned in part one that the infrastructure project effectively became a DevOps implementation project, and I’ll try to delve into the bits I can discuss in this post. Firstly there’s the inheritance: over 300 environments, many with specific mods for specific customers, and the rest with what we call “Extended Solutions” – productized mods, effectively. Then there was all the code itself. Fortunately, the existing dev team has a fantastic grasp of the deep dark secrets of Git, and I have enough of a basic understanding to pick up where others left off, but then came something new to me: the builds of the code. Debug or Release, MSBuild versions, semantic versioning, Git Flow… you get the idea. All very complicated to me at the time, but now it’s in my veins!

Previously the team used an old, unsupported, broken version of Jenkins to build their code, with definitions for each Git branch – some in bat files, some hardcoded in the Jenkins config screens – basically a mess, with different rules for different people. Well, I like to standardize, so we scrapped the old and brought in the new. Cue the amazing concept that is Continuous Integration. Having some basic experience with this in Azure DevOps (remember this post?), the concept was not new; implementing it with Jenkins, however, was a new experience.

I inherited a blank new Jenkins server, and the first thing I did was reverse proxy it via IIS and secure it with an internal SSL certificate, as well as connect it to the corporate Active Directory and restrict access to our team only. Then we got a service account from Corp IT and locked the server down, so only myself and Domain Admins can get into the back end, and now we have a secured build server. Why so secure when it’s all internal? Because the CTO office allowed us to have the corporate digital certificate for code signing so long as it only existed in one locked-down place. We can call it via the Jenkins application side but not extract, manipulate or otherwise interact with it.

With this new updated (and updateable) Jenkins server, plus a couple of useful plugins (Blue Ocean is a must), we have a fantastic platform to manage and analyze our build process. The main feature we are utilizing is Multibranch Pipelines via a Jenkinsfile. The Jenkinsfile is written in Groovy and is basically a set of instructions that define a build; for example, we can say build this solution, sign it using that certificate and publish the artifacts so we can download them afterward. The huge advantage for us is that because we build a solution for multiple ERP versions, we can have up to 8 exes output at the end, and we now have one screen to grab them from, regardless of which Git branch we built from. On the subject of Git branches, thanks to the multibranch pipeline functionality, once our Jenkinsfile is pulled into master it filters down to all subsequent branches, and with this feature enabled Jenkins will detect any new branches pushed back to the origin that include that Jenkinsfile.

I’ve previously written about VS Code and all the wondrous things it does, but we also discovered a Jenkinsfile checker in the form of https://marketplace.visualstudio.com/items?itemName=janjoerke.jenkins-pipeline-linter-connector. This extension lets us check syntax against our own Jenkins server and therefore ensures an accurate rule definition every time we adjust a build. It’s time-savers like these that have boosted the team’s productivity significantly; I recently tweeted about this improvement, as I took hold of an existing codebase and fully integrated it into our new philosophy within an hour or so!
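To make the Jenkinsfile idea a little more concrete, here is a deliberately simplified, hypothetical example of the kind of declarative pipeline described above – the stage names, solution name, MSBuild invocation and signing arguments are placeholders rather than our real definition:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Placeholder build step – the real pipeline builds for multiple ERP versions.
                bat 'msbuild ExtendedSolution.sln /p:Configuration=Release'
            }
        }
        stage('Sign') {
            steps {
                // The code-signing certificate only ever exists on the locked-down Jenkins server.
                bat 'signtool sign /a /fd SHA256 Output\\*.exe'
            }
        }
        stage('Publish') {
            steps {
                // Publish the built exes so they can all be grabbed from one screen.
                archiveArtifacts artifacts: 'Output/*.exe', fingerprint: true
            }
        }
    }
}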

A large part of the battle has been documenting the configuration and the overall process, as well as educating colleagues (primarily developers) about how we are using these concepts and tools. I’ve found that the majority of developers know the concepts and have their own experiences with Git and CI/CD, but until you document exactly what the process should be in their circumstances, they don’t necessarily see the advantages or understand just how powerful these changes are – and, more importantly, how they improve consistency and productivity across the team. The improvements are already showing for us, and I fully expect that to continue!


Q1 FY20 Part 1

Back on October 1st 2019 I decided to take a leap of faith and join the dark side, which has resulted in me becoming the UK’s first dedicated QA for our Custom development teams. Effectively we develop, using our own SDK and lots of other clever tools, the things that customers would love to have but which do not come out of the box. Having visited many customer sites in the last couple of years, I have nothing but appreciation for the quality and depth of work this team produces, and it’s absolutely my pleasure to be a part of it going forwards.

My first task, set on Day 1, was to own the migration of systems from one domain to the other. Those who have followed any of my previous posts over the last four years will be aware that I went from a small local company to a global ERP vendor overnight (June 1st 2016) by way of an acquisition. Well, imagine moving that small dev team’s environments into a very well protected and governed American corporate ecosystem; it was effectively sat on for three years, and corporate policies dictated that we migrate and shut down the old!

Deciding where to start was easy… spend a week or so testing out a couple of theories (having done domain migrations previously), and work with internal IT teams to put in the relevant requests and procedures to ensure those theories were robust, scalable and secure. Three weeks in, hours had been wasted scripting out a copy-and-paste scenario: basically a load of PowerShell scripts doing a Find/Replace-style blitz across thousands of files, 10 different ERP versions and 200+ development environments (with databases). There was just one slight snag, even after reworking permissions and roping IT into a 3TB file copy across two unconnected domains… our internally developed environment management tooling, with all its bells and whistles, did not support the new domain and had hardcoded ties to the old domain’s file server. Oops.

Rethink time… Plan B – the best of the lot. Copying databases is one of those things I literally wrote the manual on for Epicor ERP, so that’s easy. Building Windows servers has been the last 10 years of my life, so again, sorted. That leaves my understanding of the tooling that sits in the middle; fortunately, my new desk backs on to the lovely chap who wrote that tool, even though he now runs our R&D division. With a few conversations and about 8 lines of code he rebuilt it to work on the new environments, allowing me to fully document it as it got deployed, and hey presto, a working blank set of servers ready for migrated data was born within a week – including the ability to build any version of ERP 10 using blank, demo or customer data (depending on whether it’s development or QA work), and the ability to use all the latest features and, more importantly, the latest development tools, by way of Chocolatey!

The next few weeks consisted of identifying what needed to be moved and what we could spin up later on demand. The resulting list was around 120 required environments, mostly because of productised “Extended Solutions”, which need to be built for each version of ERP 10 we support, but also ongoing customer projects, version uplifts, test environments for developers to test their own theories and boost their skills, etc. This was a very slow and involved process – per environment/DB it was not too bad – but in Part 2 (when I write it) I’ll go through how my domain migration project became an environment and process improvement project, featuring Git, Jenkins, CI/CD and more.

The good news is my domain migration, which was scheduled to be fully complete (i.e. the old domain shut down) by 24th December 2019, was in fact completed on 6th December 2019. So despite the slightly wasted three weeks of testing, scripting and familiarisation, with all parties on board we (sadly) shut down the Dot Net IT domain at 17:30 that evening!

This and That

Over the last few months I’ve tried to expand my horizons a little. Since 2009 I have worked in a few different technical roles, from helping to run data centres and set up environments for ISV engagements at IBM, to running all systems for a rapidly growing Oracle partner whilst managing 100 websites, including e-commerce sites, on the side. That led into a quick stint doing tech support in the automotive sector before moving into customer-facing roles in Jan 2016. Since then I’ve always been running a few different threads; loosely, these have been:

  • Installs/Config for ERP systems including initial system design
  • Technical training of customers in those ERP systems
  • Technical management of escalated issues (across the world)
  • Cross-team liaison for high-profile or highly escalated customers
  • Coordination of international team of installations consultants
  • Development of internal tooling for installs/technical consulting
  • Management of environments for wider team

From my recent posts it’s obvious which areas on that list have received the most focus over the last few months – notably the last two, which is where all the DevOps/code posts are centred. The reason so much focus has been on this (a lot of it outside work hours, I’ll add) is because it’s something I enjoy, something I’ve been on the edge of before, and an area of technology that I personally believe we should all be at least aware of and able to understand the basic principles of.

DevOps was a term coined many years before it became mainstream. Mike Loukides wrote a 20-page book called “What is DevOps” back in June 2012, published by the world-renowned O’Reilly Media (http://shop.oreilly.com/product/0636920026822.do). That’s some time before I came across the term, although it seems I was already aware of some of the practices that now come under that umbrella. Back then I was managing e-commerce sites, writing PHP websites against MySQL databases and moving a very static, cumbersome “tin-factory” infrastructure over to a more dynamic, sustainable, growth-capable platform. With a little more time and knowledge back then, I would potentially have moved in a different direction. I am now starting to close that circle a little from the other side.

For me, career development is crucial. I am more than happy to stay with one company, or in one role, but I will always push to make more of myself, learn new things, get involved with everything possible and break down any and all barriers. I don’t do this just to benefit myself; I see it as an opportunity to be a benefit to those around me, both customers and colleagues.

Outside of DevOps activities, over recent months I’ve also been working on my presentation skills, with opportunities to present to colleagues and customers about various technical topics, including system administration, upcoming product changes, best practices, etc. This is in part due to being given more free rein in my current role while we work out what my future roles may or may not include – if there’s any change at all! In the background, the day-to-day role keeps me busy: planning installs, speaking to new customers about how to deploy, speaking to existing customers about upgrades or enhancements to their systems, all the fun stuff that keeps money in the bank and roofs over heads!

The next few months may get a little busy, well hopefully they will, and all the good stuff will be posted when the chances arise.

#Code

First things first, #notadeveloper. I cannot stress this enough: I am not trained to write code, nor am I employed to do so. However, I do enjoy writing code, and I find a lot of satisfaction in hitting the run button and watching something I wrote come alive. Previously I have posted many tweets and blog entries about my coding adventures over the years. My crowning achievement to date is probably the PHP/MySQL-based “Asset Management” system – a glorified inventory list with the ability to assign items to a person and record repairs or reinstalls against them. It automated a part of my job I disliked, and quite frankly that is exactly what I love about code. Almost all of the scripts I have written over the years have had the primary purpose of automating repetitive tasks any sysadmin can do with their eyes closed; mostly this has been silent install scripts and updaters.

Fast forward a little from my sysadmin days to the brave new world (for me) of ERP. My primary day job is planning, coordinating and performing installations of ERP software into all sorts of manufacturing and distribution companies. Some are small, many are large, so the nature of the deployments can vary slightly. That’s generally the bit I’m good at: sizing and planning the system to meet the size and expectations of the end users. What we found over the last two years is that whilst deployments vary slightly, there is a bulk of work that is virtually the same every time round, certainly in process if not inputs. However, we also found that across the team, time, accuracy and experience could vary, significantly in some cases. Therefore a colleague of mine, with vastly more years of experience in the product and process, went to the effort of writing an automation tool: a set of PowerShell scripts and XML files used to automate the bulk of the installation process. Roll on a few months, and accuracy and time were already improving, which in turn was improving everyone’s experience. Gone were the days of random (user) errors, and here are the days of productivity and valid errors with much, much more context!


So let’s get techy and roll on a little further in time: following a few changes, ownership of the tool is now with me. With a potentially different future ahead, it may only be a short-term thing (it may also be long term!), so with this in mind I sought the help of people who know what they are doing – exceptionally smart developers in this case. After a couple of remote sessions, the following has occurred:

Task 1 – Get the code secured. We can’t have something this crucial to our process hiding on a random VM with no backups.

Solution – Git-based code repository, in this case Visual Studio Team Services (VSTS)


Task 2 – Get the additional features into the code, but fully tested before deploying.

Solution – Branch off. Currently running with two branches: one for immediate fixes/quick additions, and one for the next revision, which will do far more than just installing (shhhh, it’s Top Secret)


Task 3 – Get the code tidied up, to some form of best practices etc.

Solution – VSTS Build running PowerShell scripts with Pester and PowerShell Script Analyzer to validate all PowerShell scripts against a set of generally accepted best practice rules.
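As a rough sketch of that validation step (the paths and severity levels here are just examples), the build essentially runs commands along these lines in PowerShell:

# Analyse every script in the repository against the PSScriptAnalyzer best-practice rules.
Invoke-ScriptAnalyzer -Path .\Scripts -Recurse -Severity Warning, Error

# Run the Pester tests that accompany the scripts.
Invoke-Pester .\Tests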


Task 4 – Packaging. No one wants to manually build a zip file, upload it to a SharePoint site and email out a notification for every small fix that goes in.

Solution – NuGet and Chocolatey via a VSTS Package Feed.


Since this became my problem, three versions of the tooling have been released. Packaging only got tested this week, so it isn’t the primary deployment method yet, but now that we have it as a capability there will be many more versions – and that just won’t matter, as users will always have whatever is the latest in the master branch!


OK, so that all explains my random tweets from evenings and weekends over the last month or so. Fortunately I’ve had some incredible guidance from some very skilled and friendly development colleagues. Without those guys, I wouldn’t be anywhere with all this, other than having a whole load of files and folders on one machine with no backups!


I’d also like to give back to the community a little, so I plan to upload some scripts I write for more generic tasks to a public-facing Git repository at https://github.com/jaward916. Further to that, I have added below a list of all the bookmarks I’ve been building up, especially the ones around Tasks 3 and 4, which have been the key functionality I’ve explored and implemented in the last week.


I stress once again: I am not a developer, so please do not laugh at my code, or my very basic explanations of the tools and processes. I am learning for fun, but developing to make everyone’s lives a little easier in my world!


Bookmarks for VSTS