Migrating from Azure to Linux
Migration of two websites from Azure to a Linux VM with .NET Core.
2017-04-18
Intro
This is more of a historical reference than a how-to guide. Most of this work was done in January and February, though I’ve revisited both projects on occasion since then.
I’ll be covering two separate sites, Wetzdoku and WetzNet. The first was very light and the second fairly complex (what I’d loosely call “medium”). Both sites were migrated for the same reasons: cost and flexibility. My LLC was graduating from BizSpark, so I’d no longer have the free Azure credits, and the services I was using would have worked out to a monthly bill of around $60. By moving to .NET Core I could install on any Linux VM and would no longer be tied to Windows hosts (which are far fewer and generally more expensive). It would also be a lot easier to take advantage of things like Let’s Encrypt.
Tech Stack Changes
Topic | Previous Stack | Current Stack
---|---|---
Tooling | VS 2015 | VS 2017, Rider (experimental; not used for production builds)
.NET | 4.5.2 Framework | 1.0 Core
Local Web | IIS Express | Kestrel
Hosting | Azure “standard” web site | Linux SSD VM, nginx proxy, Kestrel
SQL | Azure SQL Server | PostgreSQL 9.6
Database libs | Dapper, Simple.Data | Dapper, Npgsql
Server Framework | Nancy v2-clinteastwood | Nancy v2-clinteastwood
Server View Engine | Spark View Engine | Super Simple View Engine (SSVE)
Email (SMTP) | System.Net.Mail | MailKit
Email Templates | Nustache | String replace
Authentication | Custom + SimpleAuthentication | Custom
Deployments | VS Publish to Azure (including staging deploy slot) | VS Publish to Folder + git push to remote
Tasks | Azure Web Jobs + Queues | TBD Core CLI
Blobs | Azure Storage | Azure Storage + Cloudinary + AWS S3
Nancy is still in pre-release for v2, but I’ve found it to be very stable. I haven’t benchmarked performance, mainly because I haven’t personally noticed any lag. Anything that makes a call out of band (e.g. to the database) uses an asynchronous route, while simple views (e.g. an about page) are synchronous. The primary limitation with it on Core during this dev cycle was the lack of options for server-side rendering: only SSVE was Core-compatible, and the other view engines that supported .NET Standard were all either abandoned or waiting for the Core tooling to settle down.
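To illustrate the sync/async split, here’s a minimal sketch of a Nancy v2 module with one of each; the module, routes, and repository are hypothetical, not code from either site.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Nancy;

// Hypothetical data-access abstraction, for illustration only.
public interface IPostingRepository
{
    Task<IEnumerable<object>> GetRecentAsync();
}

public class PostingsModule : NancyModule
{
    public PostingsModule(IPostingRepository repo)
    {
        // Simple view with no out-of-band calls: synchronous route.
        Get("/about", _ => View["about"]);

        // Database call: async route, so the request doesn't block on I/O.
        Get("/postings", async (_, ct) =>
        {
            var postings = await repo.GetRecentAsync();
            return View["postings", postings];
        });
    }
}
```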
Wetzdoku previously supported both local and social logins, but the social logins were hardly used. I did spend some time trying to migrate to the new Core auth libs that integrate with OWIN, but moved on due to time constraints. Unless there’s user demand, I probably won’t revisit this anytime soon.
To work on the new server I used the relatively new “Bash on Ubuntu on Windows” to SSH into the VM, setting up a key so that I wouldn’t have to log in manually each time. I used SFTP via Bash to bring over the initial database export, but aside from that I primarily use git, pushing to a remote repo when I need to deploy a change.
Out-of-band functionality was previously implemented in Azure WebJobs that were fired off as necessary via a queue. WetzNet also used blob storage for image uploads, with some processing done in between (primarily resizing). “Soon” this functionality will be implemented in a Core-based command line app running on the server. The delay was partly to meet a deadline, but also to give the image manipulation tooling on Core some breathing room to become release-capable. Similarly, in the past I toyed with integrating SignalR for real-time push alerts (primarily for admins of WetzNet), but that’s still months (or longer) away from being production ready on Core.
Since the sites are no longer hosted by a cloud overseer, I have to keep the web processes running myself: I’m experimenting with systemd for one app and supervisor for the other (so far I’ve had better luck with systemd, so I’m leaning towards standardizing on it). Both sites have a free SSL cert via Let’s Encrypt that was quick and easy to set up.
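As a sketch of what the systemd side looks like, a unit file along these lines keeps a Kestrel process alive (the names, paths, and user here are placeholders, not my actual config):

```ini
# /etc/systemd/system/wetzdoku.service (illustrative)
[Unit]
Description=Wetzdoku Kestrel app
After=network.target

[Service]
WorkingDirectory=/var/www/wetzdoku
ExecStart=/usr/bin/dotnet /var/www/wetzdoku/Wetzdoku.dll
Restart=always
RestartSec=10
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production

[Install]
WantedBy=multi-user.target
```

After a `systemctl enable wetzdoku` and `systemctl start wetzdoku`, systemd will restart the process automatically if it ever dies.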
View Changes
The only Nancy view engine with support for Core at the start of the year was the baked-in Super Simple View Engine (SSVE). As its name implies, it’s quite simple (basically a regex replacer), so anything beyond basic logic required code changes throughout the application. For the smaller site this was merely a minor inconvenience; on the larger site it was a major pain point. So it goes with bleeding edge coding adventures. While I could have pivoted and switched to a client-side framework like React or Vue, given the tight deadline I really didn’t want another huge variable in play.
The good:
- Supports looping through collections, as well as an IF check for whether a collection is not null and has values
- If/IfNot boolean branching
- It’s fast
The bad/ugly:
- No nested logic within a single template
- If/IfNot must test booleans (as in: you can’t do an equality check)
- No formatting (numbers, dates, etc.)
- No custom code
- Debugging was problematic; often it would render incorrectly and not throw an error, so a fair amount of trial and error was involved.
The lack of custom code basically accounts for all of the other issues, which is hardly surprising. Workarounds primarily meant adding many new fields to models (e.g. “HasDescr”, “NameUrlFriendly”, “ModDateFormatted”), as sketched below, while others meant leaning on the NancyContext.ViewBag (e.g. for current user or environment info). The lack of nested logic was extremely painful for WetzNet, as the workaround is to use a partial template for each inception level.
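To make that concrete, here’s a sketch of the sort of SSVE-friendly padding the models ended up with (the class and exact fields are illustrative, borrowing the example names above):

```csharp
using System;

// A view model padded with extra fields because SSVE itself can't do
// equality checks, formatting, or custom code.
public class PostingViewModel
{
    public string Name { get; set; }
    public string Descr { get; set; }
    public DateTime ModDate { get; set; }

    // For @If.HasDescr / @IfNot.HasDescr: the template can only branch on bools.
    public bool HasDescr => !string.IsNullOrWhiteSpace(Descr);

    // Pre-baked formatting, since the template has none of its own.
    public string ModDateFormatted => ModDate.ToString("yyyy-MM-dd");

    // Pre-baked slug for building links in the template.
    public string NameUrlFriendly => (Name ?? "").ToLowerInvariant().Replace(' ', '-');
}
```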
Database Migration
Migrating to PostgreSQL after a decade-plus of SQL Server wasn’t a particularly painful experience, but it did require changes to many scripts. I had previously toyed with PostgreSQL, but these projects were my first real-world use. Thankfully the use cases were generally very common and easy to resolve. PostgreSQL has excellent documentation, and there are a good number of articles out in the wild for devs coming over from SQL Server.
I initially worked on the table scripts (a before/after sketch follows the list):
- Switched from PascalCase to snake_case, as it seems like the default and/or best practice.
- As SQL Server’s IDENTITY isn’t supported, switched to the PostgreSQL shorthand SERIAL
- For datetime fields, used timestamptz while generally defaulting to “now() at time zone ‘utc’”. This obviously also applied to SQL script changes in the applications.
- Switched uniqueidentifier to uuid
- Switched all string fields to text
- For special tables like many-to-many relationships, switched the primary key setup to PostgreSQL inline constraint syntax (e.g. a composite primary key)
- Switched bool defaults to true/false (instead of 1/0). This obviously also applied to SQL script changes in the applications.
- In my scripts I include the appropriate GRANT rights for my app’s database user, including on the SEQUENCE if SERIAL is used.
- For WetzNet, I also did some name cleanup for things I had wanted to change for a while. While not tied to PostgreSQL, it did impact the migration process, so it’s worth mentioning since it’s not an uncommon idea. It primarily impacted code changes, including the initial data import.
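Here’s a hypothetical before/after pulling most of those items together; the table and column names are made up, not from either site.

```sql
-- SQL Server original:
-- CREATE TABLE Posting (
--     PostingId int IDENTITY(1,1) PRIMARY KEY,
--     PostingKey uniqueidentifier NOT NULL,
--     Title nvarchar(200) NOT NULL,
--     IsActive bit NOT NULL DEFAULT 1,
--     ModDate datetime NOT NULL DEFAULT GETUTCDATE()
-- );

-- PostgreSQL translation:
create table posting (
    id serial primary key,
    posting_key uuid not null,
    title text not null,
    is_active boolean not null default true,
    mod_date timestamptz not null default (now() at time zone 'utc')
);

-- GRANTs for the app's user, including the sequence behind the serial column.
grant select, insert, update, delete on posting to app_user;
grant usage, select on sequence posting_id_seq to app_user;
```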
Next up were the view scripts (a small example follows the list). Worth noting that for most changes to a view you have to drop and re-create it; alters are very limited.
- Switched from SQL Server style “=” assignments to “as”. This obviously also applied to SQL script changes in the applications.
- Switched “age” of row to use date_trunc function
- Switched TOP to limit (worth noting since they’re not placed in the same position within a query: TOP comes right after SELECT, while limit goes at the end). This obviously also applied to SQL script changes in the applications.
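A small hypothetical example combining those changes (again, the names are illustrative):

```sql
-- Most view changes meant dropping and re-creating:
drop view if exists recent_postings;

create view recent_postings as
select p.id,
       p.title as name,                               -- was: Name = p.Title
       date_trunc('hour', now() - p.mod_date) as age  -- "age" of the row
from posting p
order by p.mod_date desc
limit 25;                                             -- was: select top 25 ...
```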
At this point I wanted data, both to verify things looked okay and so that I’d be able to test the application as I updated its code. For smaller tables, I simply wrote scripts to add the data. For the large ones, I instead wrote simple C# methods to pull from Azure SQL and then save to PostgreSQL. Most tables were under a thousand rows, but a couple were in the 10k range and the largest was around 145k; on average it processed about 1k rows/sec, so it was relatively quick (albeit not something you’d want if dealing with larger datasets, firing off constantly, etc.).
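The copy methods were along these lines; a minimal sketch assuming Dapper on both connections, with illustrative table and class names (note the serial sequence needs a setval afterwards when inserting explicit ids):

```csharp
using System;
using System.Data.SqlClient; // Azure SQL source
using Dapper;
using Npgsql;                // PostgreSQL destination

public class PostingRow
{
    public int Id { get; set; }
    public string Title { get; set; }
    public bool IsActive { get; set; }
    public DateTime ModDate { get; set; }
}

public static class DataCopy
{
    public static void CopyPostings(string sourceConnString, string destConnString)
    {
        using (var source = new SqlConnection(sourceConnString))
        using (var dest = new NpgsqlConnection(destConnString))
        {
            // Read from the old PascalCase schema...
            var rows = source.Query<PostingRow>(
                "select PostingId as Id, Title, IsActive, ModDate from Posting");

            // ...and write to the new snake_case one. Dapper runs the insert
            // once per element when handed a sequence.
            dest.Execute(
                @"insert into posting (id, title, is_active, mod_date)
                  values (@Id, @Title, @IsActive, @ModDate);",
                rows);

            // Keep the serial sequence in step with the copied ids.
            dest.Execute(
                "select setval('posting_id_seq', (select max(id) from posting));");
        }
    }
}
```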
The process for updating the database code in both apps was fairly straightforward and methodical. However, due to simple differences in scale and to some architectural changes for WetzNet, the pain level for the migration varied pretty dramatically once I got into the application code. Part of the reason was that I switched WetzNet from using a combination of Dapper and Simple.Data to Dapper-only (as Simple.Data didn’t support Core yet). Rather than use similar functionality from something like Dapper.Contrib, I rewrote the logic to use raw SQL queries instead. Again, it was a matter of picking the devil I knew rather than introducing another unknown. Wetzdoku was already entirely Dapper, so aside from having many fewer lines of code, it also had that going for it in a speedier migration.
For my local dev environment, I set up NpgsqlLogManager with parameter logging enabled; very useful while debugging.
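For reference, a sketch of what that setup looks like with the Npgsql 3.x logging API; it has to run once at startup, before the first connection is opened:

```csharp
using Npgsql;
using Npgsql.Logging;

public static class DevDbLogging
{
    public static void Enable()
    {
        // Write Npgsql's internal log to the console while debugging locally.
        NpgsqlLogManager.Provider = new ConsoleLoggingProvider(NpgsqlLogLevel.Debug);

        // Include parameter values in logged statements (dev only; this can
        // leak sensitive data, so don't enable it in production).
        NpgsqlLogManager.IsParameterLoggingEnabled = true;
    }
}
```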
App SQL changes (a sketch follows the list):
- Multiple SQL statements in one command require semicolons between them (I’ve made a habit of ending every statement with one)
- When inserting a row with a SERIAL key where you need the resulting value, append “returning id”
- Some statements require an extra word that SQL Server treats as optional (e.g. “insert into”, “delete from”)
- Replace MERGE with “on conflict” logic
- While System.Transactions was gone from Core at the time, basic ADO.NET transactions are still supported, so only minimal changes were needed for the few cases where I used them.
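A couple of those in Dapper form; a minimal sketch assuming an open NpgsqlConnection and hypothetical tables (the upsert requires a unique constraint on the conflict columns):

```csharp
using System;
using Dapper;
using Npgsql;

public static class SqlChangeExamples
{
    // "returning id": fetch the generated SERIAL value from an insert.
    public static int InsertTag(NpgsqlConnection conn, string name) =>
        conn.ExecuteScalar<int>(
            "insert into tag (name) values (@Name) returning id;",
            new { Name = name });

    // MERGE rewritten as "on conflict" upsert logic.
    public static void UpsertVote(NpgsqlConnection conn, int postingId, Guid userId, int score) =>
        conn.Execute(
            @"insert into vote (posting_id, user_id, score)
              values (@PostingId, @UserId, @Score)
              on conflict (posting_id, user_id)
              do update set score = excluded.score;",
            new { PostingId = postingId, UserId = userId, Score = score });
}
```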
Little Snags
Some things you can’t account for in advance, aside from simply padding the schedule and hoping for the best. This category could be referred to as “miscellaneous”, and as such might end up being tiny or massive depending on luck! Or skill, depending on which side you fall on, I suppose.
I use an email relay for Wetzdoku, and one snag was a bug in my VM host’s console for setting up a DNS TXT record. Thankfully support from both my relay provider and VM host were quick to respond and helpful (in short: relay = “not us!”, VM = “oops!”).
Another issue was when I set up the firewall on the server and didn’t leave a port open for SSH. That was a fun moment of “ohhhh no” before working through a reset process to regain normal remote access.
On Azure, I simply used some web.config overrides to do 301 redirects. Now they’re actually in my modules, which at first annoyed me but now seems like a better approach (less likely to forget they exist).
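In Nancy module form the redirects look something like this (a sketch with hypothetical paths):

```csharp
using Nancy;
using Nancy.Responses;

public class LegacyRoutesModule : NancyModule
{
    public LegacyRoutesModule()
    {
        // Permanently redirect an old URL to its new home.
        Get("/old-path", _ =>
            Response.AsRedirect("/new-path", RedirectResponse.RedirectType.Permanent));
    }
}
```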
For most of the migration process I was using VS 2015 and had a hit-and-miss experience debugging Core. The main “miss” was that I had no local variables when stepping through code, so, much like my first attempts at debugging in college, I’d have to add console writes as needed. That was less than fun, but I’m happy to report debugging works fine in VS 2017.
My dev machine is old. It still performs fine and has two SSDs (the original 256GB for the OS and a secondary 480GB with programs, code, etc.), but it had some old things still installed. Once I finished migrating those two apps to PostgreSQL, I made sure I had backups of everything from SQL Server and then removed (in some form) SQL 2008, 2012, 2014, and 2016, followed by a clean install of SQL 2016 Express. I similarly removed all my old Azure libs/frameworks, old ASP.NET frameworks, and so on. I ended up saving around 10GB of space, which was nice.
Results
I’m using a nearby datacenter for my hosting, and to be honest the performance so far has been so good that I can’t tell the difference between it and my local machine. This was not remotely the case on Azure, especially on the web side. Its cost is about 1/10 of what I was paying on Azure.
Since both sites are (unfortunately) rather low volume, they’re hosted on a rather small VM with only half a GB of memory. The server generally sits around 66% memory usage, with around 50% used by .NET Core and PostgreSQL combined. CPU and disk are barely blips on the radar. Since I still need to add a command line app for processing out-of-band tasks, I’m considering this server “full”, at least for any dynamic sites.