The Move to DigitalOcean and DNSimple

I recently moved this blog onto DigitalOcean. For years it has been hosted on WordPress.com, and for a guy who writes software for a living, that’s kind of like cheating. I work primarily on the Microsoft stack, and I wanted to dabble in areas unknown.

There were a few hosting options I considered. Just about any cloud platform can run WordPress, but I wanted to own and manage a box running on a Linux distribution. Services that hide their internals, like Heroku, while awesome in their own right, were out of the question for what I wanted. I considered Amazon’s cloud and Microsoft Azure, but found that I could get a lower rate for a single slim server at Linode or DigitalOcean. Linode had a $10/month option for a server with 1GB memory, while DigitalOcean had a $5/month option for a server with 512MB, the same amount of memory as in a Raspberry Pi. I liked the price and I liked the idea of having to work with such tight constraints, so here we are.

DigitalOcean runs all their servers on SSDs, so firing up a new instance and taking complete snapshots is a painless process. You’ve got a variety of OSes and versions to choose from. They also have some pre-made images if you need a quick Gitlab or WordPress instance. I didn’t want to cheat, so I started from scratch and fired up a clean Debian Wheezy instance.

I’ve been intrigued by what I’ve heard about nginx, so I wanted to use that as my web server. I also wanted to use PostgreSQL, but found that WordPress doesn’t support any datastore besides MySQL. What I was looking for is summed up as a LEMP server (Linux, nginx, MySQL, and PHP). It’s like a LAMP server, but with nginx in place of Apache. I suppose LNMP was unpronounceable.

Now that it was time to get my hands dirty, I did a little googling and, lo and behold, Google sent me right back to DigitalOcean. It turns out they have a great documentation portal that clearly explains each step of the very stack I was interested in, on multiple OSes. I started my journey somewhere around here: How To Install Linux, Nginx, MySQL, PHP (LEMP) Stack on Debian 7.

It was almost too easy. Debian has such a rich set of software in its package repository that I could apt-get just about everything I needed without having to alter my sources list. Even WordPress is included in the stable repository.
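If memory serves, the bulk of the stack came down to a single line (package names straight from the Wheezy repos):

$ sudo apt-get install nginx mysql-server php5-fpm php5-mysql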

nginx

Getting nginx up and running was a breeze. I wanted to toss the WordPress blog under the blog subdomain, but I wanted to keep my SEO links from the top-level domain intact. This quick little nginx config section did the trick:

server {
    listen 80;
    server_name freakingawesome.net;
    return 301 $scheme://blog.freakingawesome.net$request_uri;
}

Every HTTP request for the bare domain now gets a 301 redirect to the blog subdomain, and I didn’t have to fire up an actual web application to handle it.

The WordPress setup is a bit longer since it’s running PHP, but overall I was impressed by the brevity of the server configuration. A few lines give you gzipping based on MIME type:

gzip on;
gzip_types text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;

And a few more lines can tie in your SSL certificate, given a server declaration that listens on port 443:

ssl on;
ssl_certificate /etc/nginx/ssl/blog.crt;
ssl_certificate_key /etc/nginx/ssl/blog.key; 

There’s an option to validate the configuration before turning it on, and there’s a way to reload the configuration without restarting the service.
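For the record, those look something like this:

$ sudo nginx -t
$ sudo service nginx reload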

The Rest

There wasn’t much to do on the MySQL and PHP side of things besides tweaking a few default settings to make things a little more secure.
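If I remember right, that amounted to little more than running MySQL’s hardening script and flipping one php.ini setting so PHP won’t try to guess at script names:

$ sudo mysql_secure_installation
$ sudo nano /etc/php5/fpm/php.ini    # set cgi.fix_pathinfo=0
$ sudo service php5-fpm restart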

When it came to installing WordPress, it really boiled down to a simple:

$ sudo apt-get install wordpress

Huh. That was less than dramatic. I expected a battle. After some configuration tweaks and updates to my nginx configuration, I was greeted with the standard blank WordPress installation. Then it was off to install a few plug-ins, export my data from the old instance, import it to this new blog, and that was it.
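Those nginx updates boiled down to a server block for the blog subdomain that hands anything ending in .php over to php5-fpm. A rough sketch (Debian’s wordpress package lands in /usr/share/wordpress, and the fastcgi_pass address needs to match your php5-fpm pool’s listen setting):

server {
    listen 80;
    server_name blog.freakingawesome.net;

    root /usr/share/wordpress;
    index index.php;

    location / {
        # let WordPress handle pretty permalinks
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        # match this to your php5-fpm pool's listen setting
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}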

DNSimple

Troy Hunt sold me on DNSimple, so I gave him an internet high five and a free month of service when I jumped on board.

It was again a painless process to move over my domain. Their domain management tools even have shortcuts for a bunch of other popular services: turning on the Google Apps service added all the CNAMEs and MX records that I had on my old host. This could all be set up before the final domain transfer, and that way I was able to cross over with minimal downtime.

Performance

So far, my little 512MB machine is humming along just fine. I’ve got the W3 Total Cache plug-in enabled to lighten the load a little bit. I’ll eventually tie in Varnish just for the hell of it, but as of right now, my little server is performing quite well for the modest 1k visits this blog gets each day (thanks to all you PRK Recovery googlers out there).

DigitalOcean provides some concise real-time graphs for monitoring bandwidth, disk, and CPU usage, as well as configurable warnings for heavy traffic.

Graphs for Public Bandwidth, Disk Usage, and CPU Usage

I think I’ll be plenty happy with DigitalOcean as my provider moving forward. I’ve only scratched the surface of what they offer, but I’m off to a good start. Kudos to the team for their easy-to-use tools and especially for the treasure trove of documentation they maintain.


Transit of Venus

It turned out to be a mostly clear evening to watch the Transit of Venus. Jen and I went to Holland State Park so we could watch from the beach, and I brought my trusty binoculars and some cardboard so I could magnify the sun without burning my retinas. It worked pretty well, though during the next transit, 105 years from now, I think I’ll invest in a tripod to mount the binoculars, because holding them by hand is a bit too shaky.

Who needs high tech when you have a steady hand?

There were a few pesky clouds for a while, but things mostly cleared up after 7:30 and stayed clear until sunset. The binocular-and-cardboard trick worked like a charm. The pictures aren’t the best because I only had a crappy camera phone, but there are plenty of better pictures online. These are mine, so deal with it.

That’s a whole freaking planet nearly the size of the Earth in that pinprick of shadow

I just love the fact that there’s a whole planet there in that little piece of shadow, nearly the size of our own Earth. It puts things into a bit of perspective. Carl Sagan put it much better in Pale Blue Dot, and the same thing can apply here. All our existence and hopes and dreams, wars and loves, they all fit into something about that big; something that, while only a relative stone’s throw away, casts a shadow the size of a grain of sand, and only if you look really hard. We’re pretty insignificant here. Let’s make the most of it. But enough soliloquizing.

A small crowd gathered around to see my high-tech solution, and I got to play science teacher for a little bit. A few were amazed that such a thing could be done with binoculars, and several were expecting something as large as a lunar eclipse and seemed a little disappointed at how small the shadow was. A little girl asked whether the shadow of Venus would appear on top if I flipped the binoculars, so, in the interest of science, we flipped them over and found that no, the image stays just as it is.

One of the guys I talked to had a welder’s mask that he let me borrow, and I was surprised at how well it worked; you could see the image just fine. Another fellow had a huge telescope with a solar filter hooked up to a laptop. He drew a bigger crowd than I did. Nerd jealousy.

When the clouds were mostly gone I found I could get a much larger, but much dimmer image from a little farther away when I had a bench to steady my hand. I thought this one was cool.

All in all, it was a whole lot of nerdy fun, and there was ice cream to be had, and a pretty sunset. Just before the sun went down, I hazarded some staring at the sun with only my sunglasses to protect me, and was pleased to see Venus’ backside for the last time in my life. The next transit of Venus is in 2117, and if our species hasn’t managed to eradicate ourselves by then, hopefully we’ll have another generation who will appreciate its beauty. The baton has been passed.

Cthulhu on my Kindle

My lovely wife bought me a Kindle as a gift, and I’ve been playing with it for the past few days. So far, I love this thing.

I’ve been spending a few days hoarding free books that I can find all over the internet. Amazon has a bunch of free books on their site, but also recommends other repositories. Project Gutenberg is pretty damn slick. Plus, today I found a totally free collection of H.P. Lovecraft’s works over at CthulhuChick.com. Well done, Ruth. You rock!

Amazon also has a nifty way of getting books to your Kindle. When you register your device, it gets its own @kindle.com email address (managed on amazon.com), and you can send books to it as email attachments; they’ll show up the next time your e-reader connects to the web. It accepts zip files as well as .mobi files and a few other formats. Plus, Amazon keeps a copy of the books you send over email, so if you accidentally delete something, like I already have, you can just pull it up under the Personal Documents section of the Kindle management page and resend it to your device.

I was a bit of a naysayer when these things first came out, but I can definitely see the benefit of having one. Now I just wish I could squeeze my huge-ass hardcover copy of the Autobiography of Mark Twain into digital format without buying it again, because that thing is freaking heavy.

I’m Sold on Dapper

Don’t get me wrong, I love my LINQ. I just have mixed feelings about LINQ2SQL, or anything that promises to make my life easier by allowing me to write fewer SQL statements.

I’ve been burned a few times too many by seemingly innocuous LINQ2SQL queries that ended up ballooning into resource hogs once deployed to the real world. Often, it’s a sleeper; some query that’s been running just fine for years and then, BAM! You get jolted by a spiked CPU like a shovel to the face because an email campaign hit some remote part of the site that hadn’t been pored over. A little digging finds that LINQ2SQL has drunkenly taken over the kegger at your parents’ house, smashing lamps and vases and shoving your friends into the pool, wreaking all sorts of havoc and running your CPU off the charts.

It’s that friend you learn to limit. He may be great in certain situations, like running the basic CRUD (Create, Read, Update, Delete) routines on all those cumbersome admin screens, but once you take him into the real world, once you expose him to all your other friends on your high traffic ecommerce site, once you give him a broader audience, you run the risk that he’ll show his true colors, and you may not like what you see.

Such has become my relationship with any ORM that promises to lift the burden of having to write straight SQL. It’s fine in the right circumstances and saves loads of time writing basic operations. But once you cook up a slightly more complex query and roll it into a public website with tens of thousands of hits an hour, it’s just not enough. Trusting the black box of ORM SQL generation often turns out to be a risky endeavor.

I’d rather be in direct control of what SQL gets executed when writing finely tuned database access. Thus, I’ve come to love what Dapper has to offer. Dapper, by the folks over at stackoverflow.com, is an extremely lightweight data access layer optimized for pulling raw data into your own POCOs (Plain Old C# Objects). It’s that perfect fit between the nauseatingly redundant world of SqlCommands and DataReaders, and the overzealous and overbearing friend you find in LINQ2SQL. No longer do I have to guess at what kind of query an ORM is going to generate. No longer do I have to worry that LINQ2SQL is going to fly off the handle and take up all my CPU trying to compile the same dastardly query over and over again. I can instead write the SQL myself and get it into my POCO of choice with less effort than it takes to bash my head on the keyboard.

For example, let’s say I’ve got this domain object:

public class OmgWtf
{
    public string Acronym { get; set; }
    public string Sentence { get; set; }
}

All I have to do to yank the data from the database is this:

using (var conn = new SqlConnection(ConnString))
{
    conn.Open();

    string sql = @"
        SELECT TOP 1 omg.Acronym, wtf.Sentence
        FROM OnoMatopoeicGiddiness omg
        JOIN WordsToFollow wtf ON wtf.OmgID = omg.ID
        WHERE wtf.ID = @WtfID";

    var omgwtf = conn.Query<OmgWtf>(sql, new { WtfID = 3 }).First();

    Console.Write("{0}: {1}", omgwtf.Acronym, omgwtf.Sentence);
}

The result is, of course:

SQL: I Squeal for SQL!

No longer do I have to suffer the fate of black box SQL generation when all I really want is a clean, easy, and fast way to get my SQL or stored procedure results directly into my domain objects. I’m sold on Dapper for many of my high-performing pages. As we maintain our sites and find the occasional bloated LINQ2SQL resource hog, we’re swapping out the queries to straight SQL, stored procedures, and Dapper, and it has really sped things up.
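Stored procedures are just as painless. Here’s a hypothetical version of the earlier query wrapped in a proc (GetOmgWtf is a made-up name, and you’ll need using Dapper; and using System.Data; in scope for the commandType parameter):

using (var conn = new SqlConnection(ConnString))
{
    conn.Open();

    // GetOmgWtf is a hypothetical stored procedure taking a @WtfID parameter
    var omgwtf = conn.Query<OmgWtf>(
        "GetOmgWtf",
        new { WtfID = 3 },
        commandType: CommandType.StoredProcedure).First();

    Console.Write("{0}: {1}", omgwtf.Acronym, omgwtf.Sentence);
}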

Go ahead, give it a shot yourself. It’s available on the NuGet Gallery, and only imports a single C# file; no extra assemblies required. They’ve got plenty of examples at the project site. I’m wondering how I ever lived without it.

Coming to Terms With Baseless Merging in TFS

Team Foundation Server has never been friendly when it comes to the complicated love triangles that inevitably arise from wanting to merge between three different branches. The only option we have is, at some point, to create a common merge point using a baseless merge. These tend to be pretty finicky, and if you don’t plan for them up front, they’re nearly impossible to deal with.

Our typical use for three-way merging arises when we branch from the trunk into an internal developer branch and, at the same time, create a partial branch (just views and content) for a third-party design group that we want isolated from the rest of the team. We need to be able to merge our internal code changes to and from their branch. These scenarios are usually project-based, and we get by on the fact that we can create both the design branch and the internal branch at the same time. When you have that synchronization, a baseless merge can be done between the two new branches to establish a merge history, which sets you up for a project lifetime filled with happy merges.

However, we ran into another scenario the other day, and I didn’t know whether we’d be able to handle it. Our client has a large codebase, and our general strategy is to keep the Trunk in sync with what’s live or approved to go live (though we’re contemplating another “Production” branch, which may make things easier). During the October to December timeframe, the busy season hits the websites as customers buy their product, and development on our side slows down, at least on the day-to-day small projects. We can’t leave large projects in the trunk during this time and risk accidental deployment.

We will have several large projects going on which won’t be released for several months. One of these projects is to finally upgrade all our disparate systems to .NET 4 and MVC 3. At the same time, we’ll have at least one more large project separate from the upgrade, but looking to use a lot of the fun new MVC 3 functionality. We need a three-way merge. We may have other projects coming down the pipe during these months as well, so I wanted to find a way to use baseless merging to ensure that all new projects could be merged with the Upgrade branch without polluting the trunk.

Future projects won’t have the same exact starting point as the designer branch scenario, but I found a way to mimic starting from the same point. It goes something like this.

Assume you have Trunk and Upgrade branches which were branched months ago and each has a lot of changes since then. You need to branch from Trunk into the new project branch, Foo, but then to also get the updates from the Upgrade branch.

  1. View history on the Upgrade branch and find the changeset at which it was branched. We’ll call that X.
  2. Branch from the Trunk to create Foo, but branch from changeset X.
  3. From the command line, do a baseless merge from Upgrade to Foo, specifying changeset X and including the /discard option (see the command sketch after this list). This causes it to only create the merge history, which is fine because the code is identical at changeset X.
  4. Now you’ve got your merge history. You can merge from the Trunk to get the latest of its code, merge from Upgrade to get the latest of its code, and everyone’s happy.
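
The command for step 3 looks roughly like this, where 1234 stands in for changeset X and the server paths are placeholders for your own branches:

tf merge /baseless /discard /recursive /version:C1234~C1234 $/Project/Upgrade $/Project/Foo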

And there you have it. Of course, now that I think about it, I’m wondering whether I’m over-complicating things. I bet I could get the same end result by merging the latest from Trunk to Upgrade, branching from Trunk to Foo at the latest changeset, then baseless merging from Upgrade to Foo and accepting all edits. For some reason, the list above seems cleaner to me, but they probably boil down to the same thing.

I guess the moral of the story is this: baseless merging is going to be a nightmare if you don’t plan for it up front. The trick of reaching into history for a common merge point probably has other uses as well, and I wonder whether something like it could also be used to bring together two separate branches which weren’t baseless merged up front. I’ll investigate that another time. In the meantime, I’ll dream wishful dreams of distributed repositories like git and Mercurial, which live in a land where this type of thing is supposedly mundane.

Lazy Loading By Using C# Yield Statements

I keep forgetting about that fun little yield statement we’ve got in C#. It’s the one that lets you magically turn a normal method into an enumerator.

The other day I was feeling lazy. I had a bunch of views that all shared the same model, which allowed for a bunch of different editors for different types of data. Some of them needed a list of features pulled from the database. The easy thing to do would have been to just slap that feature query result onto the model, but it was only needed by a single view, and it was an expensive query with some other nonsense happening to the list in the controller, so it was slowing down all the other views unnecessarily.

I then remembered the yield statement and wondered whether it would lazy-load, hoping that my slow loading method would never be called unless the view actually used it. It worked!

static void Main(string[] args)
{
    Console.WriteLine("Test 1 (we're not enumerating this time)");
    var foo1 = GetFeatures();

    Console.WriteLine("Test 2 (just peeking in the enumeration)");
    var foo2 = GetFeatures();
    Console.WriteLine("First entry in list: " + foo2.First());
}
        
// this is the expensive method
static IEnumerable<string> GetFeatures()
{
    Console.WriteLine("GetFeatures was called");

    // just pretend this is an expensive query
    var expensiveList = new string[] { "a", "b", "c", "d" };

    //and pretend we're doing something more complex
    foreach (var item in expensiveList)
        yield return item;
}

And the output:

Test 1 (we’re not enumerating this time)
Test 2 (just peeking in the enumeration)
GetFeatures was called
First entry in list: a
Press any key to continue . . .

Of course, all this lazy loading nonsense could be skipped if we just built the query logic inside the model’s collection get method, but that wouldn’t be very MVCy, now, would it? I’d rather set the list in the controller.
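To put it in the context of that editor scenario, the model and controller ended up looking roughly like this (the names are made up for illustration, and the usual System.Web.Mvc and System.Collections.Generic usings are assumed):

public class EditorModel
{
    // only the one view that actually needs this will enumerate it
    public IEnumerable<string> Features { get; set; }
}

public class EditorController : Controller
{
    public ActionResult Edit()
    {
        var model = new EditorModel
        {
            // GetFeatures uses yield, so the expensive query never runs
            // unless the view enumerates Features
            Features = GetFeatures()
        };
        return View(model);
    }

    static IEnumerable<string> GetFeatures()
    {
        // pretend this is the expensive database query
        yield return "feature a";
        yield return "feature b";
    }
}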

They Might Be Giants and Jonathan Coulton

They Might Be Giants and Jonathan Coulton came to town a few days ago. I fell in love with Jonathan Coulton a few years ago when I first heard his rendition of Baby Got Back. The rest of his repertoire is one giant nerdgasm after another. It was through his website that I found out he was coming to town, and the Giants were icing on the cake.

It was a great concert, but I sure wish that Coulton’s set was longer. He couldn’t have played more than a half dozen songs before rushing off the stage to allow for another hour of setup. It would have been much better to just give him an acoustic guitar during that interim so we could get our money’s worth.

I’ve always teetered between being a big fan of TMBG and getting too annoyed with their music. I love their variety and quirkiness and the constant struggle of trying to figure out what the hell they’re singing about without going to their wiki. On the other hand, I hate to sound petty, but sometimes I just can’t get over the grating voices of the lead singers. Usually I’m fine with it, but there are some songs where the earsplitting nasality of their vocals is just too much and I have to take a break. I’ve never been a fan of Rush or the Smashing Pumpkins for the same reason, but I can dig the Giants. They’re worth the extra effort.

The crowd was definitely a new one for me. It felt like I was in a nerdy internet forum, with all the current memes being represented. At one point I realized that I’m probably just as nerdy as the rest of the bunch, because I understood and enjoyed a lot of the obscure humor. I guess I just try to hide my inner nerd. These people were flaunting it. Good for them.