Category: System Administration


Hazel 5 Launch Postmortem

December 20th, 2020 — 4:57pm

As promised, I thought I’d write about my launch. While not disastrous, it had its share of bumps. I had hoped that I had learned something from the Hazel 4 launch four years ago. One of the issues then was server capacity. This year, I deployed an extra server. It was an asymmetrical setup: my main server handled the website and doled out free upgrades to recent purchasers, while the second server handled the store. Unfortunately, it wasn’t enough. The store still got swamped. Given how busy I was handling requests and trying to troubleshoot other issues, I didn’t have time to test and deploy yet another server, plus they would all be hitting the same database, so it was unclear how much it would help. Given that this type of load tends to subside within the day, I rode it out.

One thing I could have done to help alleviate this was to spread out sending messages to the mailing list. Before sending anything to the list, traffic was quite manageable. Once the list got blasted, so did my site. I’m not sure if my email campaign provider supports it, but sending messages in chunks or just slowing down the sending rate would have probably minimized the problems.

Then there were packaging issues. Hazel is codesigned and notarized, yet on some people’s systems, macOS would reject it, either wholesale or in parts. This is worth a whole post on its own so expect one later. Suffice it to say, I did fix some of the issues and came up with workarounds for the others.

And finally, there were actual issues with the software.

First was a bug with trial mode expiring soon after install. I had Hazel reset the trial period for people coming from a previous version, but that code contained a bug which I did not catch. I did have a beta period, but the version used then accepted Hazel 4 licenses, which meant that trial mode was never actually tested.

The other major bug was black backgrounds appearing in some views on 10.13. I take total responsibility for this as I did not test on 10.13. I did have a 10.13 partition on a drive I keep with various macOS versions, but it got nuked by an early Big Sur beta install that went awry. I tried reinstalling but my installer was corrupt. Add to that the fact that Apple doesn’t let you download old installers, and you can see how this fell by the wayside as other issues came up. It is ultimately my fault, and my apologies to those running 10.13. As for the issue itself, it seems that 10.13 has problems with certain named/system colors when the app is linked against the 11.0 SDK. The solution was to special-case 10.13 and use non-named colors there.

There were plenty of other bugs but those were the most apparent and the ones I had to address quickly.

And with all of the above issues, I had to deal with thousands of people reporting them. Especially in the first few days, it was a frantic balancing act of being responsive to users while trying to carve out time to investigate the issues they were reporting. Logic would dictate stopping the bleeding first (i.e. investigate and address the problems) but it’s hard to ignore the huge number of messages piling up. Some would say that having all that attention would be a good problem to have, but when it was happening, it sure didn’t feel like it.

Lessons to be learned:

  • Try and slow down or space out announcements. Having everyone find out at once is asking for trouble. One idea I toyed with before launch but didn’t implement was to have a preview for those on the mailing list. Have a separate store that was available early where they could purchase an upgrade before the release to the public at large. That might have helped with the initial crush.
  • When running a beta test, be mindful of the holes in your testing, including differences between the beta and final product and missing demographics in your pool of beta testers.
  • Keep your priorities straight. Not everything needs to be handled immediately. It’s ok to ignore stuff.
  • Accept that no matter how much you prepare, you are never fully ready for what comes next.

Oddly, I found that the press was noticeably absent. It seems that even though the Mac market keeps growing, there are fewer and fewer outlets reporting and reviewing Mac products. Hazel has enough of a following that it didn’t matter as much but it feels as if things have regressed on that front, which is a bit sad.

Next time, I’ll be talking about my journey into the nightmare world of code signing and notarization. Fun times to be had by all. Until then…

2 comments » | Hazel, Noodlesoft, Software, System Administration

Adventures in Email Hosting 2

May 19th, 2016 — 3:10pm

Yes, another journey into the world of email hosting. This time it’s a bit different. Instead of receiving email, today we talk about sending it.

It seems that during my recent launch I became collateral damage in the war against spam. Between a couple of email campaigns and a bunch of license emails, I had sent out a good amount of email; enough to cause me a ton of headaches as certain mail services (actually, mainly one: Gmail) decided to mark a lot of it as spam (if I was lucky) or make it disappear entirely.

The way things were set up, my email campaigns were sent out by Campaign Monitor, my license emails by my server and my regular “interactive” emails via Rackspace’s email service (as described in a previous installment). The ones I had the most problems with were the first two, as they were repetitive messages sent out in volume.

For a while, I’ve had SPF records set up. What are SPF records? In a nutshell, they’re a way for you to specify which mail servers are the “official” ones your email comes from. This helps receivers identify mail coming from you as opposed to a spammer posing as you. You set it up by creating special DNS TXT records listing the specific servers for your domain.
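
Just to make that concrete (the address and provider below are made up; yours will differ), an SPF record is nothing more than a TXT record on your domain listing the permitted senders:

    noodlesoft.com.  IN TXT  "v=spf1 a mx ip4:192.0.2.10 include:mailprovider.example ~all"

The a, mx, ip4: and include: mechanisms name the hosts allowed to send for the domain, and ~all tells receivers to be suspicious of (softfail) anything else.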

Apparently, this isn’t enough. Seems like there’s another layer you can implement: DKIM. With this, you have your mail server sign outgoing emails so mail servers at the receiving end can know that emails are definitely from you and definitely not, once again, from a spammer imposter. So, I went ahead and set up OpenDKIM on my server. You can find various guides out there on how to install it on your OS and integrate it with your MTA (I use postfix and hooking the two up was pretty easy). You also have to add a DNS record listing your public key so other mail servers can verify your signatures.
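
To give a very rough sketch (the selector, key and port here are placeholders; whichever OpenDKIM guide you follow will have the exact values), the postfix side boils down to pointing it at the OpenDKIM milter in main.cf, and the DNS side is a TXT record under the selector you chose:

    # /etc/postfix/main.cf (assuming OpenDKIM is listening on port 8891)
    smtpd_milters = inet:localhost:8891
    non_smtpd_milters = inet:localhost:8891
    milter_default_action = accept

    ; DNS: the selector "default" is just an example
    default._domainkey.noodlesoft.com.  IN TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

where p= is the public half of the key pair that OpenDKIM generated for you.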

Even after doing that, it still didn’t seem to appease the Gmail gods. I found this page which recommended yet another thing: DMARC. Here, you specify a policy as a guideline to mail servers on how to handle your email. One of the things you can specify is an email address where mail services can send you reports on the emails you send them. And you guessed it, you implement it by creating a DNS record.
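
Again, purely as an illustration (the reporting address is just whatever mailbox you want the reports sent to), a minimal DMARC record looks something like this:

    _dmarc.noodlesoft.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@noodlesoft.com"

p=none means “just observe and report”; once you trust your setup you can tighten the policy to quarantine or reject.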

Being desperate, I thought I’d do it, hoping that Gmail would send me a report telling me what I’m doing wrong. Soon after, I started receiving DMARC reports from all sorts of mail services (Microsoft, Yahoo, AOL, etc.). Over ten days later and guess who still hasn’t sent me one.

I’ve been getting fewer reports about not receiving emails lately, but that’s mainly because of decreased volume since launch. It’s still unclear whether Gmail is binning my emails at a high rate or not. Nonetheless, if sending out email is an important part of your business, I recommend doing the above. Even if Gmail seems to be hard to please, other mail services are more appreciative of the gesture.

Note that there is also the option of relaying all my mail through Rackspace. It’s still a possibility, but (a) I’m afraid of poisoning the well since my email is already being marked as spam and (b) using a shared relay opens you up to being blacklisted because of someone else’s misdoings. All in all, I feel that some level of redundancy is ok here.

When implementing the above, you can check the headers of an email received at the other end to make sure everything is set up properly. Here’s one from an email sent from my server to my Gmail account:

Authentication-Results: mx.google.com; dkim=pass header.i=@noodlesoft.com; spf=pass (google.com: domain of www-data@noodlesoft.com designates 2001:4801:7824:103:be76:4eff:fe11:5179 as permitted sender) smtp.mailfrom=www-data@noodlesoft.com; dmarc=pass (p=NONE dis=NONE) header.from=noodlesoft.com

As you can see, it shows that my SPF, DKIM and DMARC checks all pass. That doesn’t guarantee anything but it helps.
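
You can also check that the records themselves are published correctly with dig, without waiting on anyone’s mail server (the DKIM selector below is hypothetical; use whatever you configured):

    dig +short TXT noodlesoft.com
    dig +short TXT _dmarc.noodlesoft.com
    dig +short TXT default._domainkey.noodlesoft.com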

You can also check out Google’s Postmaster Tools site. It will give you feedback on various metrics concerning email from your domain. To set it up, you have to create a DNS record (see a running theme here?) with text it supplies you so that it can verify that you control the domain. After that, it will track your site.
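
The verification record is just one more TXT record containing the token Google hands you, along the lines of (token made up):

    noodlesoft.com.  IN TXT  "google-site-verification=AbCdEf123..."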

Also, yes, DNS once again: make sure all your regular DNS records are set up properly. Not only do you want an A and PTR record for IPv4, but also an AAAA and PTR record for IPv6, as more mail servers nowadays are checking for that.
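
For example (the addresses here are documentation placeholders), the forward and reverse records for your mail host should agree with each other, and ideally with the hostname your server announces in its HELO:

    mail.noodlesoft.com.        IN A     192.0.2.10
    mail.noodlesoft.com.        IN AAAA  2001:db8::10
    10.2.0.192.in-addr.arpa.    IN PTR   mail.noodlesoft.com.

plus the matching PTR under ip6.arpa for the IPv6 address. The reverse zones are usually managed by whoever assigned you the addresses, so this may be a control panel setting rather than a zone file edit.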

Until next time, here’s hoping I don’t have to resort to human sacrifice to get Gmail to accept my messages.

Comment » | Business, System Administration, Tools, Web

Cleaning Out Turd Files

May 17th, 2016 — 12:37pm

A little pet peeve of mine is the turd files that Emacs leaves behind all over the place.

For those that don’t use Emacs, when you edit a file with it, it keeps a backup of the original in a file with the same name, except with a tilde (~) at the end. The problem is that you have to manually clear them out. When you are jumping around in Terminal editing config files left and right, trying to get something working, you tend to forget where those files are. And yes, you can turn that off, but I like having the backups. And yes, you can tell Emacs to store its backup files in a common place, but that makes it harder to recover files should I need to check the backup version, especially when I’m editing a bunch of files with the same name (index.php anyone?).

Since Hazel before version 4 only operated on actual folders/directories in the filesystem, it couldn’t really handle cases like this where the files are strewn all over the place. Now with Smart Folder support, you can create one to match all the turd files and have Hazel clean them up for you. Here’s a Smart Folder and Hazel rule you can use yourself:

[Screenshot: Turd Cleanup Smart Folder and Hazel rule]

One important bit about the Smart Folder conditions is that I use “Filename” and not the more readily available “Name” or “File extension” attributes as the ~ is tacked onto the end regardless of whether there is an extension or not.

And of course, no reason to limit it to Emacs turds. Go ahead and edit the Smart Folder to include whatever other poops you have on your system.
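
Not a replacement for the Hazel rule (which keeps doing the cleanup for you), but if you just want a one-off sweep from Terminal, find can match the same files. Run it with -print first to see what it would hit, then swap in -delete once you’re comfortable:

    find ~ -type f -name '*~' -print
    find ~ -type f -name '*~' -delete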

Comment » | Hazel, Software, System Administration

Moving Off Of Feedburner

May 10th, 2016 — 9:37am

I’m finally going to start transitioning off of Feedburner. I haven’t done this before so I stand a good chance of screwing it up. From what I understand, when I switch things over, Feedburner should do a redirect to the new feed. If you find that it doesn’t work, you can always re-subscribe by going to the site.

As for where I’m taking the feed, I’m going back to hosting it myself via WordPress. There seem to be plugins that do the minimal statistics collection that I care about and I’m tired of using services that just end up shutting down.

Hopefully, you won’t notice a thing but I thought I’d post just in case something goes wrong.

Comment » | System Administration

Hazel 4 Development/Launch Post-Mortem

May 5th, 2016 — 7:37pm

It’s been a bit hectic because of the launch yesterday but I finally have a moment to post. Yes, Hazel 4 is finally out. You can find the release notes here.

Development was a bit rocky. I played with a bunch of different features but some of them didn’t quite pan out in a way that I liked. It felt like wasted effort in that the work didn’t result in a usable feature, but many of those features were only shelved temporarily. Oftentimes I end up having a eureka moment that lets a shelved feature finally be realized, so that’s something to look out for in future point releases.

That said, I’m happy with the features that I did get to work. They seem simple on paper but involved a bit more thought than would be expected. Sync is always tricky and getting the preview feature down to something as simple as it is now took a little doing.

Along with that was the site re-design (courtesy of the folks at Brotherhood). The previous site was mostly static. Adding content involved hand-editing raw HTML pages and adding them to the site. It was enough to discourage me from doing it often, and discourage me it did, as I ended up leaving the site very outdated. The new site is backed by WordPress, which will hopefully remedy that. The point here being that content can be added more easily using tools like MarsEdit or WordPress’s web editor. I’ve already added a few posts (a review and a couple of knowledge base articles) since the launch.

Also, the new site design is a bit more stripped down and streamlined. I’ve tried to reduce navigation in favor of search. Most of the site is searchable via the form on the support page so I recommend going there first and doing a search if you ever have a question about Hazel.

 

The launch itself went relatively smoothly (except one incident – more details below). I can credit most of this to using a VPS (virtual private server). VPSes are great as you can clone, rebuild and resize them as needed. It gives you an amazing amount of agility when deploying servers.

Before the launch, I set up a clone server so I could set up and test the new site. Since it’s a clone, no need to reinstall and reconfigure everything (though you do have to make some changes in places where the IP address or hostname is stored). You end up with pretty much an identical server to play with which went a long way towards making sure things were working properly.

When I launched, I transferred the new stuff over to the live server. That part went with little drama, but then disaster struck: I underestimated the load from tons of eager customers. The problem was that I had sent two email campaigns: one to those on my mailing list and another to those who purchased recently. The latter group received a message with instructions on how to get their free upgrade license. And guess what all those people decided to do immediately upon seeing that.

The result was that the site got slammed. More specifically, apache was overloaded. Enter VPS awesomeness #2: I was able to resize the server on the fly. It took a little while (maybe 15 minutes though it felt much longer) but the old server was still able to run, albeit very sluggishly, until the last minute when the conversion finished and it rebooted. After that, the site ran like butter and it was smooth sailing (at least as far as the server went).

 

Aside from some minor glitches (version 4.0.1 released this morning should address some of them), the launch has been pretty great. I just had the best day in sales in Hazel’s history so I’m pretty happy about that. My thanks to everyone who contributed, including Brotherhood, Jono Hunt for his icon and UI work, my beta testers, my friends in the Mac dev community and of course, all my customers who’ve been very supportive of Hazel all these years.

Comment » | Business, Hazel, Software, System Administration, Web

Adventures in Email Hosting

December 17th, 2009 — 12:06pm

As I’ve mentioned before, Noodlesoft’s online operations are happily hosted on Slicehost. A while back, I migrated my site and databases from DreamHost to Slicehost and haven’t looked back. Ok, well, not exactly. I didn’t migrate my email hosting over. There were several reasons for this:

  • Setting up and maintaining email software with all the features you want is a bit of a pain.
  • I wanted redundant MX servers. Getting another slice to do that is a bit costly.
  • I didn’t want email service to go down with my web site.

For the time being, I left email on DreamHost. It’s a hard habit to kick. They provide tons of bandwidth and disk space for cheap. Of course, given enough time, DreamHost will disappoint you, and disappoint they did. They changed the MX servers on me without notifying me. Since I have my DNS hosted elsewhere, I need to be warned ahead of time so I can make the proper DNS change. Not only that, there didn’t seem to be any overlap, so mail going to the old MX was being bounced. Not acceptable. It was at this point I decided to get all my business operations off of DreamHost. The point was only driven home further when DreamHost sporadically failed to resolve some mail aliases for a while afterwards.

What was I looking for in an email hosting provider? Several things:

  1. Reliability. This is where DreamHost seems to falter time and time again. With DreamHost, not only do things seem to go wrong more often, but their handling of the situation is unprofessional.
  2. Easy migration. This seems to be a bit more commonplace now. Providers now have a way where you can point their server at yours and have it suck out all the mail and folders. Even better if you can do it incrementally after the initial run to catch the extra emails that end up going to your old MX during the DNS transition.
  3. Support. I expect support to be accessible, responsive, competent and diligent. This is one of those times where I want to pay. With free, you have no leverage to get them to fix a problem in a timely fashion. This is my business here and I can’t afford to have unresponsive and unhelpful support if the service goes down.
  4. Features. This will be different for everyone but for me, I just need basic things like SSL for inbound (IMAP) and outbound (SMTP) and server-side rules/filtering.
  5. Reasonable resource limits. Some providers limit your bandwidth (both inbound and outbound) and all put limits on disk storage. The limits should be high enough that I don’t notice them.

Google Apps is an obvious choice but frankly, I don’t like how Gmail does IMAP. In addition, I was very unimpressed with Google’s support with Google Checkout and expected the same type of thing with their other services. See point 3 above.

I decided to try FuseMail. I heard some good things about them and pre-sales support was responsive so I signed up for a trial. Unfortunately, it didn’t work out. There were some odd problems here and there but ultimately it was because of unspoken usage limits. They flagged my account when I tried to do a full IMAP sync to my desktop mail client. Their claim was that I was using too many connections and suggested I should set Apple Mail to not download all messages. They violated point 5 in my list so I cancelled my account. Also, while their support was responsive, they weren’t actually always helpful. They seemed to be oriented towards getting a response out quickly instead of actually solving my problem.

I considered FastMail. Poking around on their site, it seemed to be a good fit for features but it felt like they were trying very hard to discourage you from contacting them. Even if you have a ticket system in place, not everyone that needs to talk to you is already a customer. I was turned off by their support pages and put them on the backburner.

I then poked around the Slicehost forums seeing what other people may be using. I discovered that Slicehost has a special deal with Rackspace (who own Slicehost now). I went to their site and was able to have actually useful conversations with sales and support people. I started my trial.

The migration seemed to go smoothly, albeit a bit slower than Fusemail’s migration tool. After the migration was done, I found a problem: all the dates on my messages were set to the migration date and not the actual original date received. After contacting support, I learned that their tool was resetting the IMAP INTERNALDATE which Apple Mail uses as “Date Received” (most other clients use the sent date in the mail header). After some tests with them, they finally fixed the problem internally. They performed a special migration for me and then I pointed my DNS at Rackspace’s servers. It’s unclear to me if/when this fix will get into the main migration tool; if you are considering signing up with them, I suggest contacting their support and asking first.

The verdict: I’m quite satisfied. After the setup hump, things seem to be running smoothly plus the Rackspace servers seem noticeably faster than DreamHost. A recent DreamHost-wide outage made me feel like I jumped ship just in time. I’ll probably keep my DreamHost account around in the short term for some random personal projects (which are expendable) but I’m glad that no part of my business is hosted with them anymore.

• • •

And before I go, I thought I’d share a procedure for those looking to migrate mail servers. Unfortunately, some of these points I came up with after a screw up or two on my part but now you get to benefit from my mistakes:

  1. If you are hosting DNS, turn down the TTL on your MX records. The TTL (time to live) is an indicator as to how long clients can cache this record. You want to turn it down so that when you switch to the new server, the cache won’t be stale for as long (there’s a zone file sketch after this list).
  2. On your new server, set up all accounts.
  3. On your new server, set up all aliases. Make sure they point to a valid address.
  4. On your new server, if the option is available, set up a catch-all address. This address will receive emails that are addressed to non-existent mailboxes/aliases. If possible, route it to a folder or account that is not used for anything else. Even if you don’t want this on normally, turn it on now until you are sure that all mailboxes and aliases are set up properly.
  5. Test locally. Check with your provider but some should shunt local mail without going to an external MX server (so you can do this before switching DNS). Make sure none of your aliases bounce.
  6. Do the migration. Depending on the size of your mailboxes, this may take some time (letting it run overnight is a good idea). Make sure to verify afterwards. Remember to check those dates!
  7. Update your MX records to point to the new server(s).
  8. When you feel comfortable that things are working ok on the new server, do a final migration to pick up any messages that arrived during the overlap period.
  9. Cancel old account.
  10. Pour yourself a scotch.

Note that steps #1-9 are optional.
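
To make steps 1 and 7 a bit more concrete (hostnames and TTLs here are placeholders, and the exact syntax depends on where your DNS is hosted), the idea is to shrink the TTL on the MX record ahead of time, then repoint it at the new server at cutover:

    ; before the move: lower the TTL so caches expire quickly
    noodlesoft.com.  300  IN MX  10 mx1.oldhost.example.
    ; at cutover: same record, new server
    noodlesoft.com.  300  IN MX  10 mx1.newhost.example.

Once things settle down, raise the TTL back to something normal.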

Update (Dec. 18, 2009): Oh the irony. A day after I posted this, Rackspace had an outage. On the email hosting side, it seemed to only affect clients connecting to access their accounts. Their mail servers were still able to accept messages during this time, which was the more important thing for me. Of course, I’d prefer stuff like this not to happen and time will tell how often it does. In the end, it was handled well and support was very quick to answer my follow-up questions after things settled down.

Comment » | Noodlesoft, System Administration

413 days

June 19th, 2009 — 1:42pm

No, not the number of days since I last posted, though yes, it’s been a while. I just happened to be looking at my server stats and noticed that I had an uptime of 413 days. I guess this post would have been more timely and poetic at the 1 year mark, but I have to say that I’m pretty impressed with Slicehost (warning: it’s an affiliate link, so if you sign up using that link, I get some credit). The last reboot of my slice, way back when, was when I upgraded to a bigger slice.

I’m sure other people on other providers can post similar numbers but seeing as I had come from DreamHost, I find it pretty amazing. And yes, that is an affiliate link as well as I still use them for other things – I can be a whore at times, too.

Looking back over the past 413 days, I only recall contacting Slicehost support once, and that was for an administrative issue. I do remember some network problems once but those were resolved within minutes. By the time I asked around in the IRC channel about it, it was fixed. For the most part, I almost never think of Slicehost. The fact that I can take them for granted says something about their reliability.

How have the other services and tools I’ve been using on my site fared during this time?

PotionStore has been great. And now that it has an in-app Cocoa store component, it’s even better. I’ve currently integrated it into the latest Hazel beta (forum account required) if you want to see it in action. Just keep in mind that while the app is beta, it is connected to the live store so all sales are real.

Between the two main transaction processors I use, PayPal has been far better than Google Checkout. Very few issues with the former (knock on wood). Unfortunately, when there has been an issue with Google Checkout, I’ve had to hunt to find a way to even contact them and then the email support has been pretty crummy. On the flip side, I can find PayPal’s phone number quickly and their support people seem very knowledgeable and when the call is over, the issue is resolved. Fortunately, Google Checkout accounts for a small number of sales.

For server monitoring, I have been using Montastic. At least, I thought I was. Recently I checked my account only to notice that it wasn’t really monitoring anything. After unwedging it, it seemed to not like my store certificate, bugging me with alerts regularly. It also seemed to be sending MIME mail of some sort which ended up as MMS on my phone. They’ve got pics of my server on fire or something? Annoying and potentially expensive. I’ve disabled it, so suggestions for an alternate server monitoring service are welcome.

I could end this post with “Here’s to another 413 days” but I know I have to do a server upgrade at some point which will break my streak. Nonetheless, it’s good to know that downtime occurs on my terms and not my provider’s.

4 comments » | System Administration

Maintenance: Shady Characters

January 13th, 2009 — 5:44pm

As you may or may not have noticed (more likely the latter), this blog was down for a chunk of the afternoon. I had to fix something, and, well, it took a bit longer than usual. You may have noticed that you’d see garbage characters like “ö” pop up in posts and comments. That’s because some time ago a WordPress upgrade changed the character encodings. I didn’t consider it a high priority issue and let it sit until now.

Following this article, I converted everything over only to realize that none of the actual characters were converted properly. Instead of trying to debug SQL scripts that could potentially destroy all my data, I went through and edited every character encoding screw-up by hand. It wasn’t so bad with my posts since I pretty much remember what I put in there. Fixing user comments was a different matter. Being on a perfectionist tear, I used the Wayback Machine to find the comments before I performed the fateful WP upgrade just to figure out if somebody used a smart quote or an em-dash. Fun.

Hopefully everything is back up and fixed. If you notice any other garbage characters floating around, please post here so I can fix it.

And yes, I’m overdue for a real post. All you have to do is ÃâπÀìâ,öå¢Ã,Å,ìãâπÃ∫, and I just might be compelled to write something.

1 comment » | Noodlesoft, System Administration, Web

Passenger On Board

July 22nd, 2008 — 7:47pm

I just switched PotionStore to use Phusion Passenger. Also known as mod_rails, Passenger is an Apache module that lets you run Rails applications directly within Apache. Unlike other Apache plugins like mod_php, Passenger still runs your application in separate processes. Previously, I had been using Apache as a proxy to a mongrel cluster. On the surface, this doesn’t sound much different, but Passenger does give you a couple things:

  • It maintains the pool of Ruby processes for you. It can adjust the pool dynamically as needed in case you want to reclaim memory when it is not busy, for example. You don’t have to worry about setting up and maintaining a separate set of servers like you do with mongrel. It gets restarted with Apache and you can also trigger it to restart just the Ruby stuff. One less thing to administer and monitor.
  • Lower memory footprint if you use Enterprise Ruby (also made by Phusion). It will share resources between the Ruby processes.

Luckily, Andy Kim already played guinea pig and tried it out to make sure it worked. Many thanks to him for that (and for the whole PotionStore thing to begin with, of course).

While the setup was fairly simple, I ran into a couple odd issues. For one, the Enterprise Ruby installer seemed to screw up the permissions of some of its files. All of its .so files, and a directory or Ruby file here and there, were set to be readable only by the owner. Make sure to check this before deploying. Note also that it installs as a totally separate Ruby installation, so run its version of gem to make sure your Ruby packages match what you had on “regular” Ruby. For those of you who are running PotionStore, make sure to do a rake rails:update, otherwise it’ll bomb and log a message telling you to do so.
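
For reference, and very much as a sketch (the paths and hostname are placeholders; the passenger-install-apache2-module script prints the exact lines for your install), the Apache side of a Passenger setup is roughly a LoadModule plus a vhost whose DocumentRoot points at the Rails app’s public directory:

    LoadModule passenger_module /opt/passenger/ext/apache2/mod_passenger.so
    PassengerRoot /opt/passenger
    PassengerRuby /opt/ruby-enterprise/bin/ruby

    <VirtualHost *:80>
        ServerName store.noodlesoft.com
        DocumentRoot /var/www/potionstore/public
    </VirtualHost>

Passenger spots the Rails app from the public directory and manages the processes itself; no proxy or mongrel cluster to configure.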

Unfortunately, I didn’t record the memory usage beforehand so I don’t know the exact gain. Based on my recollection, it does seem like I have maybe 20M or so more than I did before (for two Ruby processes). One odd thing I’ve noticed in my graphs is that my interrupts and context switches plummeted immediately. Not sure why that is but it seems like a good thing to me.

While this doesn’t remove Rails’ lack-of-thread-safety problem (each process still handles only one request at a time), it does at least make the deployment much, much easier and, with the memory savings, a bit more scalable, as you take less of a memory hit with each extra Ruby process. Especially for those of you who have not deployed yet, this will save you a bit of a headache in configuration (no proxy and mongrel setup). It’s only been up for a couple days so it may be too early to tell, but so far it’s been running fine.

Comment » | Ruby on Rails, Software, System Administration, Web

Any Way You Slice It

May 2nd, 2008 — 2:17pm

It’s been over a month since I moved to Slicehost, so I figure it’s enough time to make an assessment. Especially now that the MacUpdate promo is over, I actually have a sense of how well things hold up under load.

For those who don’t know, Slicehost is a hosting provider. What sets them apart from shared hosting providers is that they provide you with what they call a “slice” (other similar providers may call it a virtual private server or VPS). What this is is a virtualized server of your own. From your perspective, it’s like getting a dedicated server. You choose what OS you want (which right now consists of different Linux distributions) and you get root access so you can do whatever you want.

It differs from shared hosting in that your slice is like its own machine. It gets a guaranteed amount of memory and CPU, so even if your neighbor is a hog, it won’t affect your slice. Because of the way the Xen virtualization works, it is impossible to oversell on capacity.

Compared to getting a dedicated machine, it’s much cheaper and you aren’t tied to specific hardware. Blown power supply? Not your problem. I don’t know exactly what they do in this case, but I imagine they can move your slice to different hardware as needed.

Set up

You can read my original report on getting set up. You are expected to set up and administer the slice yourself. If you have the inclination and need a high level of control, then this is probably for you.

It only took me a weekend from getting my slice to having everything migrated over. Of course, being the tweaky type that I am, I spent some days playing with it and optimizing it. One of the benefits and dangers of having full control.

Upgrading

When I launched the new store, there was a problem. Ruby on Rails is a memory hog and as a result, I needed to upgrade to a bigger slice. Fortunately, Slicehost automates all that. Just log into the management console and request the larger slice. It takes a little while for the slice to get prepped but during that time your slice is still up. The downtime for the reboot was short (less than a minute) and that was it. The fact that it’s automated is a big deal to me as it means I can do it on the fly without doing a drawn out back and forth with a support person.

Traffic

As you may or may not know, Hazel was included in the recent MacUpdate bundle. Before the launch, I upgraded my slice again to 1024M in anticipation of the load. Turns out, this was unnecessary. The 1024 slice never broke a sweat. The load went up briefly to 0.4 once. Apache connections stayed below the upper limits of what a smaller slice would have been able to handle. Traffic was about 150K requests a day at its peak. I don’t really have a frame of reference for that except that it’s a good bit more than I usually get. In short, the 512 slice would have been able to handle it fine. The slice has performed better than with my previous providers and I haven’t noticed any slowdowns or downtime (except for when I restart things for maintenance). With the promotion over, I’ve downgraded my slice and things are still running smoothly.

• • •

After all this, I’ve only contacted support twice. Once in the beginning just to say hi and once today for an issue that ended up being an Ubuntu thing. In both cases, I received responses within the hour. Granted, I’m doing a lot of the things that support at other hosting services would do for you. It’s a trade-off between effort and control and I’m at that point where I need more of the latter. For the things I really care about, keeping the machine and network reliable, there has been nothing to report, and that is how it should be.

It’s too bad most providers price on bandwidth and storage space. I would have happily paid more per month if I could get higher availability and reliability (with the ability to run RoR – sorry Pair). Of course, with everyone claiming 99.999999% availability, it’s hard to differentiate oneself on this front so providers seem to just pile on the bandwidth/space like extra gravy hiding the bad meat.

I feel like VPSes are the future of hosting. The amount of computing power that you can cram into a 1U rack space is far more than most of us need or want to pay for. But virtualize it and divvy it up and you have a great scalable model for doing dedicated hosting. It’s probably greener too but I’ll let the hippies make a determination on that.

2 comments » | System Administration, Web
