Today I was reading about a snafu over at GitHub, where an error caused an email to go out to way more people than intended, and was reminded of my own embarrassing brush with mass email failure.
In the early 2000s, I had my first real job running communications for an organization in Canada – the Canadian Convention of Southern Baptists. I was given far more responsibility than I probably should have been trusted with, as this story will no doubt illustrate. One of my early tasks was building an email newsletter system to manage our growing list of contributors, leaders, pastors and so on. In line with the times, I gave it a catchy internet name – .communicate. (And yes, the .com portion was in bold. I told you it was catchy!)
At the time, sending email newsletters was hard. The plethora of options available today (like Mailchimp and Campaign Monitor, to name a few) were not yet around. There were some off-the-shelf products available, but they were expensive and I was young and cocky and wanted to build something myself anyway.
So I set off building a subscription system coupled with an administrative panel where our staff could create new email campaigns, schedule when the email should be sent, and select which of our lists the email should go to. As was the case then and now, I spent days checking all the different email clients to ensure our designs looked halfway consistent throughout. In the beginning, things worked wonderfully. The lists grew steadily and we had a constant stream of emails going out each week.
A few months later I was in Atlanta for a meeting. I woke up in my hotel room, flipped open my laptop and noticed something odd in my inbox. A new email from the .communicate system that I was certain I had received the day before was back again. I looked back in my trash and sure enough, there it was.
Now, I am not a morning person. My wife can tell you story after story of times when she has asked me a simple question before 9am only to be left worrying about my health due to my incoherent answer. So for a few minutes I sat there staring at my inbox trying to work out how that could have happened.
Maybe the staff member who set it up accidentally scheduled it twice?
No, nothing in the database to suggest that.
Perhaps this hotel wifi is screwing with my email?
I could only be so lucky. Like any good developer, I chose a strong course of action. I trotted off to my meeting hoping the problem would just go away on its own. Surely some sort of anomaly that could only be explained by the internet gods had caused this, and all would be well tomorrow.
The next morning, I woke up in the same hotel and started over to my laptop.
Wonder if that email thing is still going on?
Before I could get there, my phone rang. It was one of our team members at the home office in Calgary. "Brad, we have a problem," they said as I opened my laptop to see that yes, the email had gone out again. In fact, it was going out to our largest list – thousands of people had received the same email for three days in a row now.
After yet another check of our system to make sure nothing was amiss there, I became convinced that something was wrong with our web hosting provider. We were using a bargain service out of Toronto, so getting in touch with them was difficult and getting them to admit they were a part of the problem was nearly impossible. I suspected that this particular email was stuck in their mail queue, so it would continue to go out indefinitely until we found a solution. Of course, they saw nothing wrong.
I spent the next 4 days fighting on two fronts. Our staff – not to mention the list's subscribers – became increasingly frustrated as the email continued to go out each morning, and our hosting provider became increasingly unavailable to help us troubleshoot. To make matters worse, I was due in Calgary for a meeting on day 8, so I was a pretty popular guy when I showed up at the office.
You know that email went out again this morning?!
We had so many complaints that we actually sent another email out instructing people to block our newsletter's email address and that we would be sending from a new address when things were back to normal. So yeah, we were in the tall grass.
Enough was enough. Convinced that our web host was to blame, our IT guy and I decided that we would transfer all of our web properties to a new host immediately. Normally, that's not the sort of thing you would do on short notice, but we were desperate. We spent half the night getting set up on a new host and transferring everything over. We flipped the DNS switch just after midnight and cancelled our hosting account at the old provider, so I thought that maybe, just maybe, it would be over.
The next morning, the email came again.
Maybe the DNS just needs to propagate.
The email came again for another two days. I was beginning to feel like Bill Murray in Groundhog Day. Just swap the annoying alarm clock for the email that wouldn't die.
Out of options and patience, I did the only thing I could think of. I called the old hosting provider, got their answering machine, and left a message that if they did not respond, I would get our legal team involved (we did not have a legal team that I was aware of). It was a bluff, but it worked. They called back in an hour, promised to disable all services associated with our cancelled account – including the mail servers – and whaddya know, the email was never heard from again.
Sometimes we bristle at the experience requirements listed alongside job postings. "How am I supposed to get experience without experience?" is the common refrain. But this story reminds me that it is indeed beneficial to learn some hard lessons in front of smaller audiences along the way. Luckily, we had some relatively patient subscribers at the time; it certainly could have been worse.
I love to hear stories like this when talking to other people. Don't tell me about the great things you've done. What's your biggest screw-up? You can probably learn more from a soldier's scars than you can from his medals.