Tsk. Snow is forecast for mid-week – and the local media are going insane over the possibility of between 3” and 12” (7cm and 30cm) of snow.
Here’s the forecast:
Updates from “Snowpocalypse central” during the week.
I got five more servers out of the rack this weekend.
Web server – this was a nice fat box for virtualisation; now repurposed with a decent video card, it’s the media server for the home theatre.
Old media server is now dead. So long!
Old MP3/iTunes/TVersity server is now consolidated into the new media server – the last Windows 2003 box is gone! So long!
I also managed to decommission the old management server – it originally ran ZCM 10 (natch), then ConfigManager, then SCE. That’s all managed from the cloud now – so that box is dead, along with the SQL Server next to it. So long!
I’ll take a pic of the empty rack. It’s got two UPSes, the archive server and the firewall – that’s it. Looks pretty lonely.
After a decade of static IP and reasonable bandwidth, I finally ended my self-hosting ‘experiment’ and ‘learning lab’.
Before we moved to the US I ran NetWare 5.1 from my WW2-era bomb shelter under my house in Nottingham. This was the home for my email (running NIMS – subsequently NetMail) and a semi-static list of resources, links and thoughts.
After the move to the US the server went through NetWare 6.5, then Red Hat Linux 7, then SLES 8. SLES 8 served admirably as a photo host and hand-written blog platform when kid #1 appeared in mid-2003. The hand-written weblog soon moved to Blogger.
In early 2005 the number of photos and blog updates grew too large, and a combined upgrade to SLES 9 and WordPress was called for. NetMail moved from Red Hat to Windows Server, then on to SLES 10 (NetMail was showing little innovation by then, and was sold off shortly after), and finally up to Google Apps.
The final incarnation of the blog server ran SLES 11 SP1 on top of Hyper-V/Windows Server 2008 R2 – still running WordPress and all of the various add-ons.
The server is now offline and the VM backed up – it’s going to be rebuilt as a media server.
The blogs and photos are all now hosted on GoDaddy – and mail is on hosted Exchange.
I’m most of the way through the infrastructure changes at the moment.
Step one – the mail switch – was relatively painless. It needed some careful planning, but there was zero downtime.
As I wrote a couple of months ago – the old mail lives on at Google Apps. Everything new is in Exchange.
Step two of the move was more complex and painful. I decided to change DNS hosting and consolidate the various registrars I’d used over the past decade. What should have been a week-long process of sign-up, DNS unlock, auth-code request and transfer ate up most of my time.
I moved away from register.com and Network Solutions (resold by Covad). Getting the domains unlocked and the auth codes from register.com was a snap – they were efficient, friendly and knowledgeable, and it took about five days. Covad was a nightmare: five weeks total and multiple escalations. During that time Covad also managed to completely screw up the zones.
Step three is mostly complete too. Only one blog site left to move – and the photos are uploading right now. Here is the source of my real frustration with GoDaddy. They have pretty good (i.e. I get what I pay for) hosting and infrastructure, but some of the grid-hosting limitations and the associated responses from support are really frustrating.
The GoDaddy issue is that they either cycle the grid hosts (so an ssh/scp session is terminated) or they kill long-running processes. With four photo blogs, an insane number of photos, and some 80GB of data to move, I had to get creative.
First, copying the data via non-secure FTP wasn’t really my idea of fun. I started off with scp, but the remote host kept killing the connection. Next I tarred up the needed files – and the connection was killed. The final working solution to get the pictures up to GoDaddy was a convoluted tar – md5sum – split – scp – cat – md5sum – untar pipeline. Moving 80GB in 200MB chunks with a retry script at my end was not fun.
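That pipeline, scaled way down and with scp stubbed out as a local copy so the whole flow can be run end to end, looks roughly like this. The paths, file sizes, chunk size and retry interval here are illustrative – this is a sketch of the approach, not the exact script I ran:

```shell
# Sketch of the tar -> md5sum -> split -> (scp) -> cat -> md5sum -> untar
# flow. The scp step is replaced by a local copy for demonstration.
set -e
work=$(mktemp -d)
cd "$work"

# stand-in for the photo archive (real run: ~80GB of photos)
mkdir -p photos
dd if=/dev/urandom of=photos/a.jpg bs=1024 count=512 2>/dev/null
dd if=/dev/urandom of=photos/b.jpg bs=1024 count=512 2>/dev/null

# --- sending side ---
tar -cf site.tar photos
md5sum site.tar | awk '{print $1}' > site.tar.md5
split -b 200k site.tar site.tar.part.     # real run used 200MB chunks

# each chunk went up with a retry wrapper, roughly:
#   for p in site.tar.part.*; do
#     until scp "$p" user@host:upload/; do sleep 30; done
#   done
mkdir remote && cp site.tar.part.* remote/   # stand-in for scp

# --- receiving side ---
cd remote
cat site.tar.part.* > site.tar
[ "$(md5sum site.tar | awk '{print $1}')" = "$(cat ../site.tar.md5)" ] \
  && echo "checksum OK"
mkdir restore && tar -xf site.tar -C restore
```

The md5sum comparison at each end is what makes the retry loop safe – a half-uploaded chunk just gets sent again until the reassembled tarball matches.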
The next issue was untarring these enormous tarballs. The first site unpacked just fine; the second kept being interrupted – i.e. tar was getting killed. There is no ‘nice’ on the server, so no way to fly under the radar. It turns out there is a process time limit of something like 180 seconds, which means the practical limit for a single untar is about 13GB. My frustration with GoDaddy support was that they kept telling me to use FTP and that there was a 100MB limit for tar. I spoke to GoDaddy support right at the start of this process and offered to PAY to ship a USB drive with the 80GB of tarballs for an admin to dump onto my space. I’d say there’s a value-add opportunity for GoDaddy there.
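With hindsight, one way around a hard per-process time limit is to ship many small tarballs instead of one monster: each untar then finishes well inside the limit, and any chunk that does get killed can be re-run on its own without redoing the whole 80GB. A minimal sketch – the directory layout is made up for illustration:

```shell
# Workaround sketch for a per-process time kill: one tarball per
# top-level directory so every untar is short, and a failed one can be
# retried alone. Layout below is illustrative only.
set -e
work=$(mktemp -d)
cd "$work"

# stand-in for a photo blog's yearly folders
mkdir -p site/2009 site/2010
echo photo1 > site/2009/a.jpg
echo photo2 > site/2010/b.jpg

# sending side: one small tarball per directory
for d in site/*/; do
  name=$(basename "$d")
  tar -cf "chunk-$name.tar" -C site "$name"
done

# receiving side: unpack each tarball; re-run only the ones that die
mkdir restore
for t in chunk-*.tar; do
  tar -xf "$t" -C restore
done
```

The trade-off is more bookkeeping at the sending end, but it turns one fragile three-hour untar into a pile of retryable few-second ones.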
Change control and planning are king – see previous posts. Nothing went wrong, but there were things that could have been smoother. What I guessed would be a few weeks turned into a two-month project.
Test with real-world datasets. Migrating a test blog with 200 photos isn’t a valid test.
First-line support people often just repeat the knowledgebase. A 100MB limit for tar is unrealistic – tell people it’s a time-based kill rather than a size limit, and we can figure out a workaround.