
VPS hosting is available, affordable, and just a little bit scary.  To alleviate some of the risk of moving to a VPS that is not under our benevolent control, we need to set up a reliable data backup solution.  My setup has the server in the cloud backing up to my account.  To accomplish this on CentOS 6, you can use the following commands in a script that is executed periodically:


eval $(gpg-agent --daemon)
export PASSPHRASE=""
export FTP_PASSWORD="your password"
echo "Files backup to cloud"
duplicity --use-agent --encrypt-key YOUR_ENCRYPTION_KEY --full-if-older-than 4M /var/spool/duplicity/ webdavs:// && duplicity --use-agent --encrypt-key YOUR_ENCRYPTION_KEY remove-all-but-n-full 4 --force webdavs:// && duplicity --use-agent --encrypt-key YOUR_ENCRYPTION_KEY remove-all-inc-of-but-n-full 2 --force webdavs://
echo "Database backup to cloud"
duplicity --use-agent --encrypt-key YOUR_ENCRYPTION_KEY --full-if-older-than 4M /var/spool/holland/ webdavs:// && duplicity --use-agent --encrypt-key YOUR_ENCRYPTION_KEY remove-all-but-n-full 4 --force webdavs:// && duplicity --use-agent --encrypt-key YOUR_ENCRYPTION_KEY remove-all-inc-of-but-n-full 2 --force webdavs://
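To run the script periodically, a cron entry along these lines would do it (the script path and schedule here are placeholders of mine, not from the original setup):

```
# crontab entry: run the backup script nightly at 02:30
# /root/bin/cloud-backup.sh is a hypothetical path
30 2 * * * /root/bin/cloud-backup.sh >> /var/log/cloud-backup.log 2>&1
```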


These commands instruct duplicity to make a full backup every four months (--full-if-older-than 4M).  When the script runs in between (say, two months after the last full backup), it creates an incremental backup.  Duplicity keeps four full backups; this is specified by the remove-all-but-n-full 4 directive.  Specifying remove-all-inc-of-but-n-full 2 tells duplicity to remove all incremental backups except those belonging to the last two backup sets.  A backup set is a full backup plus its incremental backups.  Yes, this is a bit complicated.  Yes, it is worth it.

The end result is this: after sixteen months, there will be four full backups (a new full backup is created every four months).  The newest two of these will have their incremental backups as well.  This way, file history is completely preserved for the most recent eight-month period, and if we are really desperate, we can still recover a file that is twelve or sixteen months old.

We need to consider what happens at month seventeen.  We will have four full backups, and the oldest will be seventeen months old, so we should not expect to recover files that are over sixteen months old.  When we continue on to our twentieth month, the oldest full backup will be removed and we will be back to having only sixteen months of recoverable data.  Always think of your maximum time to recover a file as --full-if-older-than 4M times remove-all-but-n-full 4: 4 months * 4 full backups = 16 months of full backups.
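The rule of thumb above can be checked in the shell itself:

```shell
# Maximum recoverable history: full-backup interval (months)
# multiplied by the number of retained full backups.
full_every_months=4   # from --full-if-older-than 4M
fulls_kept=4          # from remove-all-but-n-full 4
echo "$(( full_every_months * fulls_kept )) months of recoverable history"
```

Running it prints "16 months of recoverable history", matching the arithmetic in the text.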

Storing four full copies of the data is (most likely) overkill for what I am doing.  These options were added when I decided that a full backup on another server lacked one thing: files that were deleted on the server were never deleted in the backup location.  Then, of course, my mind wandered toward using rsync instead of duplicity.  It's true that if all we need is a single backup of the files, rsync can provide a better solution (I do this with my personal photos and videos at home).  However, the point-in-time snapshots duplicity provides can be used for forensics and change tracking.  The script above isn't one size fits all.  Use the tools and settings that work best for you.

This setup absolutely saved this site and the other sites I run.  The SAN that this VPS (at the time of this writing) runs on became corrupted, and all of the data would've been lost had I not backed up to an external site.  If you are worried about the security of your backups, remember that duplicity has encryption built in.  Your data is secure; just make sure to back up your keys so you can read the files.

To conclude: backups can save you important information; the cloud is a great place to store your backups; duplicity is a great tool that can automate this process.


Alas, poor me: given an IP address that may change over time, a web address would have to be altered again and again to keep pointing to my home server.  Thankfully, there are sites that will host a domain name and accept updates when they are notified of IP address changes.  This is commonly referred to as Dynamic DNS.

There are dozens of Dynamic DNS services on the Internet. To pick the right one, I gathered some requirements of my own:

  • It must be supported by OpenWRT's Dynamic DNS scripts
  • It must be free to use (changeip is commercial only, so it is out)
  • It must be able to use a domain that I have already registered instead of one of their own domain names

Right now, I am trying out both services.  Here are some pros and cons of each.  Some of these details were unexpected; hopefully this will help others who are facing similar problems.

When a domain is put on the first service, other registered members are free to create subdomains off of your domain.  This is nice for the people creating the subdomains, but horrible if you are trying to keep any kind of brand consistency for your domain: someone can take a subdomain of yours and put whatever they want there.  To hide the domain from other users, a fee of $5 per month must be paid.

The part about the first service that came as a pleasant surprise is the update URL.  It doesn't contain the account's password; instead, it contains a unique key.  That way, if the key used to perform an update is compromised, the worst that can happen is that someone points the domain to a different web site.  The account itself stays safe; the attacker doesn't even learn the account name.  The second service allows two free domains before charging a nominal fee.  Those domains are yours, and other users can't create subdomains off of them.

The update URL for the second service, however, contains the user name and password for the account.  Anyone sniffing that traffic can compromise the entire account.
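To make the difference between the two URL styles concrete, here are two made-up examples (the hosts, names, and parameters are all placeholders I invented for illustration; they are not either service's real API):

```shell
# Token-style update URL: sniffing it exposes only a per-host key.
token_url="https://dyn.example.com/update?host=home.example.org&token=PER_HOST_KEY"
# Credential-style update URL: sniffing it exposes the whole account,
# which is why this style should only ever travel over SSL.
cred_url="https://user:secret@dyn.example.net/update?host=home.example.org"
echo "$token_url"
echo "$cred_url"
```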

Both services update the DNS record quickly after a change IP request is sent.

Both services have somewhat dated web pages.  The slight edge goes to the simpler of the two.

After all of that, it looks like the second service is the winner.  I don't like my password existing in clear text anywhere; however, the traffic can be sent over SSL to protect it from simple sniffing attacks.

Allowing other users to create subdomains off of one of my domains does not appeal to me at all.  That is the only issue that disqualifies the first service.  It is otherwise a great service.

If there is something that you would like me to try out, or if there is another service that I missed, please drop me a comment.


This blog has moved.  The URL is the same, but its location in the world has changed: it's now hosted at a third-party site instead of at my residence.  Here's why.

I am not a business:

Carriers argue that you are a business if you require a static IP address.  In fact, static IP addresses are not available in most carriers' standard plans.  They assume that people who subscribe to their service are content consumers, not content providers.  Businesses, on the other hand, are assumed to be content providers and are permitted to have a static IP address.

This wouldn't be an issue if there weren't such a dramatic price difference.  Plans that include static IP addresses cost twice as much as their counterparts.  Why?  In my case, this makes no sense.  There is no profit motive behind what I would do with the IP.  Why am I considered a business?

A good solution, from a consumer standpoint, would be to separate users into different classes.  There are plenty of customers who do need the firewalling and don't mind the dynamic IP that basic plans provide.  However, these features are just a nuisance to advanced users.  I would gladly pay $10 a month for a static IP.  Make it an option to add to the plan.

Internet companies are potentially losing money because they are not providing the services people want.  A $40 basic plan vs. an $80 business plan is a no-brainer, but if there were a $60 option in there...

I am not a hosting company:

There are things that I can do better than the hosting company and things that I do poorly.  Daily SQL backups, running in a dedicated Xen VM, chrooting the Apache server, as much processor as I can use, and the ability to install any piece of _free_ software I want are all benefits of having a server at home.  The technical word for it is a playground: I can do anything I want or am able to do (which is ~anything).

The hosted world provides better uptime, better speed, and manages the system and network administration.  The best part about hosting is the cost.  It’s $7 a month for me to host this and as many other sites that I’d like to build.

I am not average:

Giving up the network administration and the system administration was a tough decision for me.  It has been fun.  Everyone running DD-WRT with VLANs, custom firewall rules, and OpenVPN understands.  Likewise, everyone running Xen on a VLANed host, with more custom firewall rules and mod_security, understands.

But why:

It was fun to host, but did it amount to anything?  The skills I picked up aren't ones that I use on a daily basis anymore.  I haven't risen to celebrity status, or really had that many visits (this is more of a content issue).  It was a good amount of fun while it lasted; now I've been there, done that, and I could do it again.  But why?


Take 4

I post way too infrequently.  It seems like every fourth post is about how I had some sort of elaborate hardware failure.  So let me tell you about my most recent one.

Roughly a month ago, my NFS requests started failing.  This was odd.  The server was still happily running along but, after further investigation, proved totally unresponsive.  I knew this was bad; I just didn't know how bad.

After resetting the system, nothing happened.  Now I was really worried.  Several resets later, I sat back and pondered the results.  About half the time, the system would get halfway through POSTing.  Once, the system nearly booted, but hit a disk error and locked up.  Sporadic results like these point to the motherboard, the CPU, or, most likely, the power supply.

I took the power supply out of my main desktop box and plugged it in.  The system would now boot to the same point every time, but still threw disk errors and refused to fully boot.  This was a huge advance: any issue that is reproducible is explainable and solvable.

It was time to buy a new power supply.  It seems as though every time a component fails, I am able to buy something better and more advanced.  There is nothing that spawns learning quite like failure.  The power supply that I purchased was an Enermax Revolution 85+ (eight hundred and fifty watts!).  Enermax is my favorite power supply maker at this point.  This power supply had a few bonuses too: it is exceptionally efficient, it is fully modular, it can power two dozen or so hard disks, and it had a $70 rebate.  I am totally pleased with the purchase.

The next step was to figure out the disk issue.  With hard disk issues, ears are an effective troubleshooting tool.  Really?  Really.  If you hear a hard disk making sounds it doesn't normally make, back up your data immediately.  This tip would've saved my bacon on many occasions.  I noticed the server making odd noises days before the failure and should have acted then.  After the new power supply was installed, it was totally apparent: I had two failed disks.  The easiest way to see that a drive has failed is that it doesn't show up while the system is booting.  During the power-on cycle, a system checks its attached disks and, most of the time, displays each disk's specifications.  Two disks that were properly plugged in were not being detected; therefore, they were bad.  That, and I could hear that they weren't spinning up properly.

Disk failure shouldn't be an issue in servers.  I had RAID implemented on the disks, and RAID typically tolerates a disk failure, unless you use a level that doesn't.  Because of space concerns when building out the box, I had decided to use RAID 0 on the disks.  RAID 0 makes many disks appear as one disk while combining the storage capacity of them all.  Unfortunately, when one disk fails, all data is lost.  Only data that you can afford to lose should be put on an array configured this way.
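The capacity/redundancy trade-off is easy to put in numbers, using three 750GB disks (the drives mentioned below) as an example:

```shell
# RAID 0 stripes n disks into n * size capacity with zero redundancy;
# RAID 1 mirrors them, keeping one disk's capacity but surviving failures.
disks=3; size_gb=750
echo "RAID 0: $(( disks * size_gb ))GB usable, survives 0 disk failures"
echo "RAID 1: ${size_gb}GB usable, survives $(( disks - 1 )) disk failures"
```

Three times the space, but any single failure takes everything with it.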

Not all data was lost, however.  I did follow my own rule and only put expendable data on arrays that could not withstand a failure.  The problem was that I considered my main OS to be expendable.  The virtual machines, like the one that runs this site, were protected and recovered.  The flaw in this setup is obvious: when the main OS is down, the virtual machines can no longer run because they depend on it.  This was a classic mistake on my part.  I should've put the OS in a safer place, and that won't happen again.

The disks that I purchased as the new storage core of the server are from the Western Digital Black family.  I really like these drives because they are built for performance and because they are cheap: I purchased three of the 750GB model at $60 apiece.  I won't know how reliable they are until one of them fails, but the drives get good reviews, so I'm not too worried about it.

Two disks and a power supply at the same time?  How on earth could that happen?  My current theory is that the power supply didn't fail outright.  It degraded to the point where it couldn't muster the power to get the entire system running from a cold start.  The system had to cold start when I got a disk failure on my main system array.  The second disk was part of my backup array, which could survive a disk failure, so it is possible that disk had been in a failed state for some time.

I have to give props to Zalman and Seagate.  Both companies stood by their products' warranties and replaced the faulty hardware.  There were only three months left on the three-year warranty of the Zalman power supply that failed.  The disk was an enterprise-quality disk (though it failed, so...), and it had roughly two years left on its warranty.

Props also go to volume management and filesystem resizing utilities.  I used the CentOS 5.4 live CD as a recovery disk to transfer data from the disks after the operating system had failed.
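For anyone attempting the same kind of recovery, the usual live-CD sequence for getting at LVM volumes looks roughly like this (the volume group and logical volume names are placeholders, not taken from my actual system):

```
# after booting the live CD:
# vgscan                          # scan attached disks for volume groups
# vgchange -ay VolGroup00         # activate the group (name is a placeholder)
# mount /dev/VolGroup00/LogVol00 /mnt/recovery   # mount it and copy data off
```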

Another year, another hardware failure.  This is why only professionals (like me) should host their own equipment.  Typically, people are better off letting a hosting company handle problems like this for them.


It's been about four years since I first had a computer dedicated to running media on my television.  At the time, it was a rare thing to do.  TV resolutions weren't that good, streaming video wasn't as mature as it is today, and storage was more expensive and less available than it is now.

The biggest inhibitor to switching to the computer as the primary input on the television has been the audience.  My wife is the primary customer when it comes to the television and if she doesn’t agree to what’s going on, it ain’t happening.

There are a lot of applications and considerations that made this finally a workable (and nearly ideal) solution.  Here’s the list of things that needed to happen.

1.  The media needed to be there

A computer can be used to get media from sources that a TV just can't match.  Internet-based media is great because it is on demand by nature, which means it can be watched when it is convenient.  With all major networks streaming their shows and Hulu emerging, Internet media is almost good enough to replace traditional TV viewing by itself.

To make it easy enough to replace a standard television, an application that puts all of this media in one place behind a consistent interface is a must.  I use Boxee to fill this role.  Boxee gets rid of the need for a keyboard and a mouse to control the computer.  Also, it has an app for Netflix's streaming service.  With Boxee, I can watch Onion News with just a couple of clicks, then switch over to listening to a Shoutcast radio station.  Internet media is a check.

Even with all of the Internet media out there, HDTV is still a must.  This was the last form of media I got running through the computer, because it requires special equipment.  I didn't want to spend the $100 for an HDTV tuner card, and until March of this year, I didn't have a machine capable of running one anyway.  I finally caved and purchased the Elgato EyeTV Hybrid tuner.  It fills the role very well.  I can now watch HDTV on the computer.  As an added benefit, the included software (EyeTV) works as a PVR, meaning that live TV can be paused, rewound, and recorded.  I can set it to record shows that I would like to watch.  HDTV is a check.

I have a collection of pictures and videos that I keep on my home server, and these need to stream to the television.  I accomplished this with Boxee and an NFS share from my home server.  Boxee can connect to media across a network and display it.  My media is a check.
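As a sketch of the server side, an NFS export for a read-only media share might look like the following (the path and subnet are placeholders of mine, not from my actual configuration):

```
# /etc/exports on the home server (hypothetical path and subnet):
# /srv/media  192.168.1.0/24(ro,all_squash)
# apply the change with:
# exportfs -ra
```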

2. The hardware has to be there

Here's a money-making opportunity for someone: make a Mac Mini-sized machine with an HDTV tuner and N wireless, an OS that requires almost no maintenance but can play anything thrown at it (Linux?), the ability to display HD video without a glitch, and make it quiet.  There are a couple of possible options here.  For my own solution, I have a Mac Mini with components replaced to add a faster processor and N wireless.  There is a company that appears to be trying to solve this problem; see if there is something that may work for you (it wouldn't for my situation).  The EEE Box 206 may be a good fit as well.  I know of no solution that works for this right out of the box.

Game console manufacturers are trying to get into this market.  The Xbox 360 does Netflix streaming now, so it may be a possible solution for some.  But again, not for me.

N wireless is a must for HD content.  I am running a WNHDE111.  It is a good solution because it runs over the less-crowded 5GHz range.  In a place with a lot of other houses around, avoiding interference is key to smooth video playback.

3.  There must be a way to control it with one remote

A programmable remote is a must.  This is the part that will turn most people off.  I have four remotes: one for the TV, one for the home theater, one for the Mac Mini, and one for the EyeTV.  In a stroke of luck, I had purchased a programmable remote a couple of years back that works great for this: the One For All URC-9910B01.  All four devices are now programmed into the one controller.  This was a pain.  If you decide to build the ultimate computer-connected-to-a-TV setup, a good programmable remote is a must.

I also have a wireless keyboard and a gyro mouse.  These sit under the couch for the most part.  If you want to browse the web on the TV-connected computer, they are a must, but I can't expect most people to want a full-sized keyboard and a mouse under their couch.  The Logitech diNovo Mini is an interesting move in the right direction, but it is horribly expensive and doesn't do IR.  A perfect remote would use RF instead of IR to increase the range of the signal and remove the line-of-sight restriction.  This is probably the most sub-optimal part of this configuration.

A good application launcher is needed because my setup uses multiple applications.  Mira is the application I use to accomplish this task.  With it, it is easy to launch applications if, for some reason, you're stuck at the desktop without a mouse.

4.  It must be cheap

Overall, I have spent somewhere around $600 in hardware costs.  I am notoriously thrifty, though; all of the purchases were made in ways that avoided paying retail price.  My monthly cost is just what I spend on Netflix: $8 a month.

Compare this to what you would pay and the features you would get from cable or dish services.  I see it as a compelling option.   Good luck with your setups.


For those who just want to grab the applications and go: try out PictoMio and Picasa, and make sure you are running on a Microsoft platform.  Linux users are left in the cold on this project.  Sure, there are ways of doing this on Linux, but nothing as quick as the apps listed above.

The requirements for this project were as follows:

  1. Build a slideshow in less than an hour
  2. Incorporate the Ken Burns transition to keep the audience’s attention
  3. Incorporate video clips in between some of the slides
  4. Play audio during the presentation

When in this situation last year, I used PictoMio and found it a great application to use.  Transitions could be changed midstream, and timings could be altered on a per-picture basis.  But the application was unstable at the time, and that left a sour taste.  Slideshow applications cannot have instability: when you're the tech guy running the projector, the last thing you want is 100 people looking at you after an application crash.

When I was in this situation yesterday, I turned to Picasa.  The biggest reason for this is that I knew it would display slides without crashing.  Picasa packs a lot of features that I didn’t expect and ended up using often.

The two tools that helped the most were the automatic red-eye correction and the contrast/color correction.  A lot of the photos benefited from these tools, which added immensely to the overall quality of the show.

The slideshow aspect of Picasa is horribly limited.  The trick to getting the Ken Burns effect in Picasa is to use the movie maker.

There are a few limitations to the movie maker: only one transition effect can be chosen, only one slide duration can be set, and songs will not loop for the duration of the photos.  To more than compensate for this, Picasa offers a few features that enhance the slide show.

Text slides can be added in between pictures to convey information to the audience; there's nothing like a good setup for a funny picture.  Picasa also does well with integrating video: putting video in the middle of a slideshow is simple to do and works pretty well.  There is, unfortunately, a two-second delay after the video where the screen is black.  This appears to be the point where a transition would've been occurring.

I had to play the video in preview mode; there wasn't enough time to encode it and run it as a video file.  In preview mode, the videos were choppy, probably running at around 20 fps.  This didn't ruin the show, but it dropped my perfection goal a touch.

It is the day after the show, and I wanted to replicate the work on my Linux workstation and create the DVD.  However, the Linux version of Picasa is disappointing with regard to video, making this a non-starter.  I have to hop on my soapbox and again proclaim the Catch-22 that Linux is in: users won't use Linux due to the lack of applications and functionality, and applications and functionality won't come to Linux due to the lack of users.  Of course, that is improving, but it sure is slow going.

Overall, using Picasa to do the show exceeded my expectations.  Compliments from the audience abounded.  I am currently creating the video file to burn to a DVD to meet all of the requests I received for an encore.


The Palm Pre came out recently, and I had to get one.  Or two, depending on whether you count the returned ones.

It took 5 hours and 8 visits to 3 different Sprint stores to come up with nothing.

The Pre has a lot of things going for it.  WebOS is excellent; I never had an issue with the OS.  The problems that I had were with the hardware, in particular the screen.  Watching NFL Network on the phone was awesome, and the GPS and YouTube apps were good too.

The first Pre had the discoloration issue that is discussed at length elsewhere on the Internet.  Do a search for "Palm Pre discoloration" and you will see what I mean.  The second Pre had a black speck in the middle of the screen.

The picture below shows it, just above the 'C' in the text "Premium Channels".  The lower-screen discoloration issue is also clearly visible.


Two tries, and two defective phones.  The rep at Sprint would not replace the second one, as they have a one-return policy.

Speaking of Sprint: I don't think representatives have ever misled me so much about anything before.  I was told various things such as: "You have to have a repair center declare the phone defective before you can return the phone."  "You can't return the phone except at the store you purchased it from."  "You can't exchange the phone at this Sprint store."  "We can't put you on our list of people who want a Pre."  "We won't have another Pre in stock for 2 to 3 months."

I may as well have been handing them radioactive material, given the responses they were giving me.  The way that Sprint is handling this release is unkind at best.  Please be aware of that if you decide to purchase this phone.


The last piece of hardware that I expect to fail is the motherboard.  There are no moving parts and not much wear and tear.  Plus, I spent $250 on the last board, an AW9D-MAX, top of the line when it was purchased.  I would expect it to last more than two and a half years of on-and-off use.  Now that you already know what went wrong, I'll lay out the troubleshooting that finally led me to this conclusion.  Motherboard issues are extremely hard to diagnose.

My computer began power cycling itself for seemingly no reason about a month ago.  The times it would occur were inconsistent, though I could narrow it down to times when the system was under a lot of stress: a kernel compilation combined with watching a Flash video would take the system down within two minutes.  I also noticed that the crashes weren't always the same; sometimes I would catch a glimpse of a kernel panic on the console.

Instability is an awful thing, and it prompted me into action quickly.  The first things changed out were the processor and the video card; they were recent purchases and changes I was going to make anyway.  The issue persisted.

At this point, I went off the beaten path.  Every kernel or BIOS feature that could cause instability was checked.  I went through ten different kernel setups, flashed the BIOS, and reset the BIOS to factory defaults.  In a final attempt to convince myself that this was not an OS issue, I reproduced the problem on a live CD.

Memory can go bad at times.  I proved this was not the case by two methods: switching out DIMMs and running memtest86+.  The problem wasn't the memory modules.

There were only two options left: the power supply and the motherboard.  At this point, all hope of a painless fix was lost.  It was time to spend some hard-earned money.

I started by replacing the power supply.  I have had power supply issues cause flakiness before: when a system is under load, a poor power supply (or one with insufficient wattage) can no longer power the components of the system.  I decided that a modular, 80+ efficiency power supply would be worth it even if it wasn't the issue.  I am now the proud owner of an Enermax EMD625AWT power supply.

The Enermax power supply is great.  The fan doesn’t spin up unless the power usage is high, so it stays nice and quiet.  After reading a bunch of reviews on it, I am totally convinced that I made the right call on purchasing it.  However, it was not the problem.

The problem was the motherboard.  It had to be, there was nothing else.  That story is for another blog post.

Tags: ,

Updating a computer is quite straightforward: either type a command or click a couple of times, wait, and reboot.  When the computer restarts, everything is back to the way it was.  Right?

Well, the truth is that this may not always be the case.  Case in point: my last CentOS 5 update.  Without getting into the details, the end result was that a service I relied on was not running after the reboot.  After checking the logs, I discovered that it was due to the service being locked down to run in a specific manner (the port mountd could run on was restricted).
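For the curious, this kind of lockdown is typically done on CentOS by pinning mountd to a fixed port in /etc/sysconfig/nfs, so firewall rules can reference it.  A sketch (the port number here is illustrative, not the one from my system):

```
# /etc/sysconfig/nfs (CentOS 5): pin mountd to a fixed port
# MOUNTD_PORT=892
# then restart the NFS services:
# service nfs restart
```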

Security patches are the reason for the majority of updates to a stable system.  The software has a security issue, the software gets patched, and then the patch is sent to the users.  The cycle happens often and, most of the time, without negative impact.

There is another kind of "patch": one that restricts what a process or a user can do.  These are important because they improve the overall security of the machine.  However, they can also be quite the nuisance.  Security updates in the restrictive sense cause a lot of pain.  Remember when Windows XP SP2 came out and introduced Windows Firewall?  I do.  It was painful to adapt to in some cases.  The access controls built into Vista are another example of this.  Again, security restrictions causing pain.

I don't fault a software vendor for including major (user-affecting and application-affecting) updates when a new release goes gold.  Can anyone imagine how much more malware would be out there if Windows Firewall weren't on by default?  On the other hand, users and application developers prefer that their applications run correctly all the time.

A perfect OS would never need additional user or application affecting restrictions placed upon it.   It would be set and forget.  Security updates would still need to be applied, but everything that ran before the update will run after the update.   OSs on the market today are getting closer to this goal.

I'm pondering this because these are the decisions I like to make.  Do we release an update that will break users' applications (causing them grief), or do we allow a system to run insecurely (which could cause any number of issues)?  This question comes up in the technology world _very_ frequently.  Typically, there are costs associated with both choices, and a decision is made.

What is the right answer?  Well, it's mu, of course!  And that's why technology is challenging and fun at the same time.


This is something I've wanted to do for a while now.  I finally put the time into it and found a really good tutorial for those wanting to do the same.

