Tech


Linux has been working hard to make things work out of the box.  But, what happens when things are working?  Well, when things are working we want them to keep working.  That is why a Linux distribution that is supported for a long period of time makes sense.  Oracle has taken a huge step in this regard by making updates for Oracle Linux free.

You have probably heard the story before.  Oracle Linux stems from the well-known RedHat Linux distribution.  RedHat distributes updated packages for their distribution, but doesn’t provide them in an immediately usable form.  Other distributions have taken the packages RedHat makes and transformed them into a usable form.

CentOS was the most popular of these RedHat ‘clones’.  When people became concerned about the frequency of updates to CentOS, they turned to Scientific Linux.  Now that Oracle Linux provides free updates, there is another option.

Why is this a big deal?  It’s a big deal because no one likes to change their operating system every couple of years.  The last few updates to the Linux desktop have created a real backlash in the community.  The operating system should not get in the way or make the user relearn how to get work done.  The easiest way to accomplish this is to not have to perform a major update.  The “enterprise” distributions that I have listed are supported for 10 years!  That’s longer than most hardware will last.

Oracle Linux is supported by Oracle.  Given that the company sells a competing product, this is quite an interesting situation.  The fact that it is backed by a company of Oracle’s stability gives it a lot of credibility.  It also has features that are not in the other clones.

RedHat provides some great services to the Linux community.  Please don’t feel that I’m unhappy with them overall.  With regards to providing free and usable packages for an enterprise version of Linux, Oracle is currently doing a better job.

I am convinced that Oracle Linux is the way to go for an operating system that provides enterprise stability.  I would also like to thank Oracle for providing free updates and helping fill the gap that RedHat created all those years ago when they stopped providing free binary updates.

RedHat Enterprise Linux 6 doesn’t include Xen support out of the box.  Fedora and Ubuntu also feature KVM.  For a non-business user like myself, it was time to follow suit and move my virtual machines from Xen to KVM.  Eight hours of work and 26 virt-install attempts later, I finally found the magic combination.

While reading this, note that the machine used as the host is running CentOS 5.  There seem to be some changes between it and the packages that come with a system based on RedHat Enterprise Linux 6.

Note: After writing this, it seems the greatest benefit would come from reading this post completely, reading other resources, and then starting your migration.

Step 0: Copy

This step was easy.  My current virtual machines sit on LVM partitions.  After shutting down the virtual machines, running the dd command created a backup easily.

dd if=/dev/vms/webRoot of=/backups/webRoot.img
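
Should a restore ever be needed, the same command with the source and destination swapped should do it (assuming the target LVM volume still exists and is at least as large as the image):

dd if=/backups/webRoot.img of=/dev/vms/webRoot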

Step 1: Migration

Things went bad shortly thereafter. The initial plan was to run virt-v2v and simply migrate a CentOS 4 virtual machine that was running as a web server. But, alas, I had committed a great configuration “error” that I wasn’t even aware of. Instead of having the LVM block devices represent an entire virtual disk, I had them configured to be just a partition. virt-v2v was unable to read the partition. At least, that was the last error that was given. I found virt-v2v to be unforgiving. The error messages that it produced were cryptic. I had to look at the perl scripts to actually see what it was failing on. After taking into account that CentOS 4 was only going to be supported for another few months and the fact that I really wanted to install Scientific Linux for something, I did something that is usually not in my vocabulary….I gave up. The migration step was a failure. I’ve seen sites where people made it work (they were not using block devices as partitions as I was); it just didn’t work for me.

Step 2: Creating a new VM from an install disk

This isn’t hard to do; the part where this fell apart was that I was trying to do it in headless mode. The entire install was to be done from a console. Again, I’ve seen posts where people have this working. The man page for virt-install gives us a hint as to how to do this:

--nographics
    No graphical console will be allocated for the guest. Fully
    virtualized guests (Xen FV or QEmu/KVM) will need to have a text
    console configured on the first serial port in the guest (this can be
    done via the --extra-args option). Xen PV will set this up
    automatically. The command virsh console NAME can be used to
    connect to the serial device.

It says we need to create a console to connect to. No problem. In fact, here is a sample command that _should_ work.

virt-install -v --connect qemu:///system -n vm24 -r 512 --vcpus=2 --disk="path=/var/lib/libvirt/images/vm24.qcow2,size=25" --location /var/lib/libvirt/images/SL-61-x86_64-2011-07-27-Install-DVD --os-type linux --os-variant rhel6 --accelerate --network=bridge:br0 --prompt --extra-args="text console=tty0 console=ttyS0,115200" --nographics

It almost does work, but something has happened to my drives!

This ended my first evening of work on the project.  It was an absolute failure.  The only work that was done was to discover what didn’t work.  I again gave up trying to do the install from the console.

Day two was much better.  Most examples you will see on the Internet for virt-install use VNC to provide a virtual monitor’s view into the installation.  This is the approach I now recommend.

If you were trying to do this completely headless from a console, good luck to you :). For everyone else, break down and run the following:

yum install virt-viewer xauth

Do not forget xauth! Without it, ssh -X to your server will not work! You will look for a solution and find that people recommend running ssh -Y instead. The real problem is that you are missing xauth. (Yes, this did cost me an hour.)
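
A quick way to confirm that X11 forwarding is actually working (the user and host names here are just examples):

ssh -X admin@kvmhost 'echo $DISPLAY'

If that prints something like localhost:10.0, forwarding is up; empty output usually means xauth is missing on the server.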

Now run virt-install and leave off the --nographics and console parts:


virt-install -v --connect qemu:///system -n vm26 -r 1024 --vcpus=2 --disk="path=/var/lib/libvirt/images/vm26.qcow2,size=25" --cdrom /var/lib/libvirt/images/SL-61-x86_64-2011-07-27-Install-DVD.iso --os-type linux --accelerate --network=bridge:br0 --prompt --vnc

All of the drives are accessible. Hmmmm. I’m guessing that it works because of… well, I don’t know. Perhaps it has something to do with the CD-ROM drive being defined differently. It doesn’t make sense to pursue this further when the VNC method works just fine.

Step 3: OS install

This is done as normal. There were no installation modifications.

Step 4: Enable a console (not required)

VNC and virt-viewer work just fine. But, I was used to running virsh console to get a console on a machine and see its startup output. The method for this varies by OS. For Scientific Linux 6, I added this to the end of the kernel parameters in /etc/grub.conf:

text console=tty0 console=ttyS0,115200

In Debian Squeeze, this line was modified in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet text console=tty0 console=ttyS0,115200"

After running update-grub, the console worked.  Day (or should I type “night”) 2 was over and I had a working web server.
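
With the guest booted, connecting to the serial console is a single command (the domain name is whatever you called the guest):

virsh console vm26

Press Enter once connected to get a login prompt; Ctrl+] detaches.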

A note about transient domains

Are you getting this error:

“internal error cannot set autostart for transient domain”

When a domain is created just by running virsh create ${domain}.xml, the domain is considered transient. The biggest issue with this is that it cannot be set to autostart with virsh autostart ${domainname}. If you would like a domain to autostart and you already have the XML file, run these commands:
virsh define ${domain}.xml
virsh autostart ${domainname}

Then, the domain will be autostarted on boot. Run virsh start ${domainname} to start it up manually. This is touched on in the official documentation, but the actual commands that make it happen are not listed.
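To double-check, run virsh dominfo ${domainname}; among its output you should see something like:

Persistent:     yes
Autostart:      enable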

A Note About Paravirtualization

The virtual network and disk can be paravirtualized.    This will result in increased performance.  I’m actually writing this so I know how this can be done in the future.  This is covered extensively in other places.
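For future reference, here is a minimal sketch of what the relevant pieces of the domain XML typically look like (the source path and bridge name are placeholders taken from the examples above):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/vms/webRoot'/>
  <target dev='vda' bus='virtio'/>
</disk>
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>

The guest kernel needs the virtio_blk and virtio_net drivers for this to work; recent distributions include them out of the box.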

Here are the guides to help make this happen ( guide1, guide2).

Additional Resources

This is not a complete resource. I haven’t even mentioned firewalling or bridging. These topics are covered in other places on the web pretty thoroughly.

A great guide to get you started can be found here.
You will most likely have to play with the VM’s XML.  The details are here.
Another conversion document that I found helpful can be found here.
Debian images that can be used with KVM can be found here.


My OLPC XO-1 is staring at me.   It is appealing to my creativity.  It wants to be utilized. The problem is that I just can’t find a use for it.

This is the second one that I had purchased.  I bought one way back in the days of the original give one get one campaign.  That was before everyone was introduced to the netbook craze.  It served me well while I owned it.  It re-introduced me to the original Sim City 🙂 .  But, I had a laptop, so why did I need this other device that had less than a third of the power my laptop did?  I sold it about a year after purchase.

The hardware is still the best cost-to-benefit ratio that exists.  There are no other devices that have the same level of versatility and build quality.  That’s why I’m so confounded by the fact that after selling my original XO-1, and buying another a year later, I still haven’t found a good use for the thing.  At this point, all I’ve done is give someone a free laptop and make several small donations to eBay.

The intent of the re-purchase was to provide a bastion host to my home network.  The USB ports would provide ethernet (with an adapter), and a USB hub with some USB to serial adapters could transform it into a cheap console server.  The low power usage of the device would mean that running it 24/7 wouldn’t cost much.  The great battery life makes it act as though it has a built-in UPS.   It was perfect for that particular task. That was the plan, anyway.

The reality is that having a bastion host on my home network is overkill to the power of 17.  At the moment, my personal time allocation prohibits me from taking on tasks that are larger than overkill to the 15th.  It didn’t take me long to see that if I was going to alter my home network to add a bastion host, the path to the Internet for every other device would have to change as well.   It just didn’t seem worth the cost of admission.   This is doubly true when I already have a sweet router setup running OpenWRT (a post on that setup is in the works).

The latest builds of software for the XO-1 are great.  The best change is that the software is built from the Fedora base (currently, Fedora 14).  That doesn’t mean much if all you want to do is install the base set of software.  However, if, say, you wanted to install a Zabbix agent for monitoring or OpenVPN to turn it into a VPN server, you can.  The possibilities are quite large.

But, it is still sitting here; just staring at me.  Perhaps it would be put to good use as an education tool for someone not fortunate enough to own a computer…


Alas, poor me.  I was given an IP address that may change over time, so a web address would have to be altered time and time again in order to point to my home server.  Thankfully, there are sites that will host a domain name and accept updates when they are notified of IP address changes.  This is commonly referred to as Dynamic DNS.

There are dozens of Dynamic DNS services on the Internet. To pick the right one, I gathered some requirements of my own:

  • It must be supported by OpenWRT’s dynamic DNS scripts  (dyndns.org, changeip.com, zoneedit.com, no-ip.com, freedns.afraid.org)
  • It must be free to use (changeip is only commercial, so it is out)
  • It must be able to use a domain that I have already registered instead of using their own domain names (down to zoneedit.com and freedns.afraid.org)

Right now, I am trying out both zoneedit.com and freedns.afraid.org.  Here are some pros and cons of each service.  Some of these details were unexpected.  Hopefully, this will help others who are facing similar problems.

When a domain is put on freedns.afraid.org, other registered members of the service are free to create subdomains off of your domain.  This is nice for people who are doing the subdomains, but horrible if you are trying to keep any type of brand consistency for your domain.  Someone can take anything.yourdomain.com and put whatever they want there.  In order to hide the domain from other users, a fee of $5 monthly must be paid.

The part about freedns.afraid.org that came as a pleasant surprise is the update URL.  It doesn’t contain the account’s password.  Instead, it has a unique key in it.  That way, if the key to perform an update is compromised, the worst that can happen is someone else points the domain to a different web site.  The account itself is safe.  The attacker doesn’t even know the account name.

Zoneedit.com allows for two free domains before they start to charge a nominal fee.  The domains are yours and other users can’t create subdomains off of them.

The update URL for zoneedit.com contains the user name and password for the account.  Anyone listening to the traffic on the account can compromise the entire account.
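
To make the difference concrete, here is roughly the shape of the two update calls (both URLs are from memory and every value is a placeholder, so treat this as illustrative):

curl 'https://freedns.afraid.org/dynamic/update.php?YOUR_UNIQUE_TOKEN'
curl 'https://USER:PASSWORD@dynamic.zoneedit.com/auth/dynamic.html?host=home.example.com'

The first leaks only a revocable per-host token; the second leaks the credentials for the whole account.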

Both services update the DNS record quickly after a change IP request is sent.

Both services have somewhat dated web pages.  The slight edge goes to freedns.afraid.org just because of how simple it is.

After all of that, it looks like zoneedit.com is the winner.  I don’t like my password existing in clear text anywhere; however, the traffic can be sent via SSL to protect it from simple traffic sniffing attacks.

Allowing other users to create subdomains off of one of my domains does not appeal to me at all.  That is the only issue that disqualifies freedns.afraid.org.  It is otherwise a great service.

If there is something that you would like me to try out, or if there is another service that I missed, please drop me a comment.


This blog has moved.  The URL is the same, but its location in the world has changed.  It’s now hosted at a 3rd party site instead of being at my residence.   Here’s why.

I am not a business:

Carriers argue that you are a business if you require a static IP address.  In fact, static IP addresses are not available in most carriers’ standard plans.  They assume that people who subscribe to their service are content consumers and not content providers.  Businesses, on the other hand, are assumed to be content providers and are permitted to have a static IP address.

This wouldn’t be an issue if there wasn’t such a dramatic price differential.   Plans that contain static IP addresses are two times as much as their counterparts.  Why?  In my case, this makes no sense.  There is no profit motive behind what I would do with the IP.  Why am I considered a business?

A good solution, from a consumer standpoint, would be to separate users into different classes.  There are plenty of customers who do need the firewalling and don’t mind the dynamic IP that basic plans provide.  However, these features are just a nuisance to advanced users.  I would gladly pay $10 a month for a static IP.  Make it an option to add to the  plan.

Internet companies are potentially losing money because they are not providing the services people want.  A $40 basic plan vs an $80 business plan is a no-brainer, but if there was a $60 option in there…..

I am not a hosting company:

There are things that I can do better than the hosting company and things that I do poorly.  Daily SQL backups, running in a dedicated Xen VM, chrooting the Apache server, as much processor as I can use, and the availability of any piece of _free_ software I want to install, are all benefits of having a server at home.  The technical word for it is a playground.  I can do anything I want or am able to do (which is ~anything).

The hosted world provides better uptime, better speed, and manages the system and network administration.  The best part about hosting is the cost.  It’s $7 a month for me to host this and as many other sites that I’d like to build.

I am not average:

Giving up the network administration and the system administration was a tough decision for me.  It has been fun.  Everyone running DD-WRT with VLANs, custom firewall rules, and OpenVPN understands.  Likewise, everyone running Xen on a VLANed host, with more custom firewall rules, and mod_security understands.

But why:

It was fun to host, but did it amount to anything?  The skills I picked up aren’t ones that I use on a daily basis anymore.  I haven’t risen to celebrity status, or really had that many visits (this is more of a content issue).  It was a good amount of fun while it lasted, now I’ve been there, done that, and I could do it again.  But why?


Take 4

I post way too infrequently.   It seems like every 4th post is about how I had some sort of elaborate hardware failure.  So let me tell you about my most recent one.

Roughly a month ago, my NFS requests started failing.  This was odd.  The server was still happily running along but, upon further investigation, totally unresponsive.  I was thinking this was bad, but I didn’t know exactly how bad.

After resetting the system, nothing happened.  Now I was very worried.  Several resets later, I sat back and pondered the results.  About half the time, the system would get halfway through POSTing.  Once, the system nearly booted, but it hit a disk error and locked up.  Sporadic results like these point to the motherboard, CPU, or, most likely, the power supply.

I took the power supply out of my main desktop box and plugged it in.  The system would boot to a certain point every time, but still throw disk errors and refuse to fully boot.  This was a huge advancement.  Any issue that is reproducible is explainable and solvable.

It was time to buy a new power supply.  It seems as though every time a component fails, I am able to buy something better and more advanced.  There is nothing that spawns learning quite like failure.  The power supply that I purchased was an Enermax Revolution 85+ ( Eight hundred and fifty watts!!!!!! ).  Enermax is my favorite power supply maker at this point.  This power supply had a few bonuses too.  It is exceptionally efficient, it is fully modular, it can power two dozen or so hard disks, and it had a $70 rebate.  I am totally pleased with the purchase.

The next step was to figure out the disk issue.  With hard disk issues, ears are an efficient troubleshooting tool.  Really?  Really.  If you hear a hard disk making sounds it doesn’t normally make, back up your data instantly.  This tip would’ve saved my bacon on many occasions.  I noticed the server making odd noises days before the failure and should have acted then.  After the new power supply was installed, it was totally apparent.  I had two failed disks.  The easiest way to see that a drive has failed is that it doesn’t show up when the system is booting.  During the power cycle, a system will check the disks that are attached to it and, most of the time, display the specifications of each disk.  I could see that two disks that were properly plugged in were not being detected; therefore, they were bad.  That, and I could hear that they weren’t spinning up properly.

Disk failure shouldn’t be an issue in servers.  I had RAID implemented on the disks.  RAID typically allows for a disk failure; that is, unless you use a type of RAID that doesn’t.  Because of space concerns I had when building out the box, I decided to use RAID level 0 on the disks.  RAID 0 allows many disks to appear as one disk while combining the storage capacity of all of the disks.  Unfortunately, when one disk fails, all data is lost.  Only data that is OK to lose should be put on an array where the disks are configured in this manner.
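
For the curious, building a striped array like the one that bit me is a one-liner with Linux software RAID (a sketch; the device names are examples):

mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

Every block of a filesystem placed on /dev/md0 is striped across all three drives, so losing any one of them loses everything.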

Not all data was lost, however.  I did follow my own rule and only put data that could be lost on disk arrays that could not withstand failure.  The problem was that I considered my main OS to be something that was expendable.  The virtual machines, like the one that runs this site, were protected and recovered.  The problem with this setup is obvious.  When the main OS is down, the virtual machines will no longer be able to run because of their dependency on the main OS.  This was a classic mistake on my part; I should’ve put the OS in a safer place.  That won’t happen again.

The disks that I purchased to comprise the new storage core of the server are from the Western Digital Black family.  I really like these drives because they are built for performance and because they are cheap.  I purchased 3 of the 750GB model for $60 apiece.  I don’t know how reliable they will be until one of them fails.  The drives get good reviews, so I’m not too worried about it.

Two disks and a power supply at the same time?  How on earth could that happen?  My current theory is that the power supply didn’t fail.  It degraded to the point where it couldn’t muster the power to get the entire system running from a cold start.  The system had to cold start when I received a disk failure on my main system array.  The second disk was part of my backup array that could survive a disk failure.  It is possible that the disk had been in a failed state for some time.

I have to give props to Zalman and Seagate.  Both companies stood by their products’ warranties and replaced the faulty products.  There were only 3 months left in the 3-year warranty on the Zalman power supply that failed.  The disk was an enterprise-quality disk (but it failed so….); it had roughly 2 years left on the warranty.

Props also go to volume management and filesystem resizing utilities.  I used the CentOS 5.4 live CD as a recovery disk to transfer data from the disks after the operating system had failed.
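
The usual dance for growing a volume during that kind of recovery looks something like this (a sketch; the volume names and sizes are examples, and this assumes an ext3 filesystem on LVM):

lvextend -L +50G /dev/vg0/recovery
resize2fs /dev/vg0/recovery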

Another year, another hardware failure.  This is why only professionals (like me) should host their own equipment.  Typically, people are better off letting a hosting company handle problems like this for them.


It’s been about 4 years since I first had a computer that was dedicated to running media on my television.  At the time, it was a rare thing to do.  The resolution on TVs wasn’t that good, streaming video wasn’t as mature as it is today, and storage was more expensive and less available than it is now.

The biggest inhibitor to switching to the computer as the primary input on the television has been the audience.  My wife is the primary customer when it comes to the television and if she doesn’t agree to what’s going on, it ain’t happening.

There are a lot of applications and considerations that made this finally a workable (and nearly ideal) solution.  Here’s the list of things that needed to happen.

1.  The media needed to be there

A computer can be used to get media from locations that a TV just can’t match.  Internet-based media is great because it is on demand by nature, which means that it can be watched when it is convenient.  With all major networks streaming their shows and Hulu emerging, Internet media is almost good enough to replace traditional TV viewing by itself.

To make it easy enough to replace a standard television, an application that puts all of this media in one place, with a consistent interface, is a must.  I use Boxee to fill this role.  Boxee gets rid of the need for a keyboard and a mouse to control the computer.  Also, it has an app to utilize Netflix’s streaming service.  With Boxee, I can watch Onion News with just a couple of clicks, and then switch over to listening to a Shoutcast radio station.  Internet media is a check.

Even with all of the Internet media out there, HDTV is still a must.  This is the last form of media that I got running through the computer.  The reason is that it requires special equipment to get going.  I didn’t want to spend the $100 to buy an HDTV tuner card.  Until March of this year, I didn’t have a machine capable of running an HDTV tuner card anyway.  I finally caved and purchased the Elgato EyeTV Hybrid tuner.  It fills the role very well.  I can now watch HDTV on the computer.  As an added benefit, the included software (EyeTV) works as a PVR, meaning that live TV can be paused, rewound, and recorded.  I can set it to record shows that I would like to watch.  HDTV is a check.

I have a collection of pictures and videos that I keep on my home server.  These need to be able to  stream to the television.  I accomplished this with Boxee and a NFS share from my home server.  Boxee can connect to media across a network and display it.  My media is a check.
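
On the server side, sharing the media is a single line in /etc/exports (a sketch; the path and subnet are examples):

/srv/media 192.168.1.0/24(ro,all_squash)

Run exportfs -ra after editing, and Boxee can then browse the share over the network.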

2. The hardware has to be there

Here’s a money-making opportunity for someone.  Make a Mac Mini-sized machine with an HDTV tuner and N wireless in it, an OS that requires almost no maintenance but can play anything thrown at it (Linux?), the ability to display HD video without a glitch, and make it quiet.  There are a couple of possible options here.  For my own solution, I have a Mac Mini with components replaced in it to add a faster processor and N wireless.  There is a company that appears to be trying to solve this problem.  Visit http://www.neurostechnology.com/ and see if there is something that may work for you (it wouldn’t for my situation).  The EEE Box 206 may be a good solution for this.  There is no solution that I know of that works for this right out of the box.

Game console manufacturers are trying to get into this market.  The XBox 360 does Netflix streaming now.  It may be a possible solution for some.  But again, not me.

N wireless is a must for HD content.  I am running a WNHDE111.  It is a good solution because it runs over the less-crowded 5GHz range.  In a place with a lot of other houses around, avoiding interference is key to smooth video playback.

3.  There must be a way to control it with one remote

A programmable remote is a must.  This is the part that most people will be turned off by.  I have 4 remotes: one for the TV, one for the home theater, one for the Mac Mini, and one for the EyeTV.  In a stroke of luck, I had purchased a programmable remote a couple of years back that works great for this.  It is the One For All URC-9910B01.  All 4 devices are now programmed into the one controller.  This was a pain; if you decide to build the ultimate computer-connected-to-a-TV setup, a good programmable remote is a must.

I also have a wireless keyboard and a gyro mouse.  These sit under the couch for the most part.  If you want to browse the web on the TV-connected computer, they are a must.  I can’t expect most people would want to have a full-sized keyboard and a mouse under their couch.  The Logitech diNovo Mini is an interesting move in the right direction, but it is horribly expensive and doesn’t do IR.  A perfect remote would use RF instead of IR to increase the range of the signal and avoid putting line-of-sight restrictions on the user.  This is probably the most sub-optimal part of this configuration.

A good application launcher is needed because multiple applications are used in my setup.  Mira is the application I am using to accomplish this task.  With it, it is easy to launch applications if for some reason you’re stuck at the desktop without a mouse.

4.  It must be cheap

Overall, I have spent somewhere around $600 in hardware costs.   I am notoriously thrifty though.   All of the purchases were done in a way where I didn’t have to pay retail price.  My monthly costs are just what I spend on Netflix, $8 a month.

Compare this to what you would pay and the features you would get from cable or dish services.  I see it as a compelling option.   Good luck with your setups.


For those who just want to grab the applications and go: try out PictoMio and Picasa, and make sure you are running on a Microsoft platform.  Linux users are left in the cold on this project.  Sure, there are ways of doing this on Linux, but nothing as quick as the apps listed above.

The requirements for this project were as follows:

  1. Build a slideshow in less than an hour
  2. Incorporate the Ken Burns transition to keep the audience’s attention
  3. Incorporate video clips in between some of the slides
  4. Play audio during the presentation

When in this situation last year, I used PictoMio and found it a great application to use.  Transitions could be changed midstream, and timings could be altered on a per-picture basis.  However, the application was unstable at the time, and it left a sour taste.  Slideshow applications cannot have instability.  When you’re the tech guy running the projector, the last thing you want is 100 people looking at you after an application crash.

When I was in this situation yesterday, I turned to Picasa.  The biggest reason for this is that I knew it would display slides without crashing.  Picasa packs a lot of features that I didn’t expect and ended up using often.

The two tools that helped the most were the automatic red-eye correction and the contrast/color correction.  A lot of the photos benefited from these tools.  It added immensely to the overall quality of the show.

The slideshow aspect of Picasa is horribly limited.  The trick to getting the Ken Burns effect in Picasa is to use the movie maker.

There are a few limitations to the movie maker: only one transition effect can be chosen, only one slide duration can be chosen, and songs will not loop for the duration of the photos.  To more than compensate for this, Picasa offers a few features that will enhance the slide show.

Text slides can be added in between pictures to convey information to the audience.  There’s nothing like a good setup for a funny picture.  Picasa does well with integrating video.  Putting video in the middle of a slideshow is simple to do and works pretty well.  There is, unfortunately, a 2-second delay after the video where the screen is black.  This appears to be the point where a transition would’ve been occurring.

I had to do the video in preview mode.  There wasn’t enough time to encode and run it as a video file.   In preview mode, the videos were choppy.  It probably ran at a 20 fps rate.  This didn’t ruin the show, but it dropped my perfection goal a touch.

It is the day after the show.  I wanted to replicate the work on my Linux workstation and create the DVD.  However, the Linux version of Picasa is disappointing with regards to video, making this a non-starter.  I have to hop on my soapbox and again proclaim the catch-22 that Linux is in: users won’t use Linux due to lack of applications and functionality, and applications and functionality won’t come to Linux due to lack of users.  Of course, that is improving, but it sure is slow going.

Overall, using Picasa to do the show exceeded my expectations.  Compliments from the audience abounded.  I am currently creating the video file to burn to a DVD to meet all of the requests I received for an encore.


The Palm Pre came out recently, and I had to get one.  Or two, depending on if you count the returned ones.

It took 5 hours and 8 visits to 3 different Sprint stores to come up with nothing.

The Pre has a lot of things going for it.  WebOS is excellent.  I never had an issue with the OS.  The problems I had were with the hardware; in particular, the screen.  Watching NFL Network on the phone was awesome.  The GPS and YouTube apps were good too.

The first Pre had the discoloration issue that is discussed at length in other places on the Internet.  Do a search for Palm Pre Discoloration and you will see what I mean.  The second Pre had a black speck in the middle of the screen.

The picture below shows it.  It is above the ‘C’ in the text “Premium Channels”.  Also, the lower-screen discoloration issue is clearly visible.

[Photo: the Palm Pre screen, showing the black speck above the ‘C’ in “Premium Channels” and the lower-screen discoloration]

Two tries, and two defective phones.  The rep at Sprint would not replace the second one, as they have a one-return policy.

Speaking of Sprint: I don’t think I’ve ever had representatives mislead me so much about anything before.  I was told various things such as:  “You have to have a repair center declare the phone defective before you can return the phone.”  “You can’t return the phone except at the store you purchased it from.”  “You can’t exchange the phone at this Sprint store.”  “We can’t put you on our list of people who want a Pre.”  “We won’t have another Pre in stock for 2 to 3 months.”

I may as well have been handing them radioactive material for the responses they were giving me.  The way that Sprint is handling this release is unkind at best.  Please be aware of that if you decide to purchase this phone.


The last piece of hardware that I expect a failure from is the motherboard.  There are no moving pieces and not much wear and tear.  Plus, I spent $250 on the last board.  It was an AW9D-MAX; top of the line when it was purchased.  I would expect it to last more than 2 1/2 years of off-and-on use.  Now that you already know what went wrong, I’ll lay out the troubleshooting that finally led me to this conclusion.  Motherboard issues are extremely hard to diagnose.

My computer began power cycling itself for seemingly no reason about a month ago.   The times it would occur were inconsistent.  I could narrow it down to times when the system was under a lot of stress.   Kernel compilation combined with watching a flash video would take the system down within 2 minutes.  I also noticed that the crashes weren’t always the same; sometimes I would catch a glimpse of a kernel panic when on the console.

Instability is an awful thing.  It prompted me into action quickly.  The first things that were changed out were the processor and the video card.  They were recent purchases, changes I was going to do anyway.  The issue still persisted.

At this point, I went off the path.  Every kernel or BIOS feature that could cause instability was checked.  I went through 10 different kernel setups, flashed the BIOS, and reset the BIOS to factory defaults.  In a final attempt to convince myself that this was not an OS issue, I reproduced the problem on a live CD.

Memory can go bad at times.  I proved this was not the case by two methods: switching out DIMMs and running memtest86+.  The problem wasn’t the memory modules.

There were only two options left: the power supply and the motherboard.  At this point, all hope of a painless fix was lost.  It was time to spend some hard-earned money.

I started by replacing the power supply.  I have had power supply issues before that had caused flakiness.  When a system is under load, a poor power supply (or one with insufficient wattage) will no longer be able to power the components of the system.  I decided that getting a modular, 80+ efficiency power supply would be worth it even if it wasn’t the issue.   I am now the proud owner of an Enermax EMD625AWT power supply.

The Enermax power supply is great.  The fan doesn’t spin up unless the power usage is high, so it stays nice and quiet.  After reading a bunch of reviews on it, I am totally convinced that I made the right call on purchasing it.  However, it was not the problem.

The problem was the motherboard.  It had to be, there was nothing else.  That story is for another blog post.

