RedHat to CentOS Server Conversion

RedHat Enterprise Linux is a great OS, but it does have some expense associated with it in terms of update entitlements. In some cases, it may be acceptable to use CentOS Linux instead. From the CentOS website:

The CentOS Linux distribution is a stable, predictable, manageable and reproducible platform derived from the sources of Red Hat Enterprise Linux (RHEL). Since March 2004, CentOS Linux has been a community-supported distribution derived from sources freely provided to the public by Red Hat. As such, CentOS Linux aims to be functionally compatible with RHEL. We mainly change packages to remove upstream vendor branding and artwork. CentOS Linux is no-cost and free to redistribute.

There are many articles which cover, in greater or lesser detail, the process of converting a running server from RedHat to CentOS. The process described here was tested and found to work for the particular servers I work with. YMMV, as they used to say on Usenet…

 

Prepare the Server and Download CentOS Packages

Log in to the server and become root. Ideally, you should do a yum upgrade as a first step to get all packages up to the latest versions. This may not be possible in some circumstances; the steps below have been tested on a server where the outstanding updates were not applied first, and there didn’t seem to be any adverse effect. Nonetheless, patching your servers is a Good Thing™ and is to be encouraged.
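If you do patch first, it is just the usual update run:

# yum upgrade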
 

Create a temporary working area

# mkdir -p /home/temp/centos
# cd /home/temp/centos

Verify your version of RHEL; these instructions apply only to v6.

# cat /etc/redhat-release

Determine your architecture (32-bit = i386/i686, 64-bit = x86_64)

# uname -i

Download (wget) the applicable package files for your release and architecture.

The version numbers on these packages will almost certainly have changed by the time you read this. Have a browse of the CentOS mirror site to find the current versions:

32 bit systems: http://mirror.centos.org/centos/6/os/i386/Packages/
64 bit systems: http://mirror.centos.org/centos/6/os/x86_64/Packages/

Replace the ‘x’ values below with the current version numbers. A sample set of wget commands is shown after the package lists.
 

CentOS 6.5 / 32-bit packages required

RPM-GPG-KEY-CentOS-6
centos-release-6-x.el6.centos.x.x.i686.rpm
centos-indexhtml-6-x.el6.centos.noarch.rpm
yum-x.x.x-x.el6.centos.noarch.rpm
yum-plugin-fastestmirror-x.x.x-x.el6.noarch.rpm

CentOS 6.5 / 64-bit packages required

RPM-GPG-KEY-CentOS-6
centos-release-6-x.el6.centos.xx.x.x86_64.rpm
centos-indexhtml-6-x.el6.centos.noarch.rpm
yum-x.x.xx-xx.el6.centos.noarch.rpm
yum-plugin-fastestmirror-x.x.xx-xx.el6.noarch.rpm
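As an illustration, the downloads for a 64-bit system look something like this. The ‘x’ placeholders still need to be replaced with whatever versions are current on the mirror, and the GPG key file sits one directory above Packages/ on the mirror:

# cd /home/temp/centos
# wget http://mirror.centos.org/centos/6/os/x86_64/RPM-GPG-KEY-CentOS-6
# wget http://mirror.centos.org/centos/6/os/x86_64/Packages/centos-release-6-x.el6.centos.xx.x.x86_64.rpm
# wget http://mirror.centos.org/centos/6/os/x86_64/Packages/centos-indexhtml-6-x.el6.centos.noarch.rpm
# wget http://mirror.centos.org/centos/6/os/x86_64/Packages/yum-x.x.xx-xx.el6.centos.noarch.rpm
# wget http://mirror.centos.org/centos/6/os/x86_64/Packages/yum-plugin-fastestmirror-x.x.xx-xx.el6.noarch.rpm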

Initial Conversion

Clean up yum’s cache

# yum clean all

Import the GPG key for the appropriate version of CentOS

# rpm --import RPM-GPG-KEY-CentOS-6

Remove Core RHEL packages

# yum remove rhnlib abrt-plugin-bugzilla redhat-release-notes*
# rpm -e --nodeps redhat-release-server-6Server redhat-indexhtml

The ‘rpm -e’ command might fail saying one of the packages is not installed. If so, just remove that package from the command and run it again.
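If you want to check in advance which of the two are actually installed, a quick query will tell you:

# rpm -q redhat-release-server-6Server redhat-indexhtml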
 

Remove RHEL subscription-manager

# subscription-manager clean
# yum remove subscription-manager

This stops yum from throwing up an error message every time you run it, something like “This system is not registered to Red Hat Subscription Management.”
 

Force install the core CentOS RPMs you downloaded

# rpm -Uvh --force *.rpm

 

Clean up yum and then upgrade

# yum clean all
# yum upgrade

I rebooted the machine at this point, just to assure myself that it would come back up properly. At the same time, I noticed that the descriptions in /boot/grub/grub.conf still mentioned RedHat – a quick edit soon sorted that out, replacing all instances of RedHat with CentOS.
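If you would rather script that edit than do it by hand, a sed one-liner along these lines will do the job; take a backup first, and check the exact wording used in your own grub.conf, as the title strings vary between installs:

# cp /boot/grub/grub.conf /boot/grub/grub.conf.keep
# sed -i 's/Red Hat Enterprise Linux Server/CentOS/g' /boot/grub/grub.conf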
 

Purify Remaining Packages

Once the server is back up and running, a first pass at removing the remaining RHEL packages can be undertaken:

# yum clean all
# yum reinstall $(rpm -qa --qf "%{NAME} %{VENDOR} \n" | grep "Red Hat" | cut -d" " -f1)
# yum remove Red_Hat_Enterprise_Linux-Release_Notes-6-en-US

The reinstall command above will cleanly replace a large portion of the installed RedHat packages. It will, however, almost certainly leave some packages behind. My approach was to run that command multiple times until it was no longer able to replace any of the remaining packages due to various dependency problems. I then carried out another reboot to satisfy myself that nothing was broken.
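I simply re-ran the command by hand, but if you prefer, the repeated passes can be wrapped in a small shell loop that stops once the number of remaining Red Hat packages no longer falls (a rough sketch only; run it as root and watch the output):

while :; do
    before=$(rpm -qa --qf "%{NAME} %{VENDOR} \n" | grep -c "Red Hat")
    yum -y reinstall $(rpm -qa --qf "%{NAME} %{VENDOR} \n" | grep "Red Hat" | cut -d" " -f1)
    after=$(rpm -qa --qf "%{NAME} %{VENDOR} \n" | grep -c "Red Hat")
    [ "$after" -lt "$before" ] || break
done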
 
At this point you can get a full list of what’s left by running this command:

# rpm -qa --qf "%{NAME}-%{VERSION}-%{RELEASE} %{VENDOR} \n" | grep "Red Hat"

It’s pretty much up to you how much work you want to put into replacing any remaining RedHat packages. The notes below highlight a handful of extra items that I decided to tackle:
 

Replace ntp/ntpdate packages

# cp /etc/ntp/step-tickers /etc/ntp/step-tickers.keep
# cp /etc/ntp.conf /etc/ntp.conf.keep
# yum remove ntp ntpdate
# yum clean all
# yum install ntp ntpdate
# mv /etc/ntp/step-tickers.keep /etc/ntp/step-tickers
# mv /etc/ntp.conf.keep /etc/ntp.conf

Replace plymouth boot scripts

# rpm -e --nodeps plymouth plymouth-scripts plymouth-core-libs
# yum clean all
# yum install plymouth plymouth-scripts plymouth-core-libs

Replace dbus-glib package

# rpm -e --nodeps dbus-glib
# yum clean all
# yum install dbus-glib

Replace initscripts package

# rpm -e --nodeps initscripts
# yum clean all
# yum install initscripts

At this point in the process, you should see that just about the only RedHat packages left are for the kernel. The recommendation is to leave them alone, on the grounds that trying to safely remove or replace a running kernel is just too risky. The test server where this process was developed installed an updated CentOS kernel without incident the next time one was released. The default /etc/yum.conf setting is to retain a maximum of 5 installed kernels, so over time the RedHat kernels will gradually be aged out.
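For reference, the setting that controls how many kernels are kept is the installonly_limit line in /etc/yum.conf, which you can check (and raise or lower if you prefer):

# grep installonly_limit /etc/yum.conf
installonly_limit=5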

Rusty Engineering Skills

I’ve been involved with IT since leaving college slightly over 30 years ago. For a couple of years before that, when I got swept up in the illegal CB radio craze in my mid-to-late teens, I used to supplement my pocket money as one of the local ‘rig doctors’, mostly replacing output-stage transistors for people who did mad things like trying to use a metal coat hanger or their standard car radio aerial on 27MHz. Perhaps if the muse takes me one day, I’ll write an article about the little cottage industry that one of my CB buddies (who was a BT RF engineer at the time) and I had going with rig repairs and home-brewed accessories.

Computers have almost always (in my lifetime at least!) been so much easier to work on than transceivers, because the fix for most faults is a complete replacement of a self-contained component rather than a board-level repair. Ironically, though, the last 2 major faults that have befallen our main computer at home have required me to get my soldering iron out, which has had a certain nostalgia value about it. The second soldering iron episode also highlighted just how out of practice I’ve got with the basic fault-finding skills that I learned in my teens whilst repairing other people’s CB radios.

The first problem occurred during what passed for the summer in 2012. The 22-inch LCD monitor which I’d had for some time finally gave up the ghost, after a couple of weeks of needing to be switched off and back on again several times in succession in order to get a stable picture. I’d resigned myself to having to buy a replacement and was all set to take it to our local council waste site for proper disposal. Since I know very little about how TVs and computer screens actually work, I thought it might be educational to take it apart and see what was inside. After removing 12 screws to take the back off and disconnecting the ribbon cable to the LCD panel, I was slightly disappointed to find just 2 fairly bland circuit boards. The smaller of the two had just 3 or 4 VLSI (very large scale integration) chips on it and virtually no other components, so I assumed that it was the bit that did the job of decoding the digital video signal from the PC and putting it on the 1680×1050 screen. To my surprise, the larger board was quite clearly a switched-mode power supply. To my further surprise, the fault was immediately obvious purely from a visual inspection. There were 5 radial-lead electrolytic capacitors at one end of the board, all of the mega-cheap Chinese import variety. Each of these was bowed outwards quite significantly at the top, indicating that they had almost reached the point at which the magic smoke would leak out.

Swiftly bringing to bear my noble soldering iron of justice, I soon had the suspect components on the bench in front of me. They were all 100uF 16V; it was 4:20pm on a Sunday afternoon, and my local branch of Maplin was still open for another 40 minutes. Ignoring the cries of “What do you mean you’re going to try to fix it?” from my wife, I jumped in the car and headed to the aforementioned purveyors of electrical and electronic components.

Some time later and the princely sum of £2.75 spent, I had a bag of 5 replacement capacitors, a hot soldering iron and a somewhat incredulous wife and mother-in-law, both with slightly worried looks on their faces. Not to be deterred, I spent the next few minutes carefully desoldering the pads where the original components had been mounted, soldering the new capacitors in place and trimming back the leads to make a neat job. I managed to put the monitor back together without having any parts left over and was feeling quite pleased with myself and my out-of-practice engineering skills, announcing that I was ready to test my repair and see if it had ‘done the trick’.

At this point, my wife suggested that if I was planning to put myself at risk of electrocution, it would be best to do so in the back garden rather than clutter up the house with my scorched and blackened corpse. Ever the obedient husband, I took the computer, keyboard and mouse outside along with an extension lead, the newly repaired monitor and a VGA cable. This was all deposited on the patio table and connected together as appropriate. Upon gingerly applying mains power, I was delighted to see the monitor come up as normal on the first go with an absolutely rock-steady picture. After leaving everything running for 15 minutes, the range safety officer (a.k.a my wife) declared herself satisfied that the fire/electrocution risk seemed to be minimal and I was finally allowed to bring the repaired device back into the family home.

I really didn’t think I’d ever have cause to use my soldering iron to fix another computer problem. How wrong I was.

I decided at the end of February 2013 that it was about time to upgrade the home computer to Windows 7. Windows XP was fast approaching its ‘best before’ date and I’d always had a 32-bit version installed, despite having a computer that was 64-bit ready in almost all respects. After much wavering, I decided to take the plunge.

In preparation for the upgrade, I ordered an extra 8GB of memory, to bring the total installed up to 12GB. When this arrived, it also seemed that now was as good a time as any to strip the computer down and give it a good internal spring clean, removing all the dust, cleaning out fan filters, etc, etc.

So it was that I found myself once again out in the back garden, this time with the computer itself in bits around me. It took about 3 hours of careful disassembly, cleaning and dusting before I was happy with the result. During this process, I noticed that one of the front panel mounted USB connectors (that had been playing up for a while) had a loose connection to one of the pins on its associated header plug. Since I already had the machine in bits anyway, I stripped off the whole front panel switch and connector assembly and carefully teased the tiny little pin and attached wire back into place. I also took the opportunity to re-arrange some of the cable routing inside the case, both to make it look neater and to try and improve the airflow from the cooling fans.

I re-assembled everything, slotted in the extra 8GB of memory and spent a moment admiring my shiny, new-looking computer before taking it back indoors. I wired everything back up, including mains power, speakers, USB printer, USB wireless adapter, monitor cable and USB wireless mouse/keyboard doo-hickey. Switching the mains back on, I saw the power LED light up on the front panel, the main chassis and CPU fans spun up for about a second and then everything went dead. No reassuring ‘beep’ to indicate that the power-on self-test (POST) had completed successfully, nothing displayed on the freshly-repaired monitor, just complete silence and no sign of any activity at all. The machine hadn’t even had time to get to the point (usually after a couple of seconds) where it spins up the hard drive. The front panel power switch wasn’t causing anything to happen when I pressed it, but I found that if I pulled the kettle lead out of the power supply for 30 seconds then plugged it back in, I’d get the same thing as before: CPU and case fans spinning for about 1 second with the power LED lit, nothing else happening, and a total shutdown almost straight away.

Okay, not to worry, I’ve had the whole thing in bits, I’ve obviously connected something back up wrong or left something not quite seated correctly. The first suspect was the newly-fitted memory modules. I discovered by trial and error that the machine would only do anything different if I removed all of its 4 memory modules. It still wouldn’t respond to the front panel power switch, but would at least get as far as the POST checks, at which point I’d get 2 short beeps in a row, which is the signal with this BIOS that it has failed to find any useable memory. I began to suspect at this point that all 4 memory modules had failed for some reason, which was disappointing since I didn’t have any spare DDR3 sticks in my spares box that I could test with.

Without boring you with all the details, I spent another hour trying to diagnose the problem without success. When I got to the point where I’d disconnected and/or removed every component and external cable apart from the CPU and it still wouldn’t power up, I started to wonder if my spring clean had killed something vital on the motherboard with a dose of static electricity. Utterly despondent at this point, I put everything back together and had one more go at powering it up. Still no good. I sat and had a think about the problem over a cup of coffee, mentally reviewing what has to happen in order to get a modern PC to power up and load its operating system. It struck me that my last test had been to have the power LED header plug and the front panel switch header plug connected to the motherboard and nothing else, and yet it would still only respond if I removed the kettle lead from the power supply for 30 seconds and plugged it back in. I was effectively simulating a power cut, in which case the machine was doing exactly what it was set to do in the BIOS, namely to stay powered off when the electrical supply was restored. The fact that pressing the front panel power button had had no effect in any of the tests I had devised had somehow slipped past me unnoticed.

I took the side panel off the computer once more, pulled out the power button header from the motherboard and shorted out the pins with a flat bladed screwdriver. Lo and behold, the computer powered on as normal, only complaining that it couldn’t find an operating system on the hard drive, which was because I’d completely wiped the drive in preparation for the upgrade to Windows 7. The fault appeared to be with either the actual power button itself or the wiring between the button on the front panel and the motherboard.

I quickly removed the whole front panel assembly again and inspected the ribbon cable connecting the small PCB holding the power and reset buttons to the motherboard. Instead of shiny blobs of solder where the 4-strand ribbon cable met the board, the solder on 3 of the 4 pads was dull and grey: classic signs of a ‘dry joint’. I’d obviously disturbed the cable during the teardown and re-assembly process sufficiently for the poor quality soldering to make its effects known.

So the soldering iron came out again, for the second time in 6 months. I removed all 4 ribbon cable strands and cleaned out the pads on the PCB. I trimmed the 4-way ribbon cable back by about 1.5cm and stripped a shiny new end on each strand. After tinning the bare wires with the soldering iron, I re-attached the whole cable to the PCB, freshly soldered and secure. This time, the computer fired up on the first press of the power button and I was finally able to get Windows 7 installed and restore all the documents, pictures, videos and other data from a recent backup.

I’m hoping it will be a long time before I need to get that soldering iron out again, especially if it involves a broken computer or peripheral.