Thursday, July 20, 2017

Backing Up an Ancient Linux Virtual Machine

Many years ago, perhaps over a decade, I designed my network configuration around the "trusted" and "DMZ" separation-of-zones model. Operating within the DMZ (De-Militarized Zone) are services which are directly exposed to incoming connections from the Internet. These systems are "hardened" so they are more difficult for a malicious script kiddie to compromise. All of these systems run in Virtual Machines. By leveraging the VM "snapshot" feature, restoring a hacked, vandalized, or otherwise damaged DMZ system is simple: revert to the snapshot and the system is operational again. VMware Workstation also offers strong hardware isolation between multiple systems running on the same physical hardware. This approach also allows "portability" of DMZ systems to improved hardware: install the VM software, restore the virtual machine disks from backup, and the environment is up and running.
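
For the curious, VMware Workstation exposes snapshot operations on the command line through vmrun; a sketch of the take/revert cycle, where the .vmx path and snapshot name are placeholders of my own:

     # Take a known-good snapshot of a DMZ guest
     vmrun -T ws snapshot /vms/dmz-web/dmz-web.vmx clean-baseline
     # After a compromise, roll the guest back to that snapshot
     vmrun -T ws revertToSnapshot /vms/dmz-web/dmz-web.vmx clean-baseline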

Fast forward to 2017. Over time, I've changed which physical hardware hosts the DMZ environment. Originally, everything ran on a 2-processor Pentium Pro W6LI motherboard with 512MB of memory. I was able to squeeze 5 VMware Workstation VMs onto this system, barely. Over the years, as the need for faster compute, more disk, and more memory grew, the DMZ environment migrated to a Dell PowerEdge R610. I've also virtualized other important non-DMZ development computers. The operating systems are all Linux, and all very old; RedHat 9 is the "newest" of the lot running in this environment.

Backing up these VMs is obviously important, as these are core compute and infrastructure systems. However, no backup is worth the disk space it consumes if you cannot restore the backed-up computer to a running system. Demonstrating that these systems could be restored turned out to be a much larger task than I expected.

So far, only one important RedHat 9 VM is backed up, to a 500GB IDE hard disk. This disk was "scavenged" from a failed Buffalo 2TB RAID device, which contained four 500GB IDE disks. The disks are fine; the RAID controller failed. This IDE disk has been brought back to life using a PATA/SATA-to-USB disk enclosure.

With this RedHat 9 system now "backed up" at the file system level, my fear of data loss is much reduced. However, I still have to prove I can "recover" this system from backup. This is easier said than done.
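
As an aside, a file-level backup of this sort can be as simple as an rsync to the USB-attached disk. A minimal sketch, assuming the scavenged disk shows up as /dev/sda and using illustrative paths:

     # Mount the scavenged IDE disk (attached via the USB enclosure)
     mount /dev/sda1 /mnt/backup
     # Copy the RH9 root file system, preserving permissions, hard links,
     # and numeric UIDs/GIDs, without crossing file system boundaries
     rsync -aHx --numeric-ids / /mnt/backup/rh9/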

Plan 1: Create a new virtual disk and partition it like the RH9 system. Boot a Live CD, mount this disk, and copy the backup data to it.
Problem 1: Making this disk bootable. Fortunately, this RH9 system boots with GRUB rather than LILO. Unfortunately, it is a GRUB 1 (GRUB Legacy) installation, which appears to be incompatible with the GRUB 2 found on modern distributions like Mint 18 or Ubuntu 16.04. No amount of fiddling with GRUB and other Linux rescue tools was able to make this "recovered" virtual disk bootable. My last-ditch attempt was to copy the actual 512-byte MBR from the original RH9 system to the restored disk. That "almost booted".
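
For reference, the MBR copy is a dd one-liner; the device names below are illustrative and depend on how the disks are attached. Note that the full 512 bytes include the partition table (bytes 446-509), so if the restored disk's layout differs, copying only the first 446 bytes (the boot code) is safer:

     # On the original RH9 system: save the first 512 bytes (boot code + partition table)
     dd if=/dev/hda of=/tmp/rh9-mbr.bin bs=512 count=1
     # From the Live CD, with the restored virtual disk attached: write it back
     dd if=/tmp/rh9-mbr.bin of=/dev/sda bs=512 count=1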

Plan 2: Install a minimal RH9 system from the original distribution ISO images. That will definitely make a bootable system.
Problem 2: Finding virtual machine hardware compatible with the Linux kernel/disk drivers present on the RH9 installation media.

Choosing "ESXi 5.0 and later (VM version 8)" virtual hardware, and "Debian GNU/Linux 4 (64-bit)" OS type worked. I had the proper disk driver which would read the file system present on the RH9 ISO disks. Fantastic!

I now have a bare-bones RH9 system installed. Copying files with ssh from my backup NAS server is the next hurdle. Modern sshd servers use more secure cipher and key exchange (KEX) algorithms than are found in the RH9 ssh client. My NAS server is running sshd 7.2p2 with OpenSSL 1.0.2g. The restored RH9 system is running ssh 3.5p1 with OpenSSL 0x0090701f (which decodes to 0.9.7a). Attempting to log in to the NAS server from the RH9 system failed with "no matching cipher found".
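
A quick way to see the mismatch from both sides (user and hostname are placeholders):

     # On the NAS (OpenSSH 7.2p2): list the ciphers and KEX algorithms this build supports
     ssh -Q cipher
     ssh -Q kex
     # On the RH9 client: verbose output shows the algorithms each side proposes
     ssh -v user@nas.example.com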

I've encountered this problem in the past. My solution then was to build OpenSSL/OpenSSH from source. That is not a good option for this configuration; installing a full build environment on this ancient RH9 system is too much effort. Time to finally solve this issue through configuration. I've gone down this path before, with some success. The first configuration change on the NAS sshd server, added to /etc/ssh/sshd_config:

     Ciphers 3des-cbc,blowfish-cbc,cast128-cbc,arcfour,arcfour128,arcfour256,aes128-cbc,aes192-cbc,aes256-cbc,rijndael-cbc@lysator.liu.se,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com

This gets past the "no matching cipher found" error. Now the error is "no kex alg". That is where I gave up previously. Not an option today. I found the missing configuration on another blog post:

    KexAlgorithms diffie-hellman-group1-sha1,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1

With these two additions to the target server's /etc/ssh/sshd_config file, a kill -HUP of the sshd process to reload the configuration, and success!
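
Concretely, the reload step looks like this; /var/run/sshd.pid is OpenSSH's default PidFile location and may differ on your distribution (on systemd-based systems, "systemctl reload ssh" or "systemctl reload sshd" does the same):

     # Signal the sshd parent process to re-read sshd_config
     kill -HUP $(cat /var/run/sshd.pid)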

Security concerns: All systems I am working on run on my trusted network, so I'm not worried about anyone intercepting the traffic. However, I will comment out these "weak ciphers/kex alg" settings once the restored RH9 system is proven working.

Ref: http://sysadm.mielnet.pl/no-kex-alg/
     http://steronius.blogspot.com/2014/10/ssh-no-matching-cipher-found.html
