Automating the installation of Red Hat Virtualization hosts

Red Hat Virtualization Host (RHVH) can be installed manually, but this method may not scale to a large enterprise environment with many hosts. Since the installer for RHVH is a version of Anaconda from Red Hat Enterprise Linux, you can automate the RHVH installation process by using Kickstart.

You can use Kickstart, in conjunction with PXE and TFTP, to start the installation by booting from the network, allowing quick and fully unattended automatic installation of new RHVH hosts.

Booting from the network using PXE

Preboot eXecution Environment (PXE) is a mechanism to bootstrap computers by using a network server. The client’s network interface must have support for PXE, and the system firmware must have PXE support enabled.

The network boot infrastructure must provide the following services:

  • A DHCP server to handle the initial communication with the client, to provide network configuration, and to provide the location of the TFTP server and the boot image to use.
  • A TFTP server to provide boot images and command-line options for the boot images to start the installer.
  • An HTTP, FTP, or NFS server to provide the RHVH installation media and the Kickstart file used for the installation.

At boot, the client’s network interface card broadcasts a DHCPDISCOVER packet extended with PXE-specific options. A DHCP server on the network replies with a DHCPOFFER, giving the client information about the PXE server and offering it an IP address. Once the client responds with a DHCPREQUEST, the server sends back a DHCPACK containing the location of a file on a Trivial File Transfer Protocol (TFTP) server that can boot the client.

The client connects to the TFTP server (frequently the same machine as the DHCP server), downloads the specified file to RAM, and verifies the file using a checksum. For RHVH, this is normally a network bootloader called pxelinux.0. That bootloader has a configuration file on the TFTP server that tells it how to download and start the RHVH installer, and the location of the Kickstart file for automated installation on some HTTP, FTP, or NFS server. Once the files are verified, they are used to boot the client.
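
If you need to troubleshoot this exchange, a packet capture on the boot network shows each stage. Here is a minimal sketch, assuming the server's network interface is eth0 (substitute your own interface name):

[root@host ~]# tcpdump -n -i eth0 port 67 or port 68 or port 69

Ports 67 and 68 carry the DHCP conversation, and port 69 carries the initial TFTP download of pxelinux.0.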

Normally, to install a new operating system on your server, you need installation media from which you can begin an installation. When using the network installation method, no physical boot media is required because the network boot server provides all required files.

A PXE boot environment is useful for system deployment. Ideally, machines are configured in the firmware to boot from a local hard drive first, and if that fails, to boot from the network. The network boot is set up to trigger an automatic Kickstart. As long as the machine has a valid boot loader on the hard drive, the installation is left alone. If the hard drive has no boot loader, it is a new machine and it gets kickstarted. With this type of configuration, an automatic reinstallation can be started by destroying the hard drive boot loader and rebooting.
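
For example, a common way to trigger such a reinstallation is to overwrite the boot loader code in the master boot record and reboot. The following is a sketch only, assuming a BIOS-based system whose boot drive is /dev/sda; verify the device name first, because this command is destructive:

[root@host ~]# dd if=/dev/zero of=/dev/sda bs=446 count=1
[root@host ~]# reboot

Only the first 446 bytes of the MBR are zeroed, which destroys the boot loader code while leaving the partition table intact.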

Configuring PXE Boot Service

To summarize, in order to configure automatic network installation of RHVH, you need to do the following:

  • Configure a DHCP server to use PXE, pointing your RHVH clients to the TFTP server and its pxelinux.0 file.
  • Configure a TFTP server to provide pxelinux.0 and its configuration file, which points to the RHVH installer’s kernel, software, and the location of the Kickstart file.
  • Export the RHVH installation media and the Kickstart file using a supported network service such as NFS, HTTPS, HTTP, or FTP.
  • Boot the system you want to install and start the installation.

The following procedure provides an overview of how to configure the DHCP and TFTP parts of the network boot system for RHVH using Red Hat Enterprise Linux. More information on how to configure these services for Red Hat Enterprise Linux installation, which is very similar, can be found in the chapter “Preparing for a Network Installation” in the Installation Guide for Red Hat Enterprise Linux 7 at https://access.redhat.com/documentation/.

WARNING: Your organization may already have an operating DHCP server or PXE environment configured on the network that your RHVH hosts use. You should work with that system’s administrator to integrate the configuration changes needed on the DHCP and TFTP servers for your automated RHVH installation system. If you configure a second DHCP server on a network already operating one, the servers will interfere with each other’s operation and with the configuration of network settings for hosts on that network. This can cause major service disruptions for that network.

Configuring DHCP and TFTP Servers

Here is an example of how you might configure the environment when using a Red Hat Enterprise Linux server to boot BIOS-based AMD64 and Intel 64 systems. Assume for this example that the DHCP and TFTP servers are on the same system, which has the IPv4 address 172.25.250.8.

IMPORTANT: This procedure assumes that you are not booting hosts that use a UEFI-based boot process. A UEFI-based system requires some files from the shim and grub2-efi packages, and a different configuration file.

1. Install a Red Hat Enterprise Linux server with the syslinux, tftp-server, and dhcp packages.
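
For example, on Red Hat Enterprise Linux 7 you can install all three packages with a single command:

[root@host ~]# yum install -y syslinux tftp-server dhcp
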
2. Within the /var/lib/tftpboot directory, create a pxelinux directory and copy the file /usr/share/syslinux/pxelinux.0 into it, along with /usr/share/syslinux/vesamenu.c32, the menu module used by the sample configuration below:

[root@host ~]# mkdir /var/lib/tftpboot/pxelinux
[root@host ~]# cp /usr/share/syslinux/pxelinux.0 /var/lib/tftpboot/pxelinux
[root@host ~]# cp /usr/share/syslinux/vesamenu.c32 /var/lib/tftpboot/pxelinux

3. In the /var/lib/tftpboot/pxelinux directory, create a pxelinux.cfg directory:

[root@host ~]# mkdir /var/lib/tftpboot/pxelinux/pxelinux.cfg

4. Create a configuration file named default in the /var/lib/tftpboot/pxelinux/pxelinux.cfg directory. This file is used for any system PXE-booting from this service.

This is a sample configuration file:

default vesamenu.c32
prompt 1
timeout 60

display boot.msg

label rhvh-host
 menu label ^Install RHVH host
 menu default
 kernel vmlinuz
 append initrd=initrd.img ip=dhcp inst.stage2=http://install-server/RHVH-installation-media-directory inst.ks=http://install-server/kickstart-file-directory/kickstart-file.cfg

The important parts of this configuration file are:

  • label rhvh-host is the bootloader configuration for RHVH installation, which appears in the menu as Install RHVH host.
  • The vmlinuz and initrd.img files need to be provided by the TFTP server from /var/lib/tftpboot/pxelinux. The next step of this procedure puts them in place.
  • The inst.stage2 directive in this example points to a URL for an HTTP server (install-server) that has installation media available, and the inst.ks directive in this example points to a URL for an HTTP server that has a Kickstart file available. The next part of this section demonstrates how to set up both.

5. Copy the boot images from the RHVH ISO file to the /var/lib/tftpboot/pxelinux directory. Assuming that the RHVH ISO file has been downloaded to /tmp/RHVH-4.1-dvd1.iso:

[root@host ~]# mount -o loop /tmp/RHVH-4.1-dvd1.iso /mnt
[root@host ~]# cp /mnt/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux
[root@host ~]# umount /mnt

6. Set up the DHCP server’s configuration file, /etc/dhcp/dhcpd.conf.

The following example provides basic network information for the 172.25.250.0/24 subnet to clients, and points clients trying to PXE-boot to the TFTP server on 172.25.250.8 (the machine’s address in this example). The clients download and boot with pxelinux/pxelinux.0 from that server.

option space pxelinux;
option pxelinux.magic code 208 = string;
option pxelinux.configfile code 209 = text;
option pxelinux.pathprefix code 210 = text;
option pxelinux.reboottime code 211 = unsigned integer 32;
option architecture-type code 93 = unsigned integer 16;

subnet 172.25.250.0 netmask 255.255.255.0 {
  option routers 172.25.250.254;
  option subnet-mask 255.255.255.0;
  option domain-search "lab.example.com";
  option domain-name-servers 172.25.250.254;

  range 172.25.250.21 172.25.250.30;

  class "pxeclients" {
    match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";
    next-server 172.25.250.8;

    if option architecture-type = 00:07 {
      filename "uefi/shim.efi";
    } else {
      filename "pxelinux/pxelinux.0";
    }
  }
}

7. Allow the services to communicate through your firewall. If you are using firewalld, these include its predefined tftp and dhcp services.
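
For example, with firewalld the following commands open both services permanently and apply the change (assuming the default zone faces your PXE clients):

[root@host ~]# firewall-cmd --permanent --add-service=dhcp --add-service=tftp
[root@host ~]# firewall-cmd --reload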

8. Start and enable the dhcpd and tftp services.
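
For example, assuming the dhcpd.service and tftp.socket units shipped with the dhcp and tftp-server packages:

[root@host ~]# systemctl enable dhcpd tftp.socket
[root@host ~]# systemctl start dhcpd tftp.socket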

When finished, the PXE boot server is ready. You can now start the system you want to install and select the PXE boot method to begin the network installation.

Preparing the installation media and Kickstart server

To run the RHVH installation, the PXE server points the client to two things: a live image file containing the RHVH operating system and a Kickstart file.

These can be provided to clients in a number of different ways, including HTTP and NFS. This example assumes that you have a web server that serves the files to the client over HTTP.
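
If you do not already have a web server, a basic Apache httpd instance on Red Hat Enterprise Linux 7 is enough; this is a minimal sketch using the standard httpd package:

[root@host ~]# yum install -y httpd
[root@host ~]# systemctl enable httpd
[root@host ~]# systemctl start httpd
[root@host ~]# firewall-cmd --permanent --add-service=http
[root@host ~]# firewall-cmd --reload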

Providing the RHVH Disk Image

The following procedure shows how to prepare content on your existing web server for the unattended network installation of new RHVH hosts:

1. Extract the live image of the RHVH operating system from the RHVH installation ISO. This example assumes you have downloaded that ISO to /tmp/RHVH-4.1-dvd1.iso.

[root@host ~]# mount -o loop /tmp/RHVH-4.1-dvd1.iso /mnt
[root@host ~]# cp /mnt/Packages/redhat-virtualization-host-image-* /tmp
[root@host ~]# cd /tmp
[root@host tmp]# rpm2cpio redhat-virtualization-host-image-* | cpio -idmv
[root@host tmp]# umount /mnt

2. Copy the live image extracted from the redhat-virtualization-host-image-update package to a directory served by the web server. You might choose to rename the file to make it easier to reference:

[root@host ~]# cp /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host-image-4.1-*.squashfs.img /var/www/html/RHVH-installation-media-directory/squashfs.img

The squashfs.img file is needed for the Kickstart installation to work. It contains the RHVH operating system with all required packages, and the installer deploys it to the new RHVH host.
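
Before you attempt an installation, you can confirm that the image is reachable by requesting its headers from the web server (a quick check, assuming curl is installed and the example paths above):

[root@host ~]# curl -I http://install-server/RHVH-installation-media-directory/squashfs.img

An HTTP 200 response with a Content-Length matching the file size indicates that the installer will be able to fetch it.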

Red Hat Virtualization Host Kickstart File

The Kickstart file for an unattended network installation of RHVH hosts is simple because you do not have to specify packages to install when using squashfs.img. Everything required is embedded into that file. Even the partition layout is automatically created using the LVM Thin Provisioning mechanism.

NOTE: There are times you might not want to configure the PXE system to automatically run Kickstart. For example, you may want a PXE boot to trigger a local boot by default so that an inadvertent network boot doesn’t reinstall the RHVH system. Alternatively, you may want to manually specify the location of an alternative Kickstart file when you boot a new system.

You can create the Kickstart file using your favorite text editor. It is a good practice to give it a name that helps you later identify the purpose of that Kickstart file (for example, rhvh_primarydc.cfg). Here is an example of how to create a Kickstart file for automatic installation of RHVH hosts:

1. Specify the URL of the RHVH live image (squashfs.img) on your network:

liveimg --url=http://install-server/RHVH-installation-media-directory/squashfs.img

2. Define the partition layout for the new host. This example removes any existing partitions, creates a new LVM-based layout, and clears the MBR:

clearpart --all
autopart --type=thinp
zerombr

3. Set the root user password. While this example uses a plain-text password, the rootpw directive can also take a pre-hashed password with its --iscrypted option.

rootpw --plaintext root_password_in_clear_text

4. Define the timezone for the host to use (this example uses UTC):

timezone Etc/UTC --isUtc

5. Set the UI mode for the installer. In this example, it is text mode:

text

6. Reboot the host after the installation completes:

reboot

7. In the %post section, start the configuration process of the newly installed RHVH host:

%post --erroronfail
nodectl init
%end

The final Kickstart file from this example:

liveimg --url=http://install-server/RHVH-installation-media-directory/squashfs.img
clearpart --all
autopart --type=thinp
zerombr
rootpw --plaintext root_password_in_clear_text
timezone Etc/UTC --isUtc
text

reboot

%post --erroronfail
nodectl init
%end

This example can be extended with additional commands and options as required, but it is enough to automatically install a new RHVH host.
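
For instance, you might add a network directive so that every new host comes up with a predictable network configuration; the host name below is purely illustrative:

network --bootproto=dhcp --hostname=rhvh1.lab.example.com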

To use the new Kickstart file, you must share it over the network using HTTP, NFS, or FTP. The example pxelinux configuration in the preceding procedure assumed that you put it in a directory on a web server (/var/www/html/kickstart-file-directory/kickstart-file.cfg).
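
For example, to publish the file on the web server used throughout this section, and optionally check its syntax with the ksvalidator tool from the pykickstart package (if that package is installed):

[root@host ~]# cp rhvh_primarydc.cfg /var/www/html/kickstart-file-directory/kickstart-file.cfg
[root@host ~]# ksvalidator /var/www/html/kickstart-file-directory/kickstart-file.cfg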

Starting the automated installation

The preceding examples assumed that you want to start a RHVH Kickstart automatically whenever your nodes use PXE to boot. This is controlled by the /var/lib/tftpboot/pxelinux/pxelinux.cfg/default file on the example server. If you’re using a UEFI-based system, the configuration differs slightly; a sample UEFI menu entry appears at the end of this section.

No matter which boot loader you use, it needs to download and start the vmlinuz kernel from the RHVH installation ISO file. That kernel needs to be started with four command-line arguments:

  • initrd=initrd.img to download the initial RAM disk file (initrd.img) that came with the vmlinuz file on the RHVH installation ISO
  • ip=dhcp to get an IP address using DHCP
  • An inst.stage2 directive pointing to an HTTP, NFS, or FTP URL containing the squashfs.img file from the redhat-virtualization-host-image-update package
  • An inst.ks directive pointing to an HTTP, NFS, or FTP URL for the Kickstart file
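
For UEFI clients, the equivalent configuration lives in a grub.cfg file served over TFTP instead of pxelinux.cfg/default. The following is a minimal, untested sketch assuming the same server names and paths as the BIOS example, with vmlinuz and initrd.img placed in the TFTP directory that the UEFI boot loader uses:

menuentry 'Install RHVH host' {
  linuxefi vmlinuz ip=dhcp inst.stage2=http://install-server/RHVH-installation-media-directory inst.ks=http://install-server/kickstart-file-directory/kickstart-file.cfg
  initrdefi initrd.img
}

With GRUB 2, the initrd= argument is not needed because initrdefi loads the initial RAM disk directly.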

Removing a Host from a Data Center

Now let's see how to remove a host from an existing data center and assign it to a different data center.

Changing the infrastructure

Depending on how your Red Hat Virtualization environment is used, the number of virtual machines it runs, and the structure of your physical and network infrastructure, you may eventually need to change its configuration. The Red Hat Virtualization infrastructure is flexible: it allows you to change its structure and design, and even a very complicated architecture can be modified. There are, however, some basic rules you have to follow.

This section of the post describes the rules and procedures for moving physical RHVH hosts between existing data centers and clusters. There are many reasons you might change your RHV configuration. Here are some possible reasons for moving or permanently removing an RHVH host from your data centers:

  • Extending a data center's capacity to run more virtual machines.
  • Decommissioning a data center or a cluster.
  • Changing the underlying storage infrastructure.

Using Maintenance Mode

Every major change you make to the RHV infrastructure requires you to put the resource you want to modify into a special state called Maintenance mode. This mode allows you to make permanent changes to the resource.

Depending on the defined policy, switching one of your RHVH hosts into Maintenance mode will migrate or shut down all virtual machines running on that host. When that operation finishes, RHVM changes the status of the RHVH host from Active to Maintenance mode.

When a RHVH host is in Maintenance mode, additional reconfiguration possibilities are present in the Administration Portal for that host. For example, the Edit Host dialog window unblocks editing features and gives you the ability to switch the RHVH host from one cluster to a different one.

On rare occasions, you might face the situation where the RHVH host is the last host left in a data center. Switching the last active host into Maintenance mode makes that data center, along with its storage domains, unusable.

Before you can move the last host in a data center into Maintenance mode, you must switch all existing storage domains in that data center into Maintenance mode. There must be an active host in the data center to act as the Storage Pool Manager (SPM) to make changes to the data center’s storage configuration.

Only when all storage domains in a data center are in Maintenance mode are you able to put the last active host into Maintenance mode. When the last storage domain in a data center is switched into Maintenance mode, the whole data center switches automatically into Maintenance mode. In this mode, the data center does not produce any log outputs. It stays that way even if the whole infrastructure restarts, until the master storage domain enters the Active state again.

Removing a host from Red Hat Virtualization

This is the process for completely removing a host from the RHV infrastructure:

  1. Log in to the Administration Portal as an administrative user.
  2. Go to the Hosts tab.
  3. Select the host you want to remove from your Red Hat Virtualization infrastructure and click the Management button.
  4. In the Management drop-down list, click Maintenance. When the Maintenance Host(s) dialog window appears, click OK to confirm placing the host into Maintenance mode. Wait for the process to finish switching the host to the Maintenance status.
  5. When the host is switched into Maintenance mode, the Remove button becomes active. With the host selected, click the Remove button to completely remove the host from your RHV infrastructure. Click the OK button to confirm the removal.

Moving a host between data centers

Here is the process to move a host from one data center to another:

  1. Log in to the Administration Portal as an administrative user.
  2. Go to the Hosts tab.
  3. From the list of available hosts, select the host you want to move to another data center, and click the Management button.
  4. In the Management drop-down list, click Maintenance. When the Maintenance Host(s) dialog window appears, click OK to confirm placing the host into Maintenance mode. Wait for the process to finish switching the host to the Maintenance status.
  5. With the host selected, click the Edit button.
  6. When the Edit Host dialog window appears, click the Host Cluster drop-down list and choose the new cluster in the new data center that you want that host switched to.
  7. Confirm by clicking the OK button twice.
  8. With the host selected, click the Management drop-down list, then click Activate.
  9. When the host successfully activates, its status icon changes from red to green. The host becomes active, using the new data center and cluster. This confirms that it has successfully attached to the new data center.