Objective 3: Change Runlevels and Shut Down or Reboot System

Linux has the same concept of runlevels that most Unix systems offer. This concept specifies different ways to use a system by controlling which services are running. For example, a system that operates as a web server is configured to boot and initiate processing in a runlevel designated for sharing data, at which point the web server is started. However, the same system could be booted into another runlevel used for emergency administration, when all but the most basic services are shut down, so the web server would not run.

One common use of runlevels is to distinguish between a system that offers only a text console and a system that offers a graphical user interface through the X Window System. Most end-user systems run the graphical user interface, but a server (such as the web server just discussed) is more secure and performs better without it.

Runlevels are specified by the integers 0 through 6. Runlevels 0 and 6 are unusual in that they specify the transitional states of shutdown and reboot, respectively. When an administrator tells Linux to enter runlevel 0, the operating system begins a clean shutdown procedure. Similarly, the use of runlevel 6 begins a reboot. The remaining runlevels differ in meaning slightly among Linux distributions and other Unix systems.

When a Linux system boots, the first process it begins is the init process, which starts all other processes. The init process is responsible for placing the system in the default runlevel, which is usually 2, 3, or 5, depending on the distribution and the use for the machine. Typical runlevel meanings are listed in Table 4-1.

Table 4-1. Typical runlevels



Runlevel Description

0 Halt the system. Runlevel 0 is a special transitional state used by administrators to shut down the system quickly. This, of course, shouldn't be a default runlevel, because the system would never come up; it would shut down immediately when the kernel launches the init process. See also runlevel 6.

1, s, S Single-user mode, sometimes called maintenance mode. In this mode, system services such as network interfaces, web servers, and file sharing are not started. This mode is usually used for interactive filesystem maintenance. The three choices 1, s, and S all mean the same thing.

2 Multiuser. On Debian-based systems, this is the default runlevel. On Red Hat-based systems, this is multiuser mode without NFS file sharing or the X Window System (the graphical user interface).

3 On Red Hat-based systems, this is the default multiuser mode, which runs everything except the X Window System. This and levels 4 and 5 usually are not used on Debian-based systems.

4 Typically unused.

5 On Red Hat-based systems, full multiuser mode with GUI login. Runlevel 5 is like runlevel 3, but X11 is started and a GUI login is available. If your X11 cannot start for some reason, you should avoid this runlevel.

6 Reboot the system. Just like runlevel 0, this is a transitional device for administrators. It should not be the default runlevel, because the system would eternally reboot.

It is important to note that runlevels, like most things in Linux, are completely configurable by the end user. For the purposes of the LPIC test, it's important to know the standard meanings of each runlevel on Red Hat-based and Debian-based systems and how the runlevels work. However, in a production environment, runlevels can be modified to do whatever the system administrator desires.

Single-User Mode

Runlevel 1, the single-user runlevel, is a bare-bones operating environment intended for system maintenance. In single-user mode, remote logins are disabled, networking is disabled, and most daemons are not started. Single-user mode is used for system configuration tasks that must be performed with no user activity. One common reason you might be forced to use single-user mode is to correct problems with a corrupt filesystem that the system cannot handle automatically.

If you wish to boot directly into single-user mode, you may specify it at boot time on the kernel's command line through your boot loader. For instance, the GRUB boot loader allows you to pass arbitrary parameters to a kernel at boot time. To change the runlevel, edit the line that boots your kernel in the GRUB interactive menu, adding a 1 or the word single to the end of the line to indicate single-user mode. These arguments are not interpreted as kernel arguments but are instead passed along to the init process. For example, if your default GRUB kernel boot line looks like this:

kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/hda1 rhgb quiet

you can force the system to boot to runlevel 1 by changing this to:

kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/hda1 rhgb quiet 1

or:

kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/hda1 rhgb quiet single

To switch into single-user mode from another runlevel, you can simply issue a runlevel change command with init:

# init 1

This is not the preferred way of taking a currently running system to runlevel 1, mostly because it gives no warning to the existing logged-in users. See the explanation of the shutdown command later in this chapter to learn the preferred way of handling system shutdown.

Overview of the /etc Directory Tree and the init Process

By themselves, the runlevels listed in Table 4-1 don't mean much. It's what the init process does as a result of a runlevel specification or change that affects the system. The actions of init for each runlevel are derived from the style of initialization in Unix System V and are specified in a series of directories and script files under the /etc directory.

When a Linux system starts, it runs a number of scripts in /etc to initially configure the system and switch among runlevels. System initialization techniques differ among Linux distributions. The examples in this section are typical of a Red Hat-based system. Any distribution compliant with the Linux Standard Base (LSB) should look similar. The following describe these files:

/etc/rc.sysinit or /etc/init.d/rcS

On Red Hat-based systems, rc.sysinit is a monolithic system initialization script. The Debian rcS script does the same job by running several small scripts placed in two different directories. In each case, the script is launched by init at boot time. It handles some essential chores to prepare the system for use, such as mounting filesystems. This script is designed to run before any system daemons are started.

/etc/rc.local

Not used on Debian-based systems. On Red Hat-based systems, this file is a script that is called after all other init scripts (after all system daemons are started). It contains local customizations affecting system startup and provides an alternative to modifying the other init scripts. Many administrators prefer to avoid changing rc.sysinit because those changes will be lost during a system upgrade. The contents of rc.local are not lost in an upgrade.

/etc/rc

This file is a script that is used to change between runlevels. It is not provided on Debian.

The job of starting and stopping system services (also known as daemons, which are intended to always run in the background, such as web servers) is handled by the files and symbolic links in /etc/init.d and by a series of runlevel-specific directories named /etc/rc0.d through /etc/rc6.d. These are used as follows:

/etc/init.d

This directory contains individual startup/shutdown scripts for each service on the system. For example, the script /etc/init.d/httpd is a Bourne shell script that performs some sanity checks before starting or stopping the Apache web server.

These scripts have a standard basic form and take a single argument. Valid arguments include at least the words start and stop. Additional arguments are sometimes accepted by the script; examples are restart, status, and sometimes reload (to ask the service to reread its configuration file without exiting).

Administrators can use these scripts directly to start and stop services. For example, to restart Apache, an administrator could issue commands like these:

# /etc/init.d/httpd stop
# /etc/init.d/httpd start

or simply:

# /etc/init.d/httpd restart

Either form would completely shut down and start up the web server. To ask Apache to remain running but reread its configuration file, you might enter:

# /etc/init.d/httpd reload

This has the effect of sending the SIGHUP signal to the running httpd process, instructing it to reinitialize. Signals such as SIGHUP are covered in Chapter 6.

If you add a new service through a package management tool such as rpm or dpkg, one of these initialization files may be installed automatically for you. In other cases, you may need to create one yourself or, as a last resort, place startup commands in the rc.local file.

It's important to remember that these files are simply shell scripts that wrap the various options accepted by the different daemons on Linux. Not all Linux daemons recognize the command-line arguments stop, start, etc., but the scripts in /etc/init.d make it easy to manage your running daemons by standardizing the commands that you use to control them.
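The standard argument-dispatch structure of these scripts is easy to see in a stripped-down sketch. The service name mydaemon below is hypothetical, and real scripts additionally manage PID files, check for an already-running process, and source distribution-specific helper functions:

```shell
#!/bin/sh
# Minimal sketch of an /etc/init.d-style script for a hypothetical
# service called "mydaemon".

start() {
    echo "Starting mydaemon"
    # The daemon itself would be launched here, e.g.: /usr/sbin/mydaemon &
}

stop() {
    echo "Stopping mydaemon"
    # The daemon would be signaled here, e.g.: kill $(cat /var/run/mydaemon.pid)
}

# Dispatch on the single argument; default to "start" so the
# sketch also runs standalone for demonstration purposes.
case "${1:-start}" in
    start)   start ;;
    stop)    stop ;;
    restart) stop
             start ;;
    *)       echo "Usage: $0 {start|stop|restart}" >&2
             exit 1 ;;
esac
```

Because every service script accepts the same verbs, an administrator can manage any daemon the same way without learning its individual flags.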

The directories /etc/rc0.d through /etc/rc6.d

The initialization scripts in /etc/init.d are not directly executed by the init process. Instead, each of the directories /etc/rc0.d through /etc/rc6.d contains symbolic (soft) links to the scripts in the /etc/init.d directory. (These symbolic links could also be files, but using script files in each of the directories would be an administrative headache, because changes to any of the startup scripts would mean identical edits to multiple files.) When the init process enters runlevel N, it examines all of the links in the associated rcN.d directory. These links are given special names in the forms of KNNname and SNNname, described as follows:

K and S prefixes

These letters stand for kill and start, respectively. Each runlevel defines a state in which certain services are running and all others are not. The S prefix is used to mark files for all services that are to be running (started) for the runlevel. The K prefix is used for all other services, which should not be running.

NN

Sequence number. This part of the link name is a two-digit integer (with a leading zero, if necessary). It specifies the relative order for services to be started or stopped. The lowest number represents the first script executed by init, and the largest number is the last. There are no hard-and-fast rules for choosing these numbers, but it is important when adding a new service to be sure that it starts after any other required services are already running. If two services have an identical start order number, the order is indeterminate but probably alphabetical.

name

By convention, the name of the script being linked to. init does not use this name, but including it makes maintenance easier for human readers.

As an example, when init enters the default runlevel (3 for the sake of this example) at boot time, all of the links with the K and S prefixes in /etc/rc3.d will be executed in the order given by their sequence number (S10network, S12syslog, and so on). Links that start with S will be run with the single argument start to launch their respective services, and links that start with K will be run with the single argument stop to stop the respective service. Since K comes before S alphabetically, the K services are stopped before the S services are started. After the last of the scripts is executed, the requirements for runlevel 3 are satisfied.
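The lexical ordering that init relies on can be demonstrated in a scratch directory, without touching a real /etc/rc3.d (the link names below are illustrative):

```shell
# Simulate an rc3.d directory; the link targets don't matter for ordering.
tmp=$(mktemp -d)
ln -s /bin/true "$tmp/K15httpd"     # httpd is to be stopped in this runlevel
ln -s /bin/true "$tmp/S10network"   # network starts first...
ln -s /bin/true "$tmp/S12syslog"    # ...then syslog

# ls sorts lexically, which is the same order init processes the links:
# K entries come before S entries, and lower sequence numbers come first.
order=$(echo $(ls "$tmp"))
echo "$order"    # K15httpd S10network S12syslog
rm -rf "$tmp"
```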

Setting the Default Runlevel

To determine the default runlevel at boot time, init reads the configuration file /etc/inittab, looking for a line containing the word initdefault, which will look like this:

id:N:initdefault:

In the preceding, N is a valid runlevel number, such as 3. This number is used as the default runlevel by init. The S scripts in the corresponding /etc/rcN.d directory are executed to start their respective services. If you change the default runlevel for your system, it will most likely be in order to switch between the standard text login runlevel and the GUI login runlevel. In any case, never change the default runlevel to 0 or 6, or your system will not boot to a usable state.
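Because the initdefault line has a fixed colon-delimited format, the default runlevel is easy to extract with standard tools. The two-line sample file below is a stand-in for a real /etc/inittab:

```shell
# Build a small sample inittab in /tmp so no system file is touched.
cat > /tmp/inittab.sample <<'EOF'
id:3:initdefault:
si::sysinit:/etc/rc.d/rc.sysinit
EOF

# Field 2 of the colon-delimited initdefault line is the runlevel.
level=$(awk -F: '/initdefault/ {print $2}' /tmp/inittab.sample)
echo "default runlevel: $level"    # default runlevel: 3
```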

Determining Your System"s Runlevel From time to time, you might be unsure just what runlevel your system is in. For example, you may have logged into a Linux system from a remote location and not know how it was booted or maintained. You may also need to know what runlevel your system was in prior to its current runlevel-perhaps wondering if the system was last in single-user mode for maintenance.

To determine this information, use the runlevel command. It displays the previous and current runlevel as integers, separated by a space, on standard output. If no runlevel change has occurred since the system was booted, the previous runlevel is displayed as the letter N. For a system that was in runlevel 3 and is now in runlevel 5, the output is:

# runlevel
3 5

For a system with a default runlevel of 5 that has just completed booting, the output would be:

# runlevel
N 5

runlevel does not alter the system runlevel. To do this, use the init command (or the historical alias telinit).
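The two-field output of runlevel is convenient in scripts. The sketch below parses a captured sample string rather than calling runlevel itself, since that command exists only on SysV-init systems:

```shell
# Sample output string, as if captured with: out=$(runlevel)
out="N 5"

# POSIX parameter expansion splits the two fields at the space.
prev=${out% *}    # everything before the space: previous runlevel
curr=${out#* }    # everything after the space: current runlevel
echo "previous=$prev current=$curr"    # previous=N current=5
```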

Changing runlevels with init and telinit

The init or telinit command sends signals to the executing init process, instructing it to change to a specified runlevel. You must be logged in as the superuser to use the init command.

Generally, you will use a runlevel change for the following reasons:

To shut down the system using runlevel 0
To go to single-user mode using runlevel 1
To reboot the system using runlevel 6

init

Syntax

init n

Description

The command puts the system into the specified runlevel, n, which can be an integer from 0 through 6. init also supports S and s, which are equivalent to runlevel 1, and q, which tells init to reread its configuration file, /etc/inittab.

Examples

Shut down immediately:

# init 0

Reboot immediately:

# init 6

Go to single-user mode immediately:

# init 1

or:

# init s

telinit

The telinit command may be used in place of init. telinit is simply a link to init, and the two may be used interchangeably.

System shutdown with shutdown

When a shutdown is initiated, all users who are logged into terminal sessions are notified that the system is going down. In addition, further logins are blocked to prevent new users from entering the system as it is being shut down.

Syntax

shutdown [options] time [warning_message]

Description

The shutdown command brings the system down in a secure, planned manner. By default, it takes the system to single-user mode. Options can be used to halt or reboot the system instead. The command internally uses init with an appropriate runlevel argument to effect the change.

The mandatory time argument tells the shutdown command when to initiate the shutdown procedure. It can be a time of day in the form hh:mm, or it can take the form +n, where n is a number of minutes to wait. time can also be the word now, in which case the shutdown proceeds immediately.

warning_message is sent to the terminals of all users to alert them that the shutdown will take place. If the time specified is more than 15 minutes away, the command waits until 15 minutes remain before shutdown to make its first announcement. No quoting is necessary in warning_message unless the message includes special characters such as * or '.

Frequently used options

-f Fast boot; this skips the filesystem checks on the next boot.

-h Halt after shutdown.

-k Don"t really shut down, but send the warning messages anyway.

-r Reboot after shutdown.

-F Force filesystem checks on the next boot.

Examples

To reboot immediately (not recommended on a system with human users, because they will have no chance to save their work):

# shutdown -r now

To reboot in five minutes with a maintenance message:

# shutdown -r +5 System maintenance is required

To halt the system just before midnight tonight:

# shutdown -h 23:59

Following are the two most common uses of shutdown by people who are on single-user systems:

# shutdown -h now

and:

# shutdown -r now

These cause an immediate halt or reboot, respectively.

Although it"s not really a bug, the shutdown shutdown manpage notes that omission of the required manpage notes that omission of the required time time argument yields unusual results. If you forget the argument yields unusual results. If you forget the time time argument, the command will probably exit without an error message. This might lead you to believe that a shutdown is starting, so it"s important to use the correct syntax. argument, the command will probably exit without an error message. This might lead you to believe that a shutdown is starting, so it"s important to use the correct syntax.

On the Exam

You need to be familiar with the default runlevels and the steps that the init process goes through in switching between them.

Chapter 5. Linux Installation and Package Management (Topic 102)

Many resources, such as the book Running Linux (O'Reilly), describe Linux installation. This section of the test does not cover the installation of any particular Linux distribution; rather, its Objectives focus on four installation Topics and packaging tools.

Objective 1: Design Hard Disk Layout

This Objective covers the ability to design a disk partitioning scheme for a Linux system. The Objective includes allocating filesystems or swap space to separate partitions or disks and tailoring the design to the intended use of the system. It also includes placing /boot on a partition that conforms with the BIOS's requirements for booting. Weight: 2.

Objective 2: Install a Boot Manager

An LPIC 1 candidate should be able to select, install, and configure a boot manager. This Objective includes providing alternative boot locations and backup boot options using either LILO or GRUB. Weight: 2.

Objective 3: Manage Shared Libraries

This Objective includes being able to determine the shared libraries that executable programs depend on and install them when necessary. The Objective also includes stating where system libraries are kept. Weight: 1.

Objective 4: Use Debian Package Management

This Objective indicates that candidates should be able to perform package management on Debian-based systems. This includes using both command-line and interactive tools to install, upgrade, or uninstall packages, as well as find packages containing specific files or software. Also included is obtaining package information such as version, content, dependencies, package integrity, and installation status. Weight: 3.

Objective 5: Use Red Hat Package Manager (RPM)

An LPIC 1 candidate should be able to use package management systems based on RPM. This Objective includes being able to install, reinstall, upgrade, and remove packages, as well as obtain status and version information on packages. Also included is obtaining package version, status, dependencies, integrity, and signatures. Candidates should be able to determine what files a package provides, as well as find which package a specific file comes from. Weight: 3.

Objective 1: Design a Hard Disk Layout

Part of the installation process for Linux is designing the hard disk partitioning scheme. If you're used to systems that reside on a single partition, this step may seem to complicate the installation. However, there are advantages to splitting the filesystem into multiple partitions and even onto multiple disks.

You can find more details about disks, partitions, and Linux filesystem top-level directories in Chapter 7. This Topic covers considerations for implementing Linux disk layouts.

System Considerations

A variety of factors influence the choice of a disk layout plan for Linux, including:

The amount of disk space
The size of the system
What the system will be used for
How and where backups will be performed

Limited disk space

Filesystems and partitions holding user data should be maintained with a maximum amount of free space to accommodate user activity. When considering the physical amount of disk space available, the system administrator may be forced to make a trade-off between the number of partitions in use and the availability of free disk space. Finding the right configuration depends on system requirements and available filesystem resources.

When disk space is limited, you may opt to reduce the number of partitions, thereby combining free space into a single contiguous pool. For example, installing Linux on a PC with only 1 GB of available disk space might best be implemented using only a few partitions:

/boot 50 MB. A small /boot filesystem in the first partition ensures that all kernels are below the 1024-cylinder limit for older kernels and BIOS.

/ 850 MB. A large root partition holds everything on the system that's not in /boot.

swap 100 MB.

Larger systems

On larger platforms, functional issues such as backup strategies and required filesystem sizes can dictate disk layout. For example, suppose a file server is to be constructed serving 100 GB of executable data files to end users via NFS. Such a system will need enough resources to compartmentalize various parts of the directory tree into separate filesystems and might look like this:

/boot 100 MB. Keep kernels under the 1024-cylinder limit.

swap 1 GB, depending on RAM.

/ 500 MB (minimum).

/usr 4 GB. All of the executables in /usr are shared to workstations via read-only NFS.

/var 2 GB. Since log files are in their own partition, they won't threaten system stability if the filesystem is full.

/tmp 500 MB. Since temporary files are in their own partition, they won't threaten system stability if the filesystem is full.

/home 90 GB. This big partition takes up the vast bulk of available space, offered to users for their home directories and data.

On production servers, much of the system is often placed on redundant media, such as mirrored disks. Large filesystems, such as /home, may be stored on some form of disk array using a hardware controller.

Mount points

Before you may access the various filesystem partitions created on the storage devices, you first must attach them to the directory tree. This process is referred to as mounting, and the directory at which a filesystem is attached is called a mount point. You must create the directories that you will use for mount points if they do not already exist. During system startup, mounting may be managed through the /etc/fstab file, which contains information about the filesystems to mount when the system boots and the directories on which they are to be mounted.
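An /etc/fstab built for the layouts above is just a whitespace-delimited table. The fragment below is illustrative (the device names are assumptions, not taken from a real system); pulling out the second field shows the mount-point convention:

```shell
# A sample fstab fragment, written to /tmp so no system file is touched.
cat > /tmp/fstab.sample <<'EOF'
/dev/hda1  /boot  ext3  defaults  1 2
/dev/hda2  /      ext3  defaults  1 1
/dev/hda3  swap   swap  defaults  0 0
EOF

# Fields: device, mount point, filesystem type, options, dump flag, fsck pass.
# The second field of each line is the mount point (or "swap" for swap space).
mounts=$(echo $(awk '{print $2}' /tmp/fstab.sample))
echo "$mounts"    # /boot / swap
```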

Superblock

A superblock is a block on each filesystem that contains metadata about the filesystem layout. The information contained in the block includes the type, size, and status of the mounted filesystem. The superblock is the Linux/Unix equivalent to Microsoft systems' file allocation table (FAT), which contains the information about the blocks holding the top-level directory. Since the information about the filesystem is important, Linux filesystems keep redundant copies of the superblock that may be used to restore the filesystem should it become corrupt.

MBR

The master boot record (MBR) is a very small program that contains information about your hard disk partitions and loads the operating system. This program is located in the first sector of the hard disk and is 512 bytes in size. If it becomes damaged, the operating system cannot boot. Therefore, it is important to back up the MBR so that you can replace a damaged copy if needed. To make a backup of the MBR from the hard drive and store a copy in your home directory, use the dd command. An example of such a backup command is:

# dd if=/dev/hda of=~/mbr.txt count=1 bs=512

The preceding example assumes that your hard drive is /dev/hda. With this command you are taking one copy (count=1) consisting of 512 bytes (bs=512) from /dev/hda (if=/dev/hda) and copying it to a file named mbr.txt in your home directory (of=~/mbr.txt).
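The same dd invocation can be rehearsed safely against an ordinary file instead of a disk device, which makes the count and bs arithmetic easy to verify:

```shell
# Create a fake 2048-byte "disk" (four 512-byte sectors) from /dev/zero.
dd if=/dev/zero of=/tmp/fakedisk bs=512 count=4 2>/dev/null

# Copy only the first 512-byte "sector", exactly as an MBR backup would.
dd if=/tmp/fakedisk of=/tmp/mbr.bak bs=512 count=1 2>/dev/null

# The backup file holds one sector: 512 bytes.
size=$(wc -c < /tmp/mbr.bak)
echo "backup size: $size bytes"
```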

If you need to restore the MBR, you may use the following command:

# dd if=~/mbr.txt of=/dev/hda count=1 bs=512

Booting from a USB device

Linux may be booted from a Live USB, similar to booting from a Live CD. One difference between booting from a USB device as opposed to a CD is that the data on the USB device may be modified and stored back onto the device. When using a Live USB distribution of Linux, you can take your operating system, favorite applications, and data files with you wherever you go. This is also useful if you have problems and are not able to boot your computer for some reason. You may be able to boot the system using the Live USB, access the hard drive, and troubleshoot the boot issue.

In order to boot from the USB device, you will need to make the USB device bootable. This requires setting up at least one partition on the USB device with the bootable flag set on the primary partition. An MBR must also be written to the USB device. There are many applications that can be used to create live USB distributions of Linux, including Fedora Live USB Creator and Ubuntu Live USB Creator. The computer's BIOS may also need to be configured to boot from USB.

Some older computers may not have support in the BIOS to boot from a USB device. In this case it is possible to redirect the computer to load the operating system from the USB device by using an initial bootable CD. The bootable CD boots the computer, loads the necessary USB drivers into memory, and then locates and loads the filesystem from the USB device.

System role The role of the system should also dictate the optimal disk layout. In a traditional Unix-style network with NFS file servers, most of the workstations won't necessarily need all of their own executable files. In the days when disk space was at a premium, this represented a significant savings in disk space. Although space on workstation disks isn't the problem it once was, keeping executables on a server still eliminates the administrative headache of distributing updates to workstations.

Backup Some backup schemes use disk partitions as the basic unit of system backup. In such a scenario, each of the filesystems listed in /etc/fstab is backed up separately, and they are arranged so that each filesystem fits within the size of the backup media. For this reason, the capabilities of the available backup device can play a role in determining the ultimate size of partitions.

Using the dd command as discussed earlier, you can back up each of the individual partitions. The command may also be used to back up an entire hard drive. To back up one hard drive to another, you would issue the following command, where if=/dev/hdx represents the hard drive you want to back up and of=/dev/hdy represents the target or destination drive of the backup:

dd if=/dev/hdx of=/dev/hdy

If you are just interested in making a backup of the partition layout, you can use the sfdisk command to create a copy of the partition table:

sfdisk -d /dev/hda > partition_backup.txt

Then, if you need to restore the partition table, you can use the sfdisk command again:

sfdisk /dev/hda < partition_backup.txt

General Guidelines Here are some guidelines for partitioning a Linux system: Keep the root filesystem (/) simple by distributing larger portions of the directory tree to other partitions. A simplified root filesystem is less likely to be corrupted.

Separate a small /boot partition below cylinder 1024 for installed kernels used by the system boot loader. This does not apply to newer BIOSes and kernels (e.g., 2.6.20).

Separate /var. Make certain it is big enough to handle your logs, spools, and mail, taking their rotation and eventual deletion into account.

Separate /tmp. Its size depends on the demands of the applications you run. It should be large enough to handle temporary files for all of your users simultaneously.

Separate /usr and make it big enough to accommodate kernel building. Making it standalone allows you to share it read-only via NFS.

Separate /home for machines with multiple users or any machine where you don't want to affect data during distribution software upgrades. For even better performance in multiuser environments, put /home on a disk array and use the Logical Volume Manager (LVM).

Set swap space to at least the same size as the main memory (twice the size is recommended).
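Putting these guidelines together, a layout for a hypothetical 8 GB workstation disk with 256 MB of RAM might look like the following. The device names, sizes, and ordering are purely illustrative; an actual layout depends on the system's role and resources, as noted below:

```
/dev/hda1   /boot    100 MB      kernels and boot loader files, below cylinder 1024
/dev/hda2   swap     512 MB      twice the 256 MB of main memory
/dev/hda3   /        500 MB      kept small and simple
/dev/hda5   /usr     3 GB        room for kernel building; shareable read-only via NFS
/dev/hda6   /var     1 GB        logs, spools, and mail
/dev/hda7   /tmp     500 MB      temporary files for all users
/dev/hda8   /home    remainder   user data
```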

On the Exam Since a disk layout is the product of both system requirements and available resources, no single example can represent the best configuration. Factors to remember include placing the old 2.2.x kernel below cylinder 1024, effectively utilizing multiple disks, sizing partitions to hold various directories such as /var and /usr, and the importance of the root filesystem and swap space size.

Objective 2: Install a Boot Manager Although it is possible to boot Linux from a floppy disk, most Linux installations boot from the computer's hard disk. This is a two-step process that begins after the system BIOS is initialized and ready to run an operating system. Starting Linux consists of the following two basic phases: Run the boot loader from the boot device It is the boot loader's job to find the selected kernel and get it loaded into memory, including any user-supplied options.

Launch the Linux kernel and start processes Your boot loader starts the specified kernel. The boot loader's job at this point is complete, and the hardware is placed under the control of the running kernel, which sets up shop and begins running processes.

All Linux systems require some sort of boot loader, whether it's simply bootstrap code on a floppy disk or an application such as LILO or GRUB. Because the popularity of GRUB has grown, LPI has added it to the second release of the 101 exams.

LILO LILO is a small utility designed to load the Linux kernel (or the boot sector of another operating system) into memory and start it. A program that performs this function is commonly called a boot loader. LILO consists of two parts: The boot loader This is a two-stage program intended to find and load a kernel. It operates in two stages because the boot sector of the disk is too small to hold the entire boot loader program. The code located in the boot sector is compact because its only function is to launch the second stage, which is the interactive portion. The first stage resides in the MBR or the boot sector of the boot partition of the hard disk. This is the code that is started at boot time by the system BIOS. It locates and launches a second, larger stage of the boot loader that resides elsewhere on disk. The second stage offers a user prompt to allow boot-time and kernel image selection options, finds the kernel, loads it into memory, and launches it.

The lilo command Also called the map installer, the lilo command is used to install and configure the LILO boot loader. The command reads a configuration file that describes where to find kernel images, video information, the default boot disk, and so on. It encodes this information along with physical disk information and writes it in files for use by the boot loader.
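A minimal /etc/lilo.conf illustrates the kind of information the map installer reads. The kernel version, device names, and labels below are examples rather than values from the text:

```
boot=/dev/hda            # write the boot loader to the MBR of the first disk
map=/boot/map            # map file created by the lilo command
install=/boot/boot.b     # file containing the boot loader stages
prompt                   # display the LILO: prompt at boot
timeout=50               # wait 5 seconds before booting the default image
default=linux            # label of the image to boot by default

image=/boot/vmlinuz-2.4.18   # kernel to boot
    label=linux              # name shown at the LILO: prompt
    root=/dev/hda1           # root filesystem for this kernel
    read-only                # mount root read-only so fsck can run safely
```

After editing this file, the lilo command must be run again; the changes do not take effect until the map installer rewrites the boot sector.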

The boot loader When the system BIOS launches LILO, it presents you with the following prompt:

LILO:

The LILO prompt is designed to allow you to select from multiple kernels or operating systems installed on the computer and to pass parameters to the kernel when it is loaded. Pressing the Tab key at the LILO prompt yields a list of available kernel images. One of the listed images will be the default, as designated by an asterisk next to its name:

LILO: linux*  linux_586_smp  experimental

Under many circumstances, you won't need to select a kernel at boot time because LILO will boot the kernel configured as the default during the install process. However, if you later create a new kernel, have special hardware issues, or are operating your system in a dual-boot configuration, you may need to use some of LILO's options to load the kernel or operating system you desire.

The LILO map installer and its configuration file Before any boot sequence can complete from your hard disk, the boot loader and associated information must be installed by the LILO map installer utility. The lilo command writes the portion of LILO that resides in the MBR, customized for your particular system. Your installation program creates a correct MBR, but you'll have to repeat the command manually if you build a new kernel yourself.

LILO locations During installation, LILO can be placed either in the boot sector of the disk or in your root partition. If the system is intended as a Linux-only system, you won't need to worry about other boot loaders, and LILO can safely be placed into the boot sector. However, if you're running another operating system, you should place its boot loader in the boot sector. Multiple-boot and multiple-OS configurations are beyond the scope of the LPIC Level 1 exams.

On the Exam It is important to understand the distinction between lilo, the map installer utility run interactively by the system administrator, and the boot loader, which is launched by the system BIOS at boot time. Both are parts of the LILO package.

GRUB GRUB is a multistage boot loader, much like LILO. It is much more flexible than LILO, as it includes support for booting arbitrary kernels on various filesystem types and for booting several different operating systems. Configuration changes also take effect immediately, without the need to run a map installer command.

GRUB device naming GRUB refers to disk devices as follows: (xdn[,m]) Here, xd is either fd or hd, for floppy disk or hard disk, respectively. The n refers to the number of the disk as seen by the BIOS, starting at 0. The optional ,m denotes the partition number, also starting at 0.

The following are examples of valid GRUB device names: (fd0) The first floppy disk (hd0) The first hard disk (hd0,1) The second part.i.tion on the first hard disk Note that GRUB does not distinguish between IDE and SCSI/SATA disks. It refers only to the order of the disks as seen by the BIOS, which means that the device number that GRUB uses for a given disk will change on a system with both IDE and SCSI/SATA if the boot order is changed in the BIOS.
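This naming scheme appears throughout GRUB's configuration file (menu.lst or grub.conf, depending on the distribution). The following hypothetical entry boots a kernel from the first partition of the first disk; the kernel version and device names are illustrative:

```
default=0          # boot the first title entry by default
timeout=5          # wait 5 seconds before booting it

title Linux                                     # name shown in the GRUB menu
    root (hd0,0)                                # /boot lives on the first partition of the first disk
    kernel /vmlinuz-2.6.20 ro root=/dev/hda1    # kernel image and its boot options
    initrd /initrd-2.6.20.img                   # optional initial RAM disk
```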
