Name mkdir Syntax mkdir [options] directories Description Create one or more directories. You must have write permission in the directory where the directories are to be created.
Frequently used options -m mode Set the access rights in the octal format mode for directories.
-p Create intervening parent directories if they don't exist.
Examples Create a read-only directory named personal: $ mkdir -m 444 personal Create a directory tree in your home directory, as indicated with a leading tilde (~), using a single command: $ mkdir -p ~/dir1/dir2/dir3 In this case, all three directories are created. This is faster than creating each directory individually.
On the Exam Verify your understanding of the tilde (~) shortcut for the home directory, and the shortcuts . (for the current directory) and .. (for the parent directory).
Name mv Syntax mv [options] source target Description Move or rename files and directories. For targets on the same filesystem (partition), moving a file doesn't relocate the contents of the file itself. Rather, the directory entry for the target is updated with the new location. For targets on different filesystems, such a change can't be made, so files are copied to the target location and the original sources are deleted.
If a target file or directory does not exist, source is renamed to target. If a target file already exists, it is overwritten with source. If target is an existing directory, source is moved into that directory. If source is one or more files and target is a directory, the files are moved into the directory.
Frequently used options -f Force the move even if target exists, suppressing warning messages.
-i Query interactively before moving files.
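Examples For illustration (these invocations are not from the original text; file1, file2, notes.txt, and dir1 are hypothetical names), the behaviors described above look like this in practice:
$ mv file1 file2
Renames file1 to file2, silently overwriting an existing file2.
$ mv -i file1 file2
The same rename, but you are prompted before an existing file2 is overwritten.
$ mv file1 notes.txt dir1
Moves both files into the existing directory dir1.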
Name rm Syntax rm [options] files Description Delete one or more files from the filesystem. To remove a file, you must have write permission in the directory that contains the file, but you do not need write permission on the file itself. The rm command also removes directories when the -d, -r, or -R option is used.
Frequently used options -d Remove directories even if they are not empty. This option is reserved for privileged users.
-f Force removal of write-protected files without prompting.
-i Query interactively before removing files.
-r, -R If the file is a directory, recursively remove the entire directory and all of its contents, including subdirectories.
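Examples A few illustrative invocations (not from the original text; file1 and olddir are hypothetical):
$ rm -i file1
Prompts before deleting file1.
$ rm -r olddir
Recursively deletes olddir, its files, and its subdirectories.
$ rm -rf olddir
The same removal, forced: write-protected files are deleted without prompting.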
Name rmdir Syntax rmdir [option] directories Description Delete directories, which must be empty.
Frequently used option -p Remove directories and any intervening parent directories that become empty as a result. This is useful for removing subdirectory trees. On the Exam Remember that recursive remove using rm -R removes directories too, even if they're not empty. Beware the dreaded rm -Rf /, which will remove your entire filesystem!
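Example As a quick sketch of the -p option (the directory names are arbitrary), create a small tree and then remove it with a single command:
$ mkdir -p dir1/dir2/dir3
$ rmdir -p dir1/dir2/dir3
The rmdir command removes dir3 first, then dir2, then dir1, because each parent directory becomes empty as soon as its child is removed.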
Name touch Syntax touch [options] files Description Change the access and/or modification times of files. This command is used to refresh timestamps on files. Doing so may be necessary, for example, to cause a program to be recompiled using the date-dependent make utility.
Frequently used options -a Change only the access time.
-m Change only the modification time.
-t timestamp Instead of the current time, use timestamp in the form of [[CC]YY]MMDDhhmm[.ss]. For example, the timestamp for January 12, 2001, at 6:45 p.m. is 200101121845.
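Example Using the timestamp from the option description above (the filename file1 is hypothetical):
$ touch -t 200101121845 file1
This sets both the access and modification times of file1 to 6:45 p.m. on January 12, 2001. Adding -a or -m would limit the change to just one of the two times.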
Objective 4: Use Streams, Pipes, and Redirects Among the many beauties of Linux and Unix systems is the notion that everything is a file. Things such as disk drives and their partitions, tape drives, terminals, serial ports, the mouse, and even audio are mapped into the filesystem. This mapping allows programs to interact with many different devices and files in the same way, simplifying their interfaces. Each device that uses the file metaphor is given a device file, which is a special object in the filesystem that provides an interface to the device. The kernel associates device drivers with various device files, which is how the system manages the illusion that devices can be accessed as if they were files. Using a terminal as an example, a program reading from the terminal's device file will receive characters typed at the keyboard. Writing to the terminal causes characters to appear on the screen. Although it may seem odd to think of your terminal as a file, the concept provides a unifying simplicity to Linux and Linux programming.
Standard I/O and Default File Descriptors Standard I/O is a capability of the shell, used with all text-based Linux utilities to control and direct program input, output, and error information. When a program is launched, it is automatically provided with three file descriptors. File descriptors are regularly used in programming and serve as a "handle" of sorts to another file. We have mentioned these already in our discussion of text streams and "piping" together programs on the command line. Standard I/O creates the following file descriptors: Standard input (abbreviated stdin) This file descriptor is a text input stream. By default it is attached to your keyboard. When you type characters into an interactive text program, you are feeding them to standard input. As you've seen, some programs take one or more filenames as command-line arguments and ignore standard input. Standard input is also known as file descriptor 0.
Standard output (abbreviated stdout) This file descriptor is a text output stream for normal program output. By default it is attached to your terminal (or terminal window). Output generated by commands is written to standard output for display. Standard output is also known as file descriptor 1.
Standard error (abbreviated stderr) This file descriptor is also a text output stream, but it is used exclusively for errors or other information unrelated to the successful results of your command. By default, standard error is attached to your terminal just like standard output. This means that standard output and standard error are commingled in your display, which can be confusing. You'll see ways to handle this later in this section. Standard error is also known as file descriptor 2.
Standard output and standard error are separated because it is often useful to process normal program output differently from errors.
The standard I/O file descriptors are used in the same way as those created during program execution to read and write disk files. They enable you to tie commands together with files and devices, managing command input and output in exactly the way you desire. The difference is that they are provided to the program by the sh.e.l.l by default and do not need to be explicitly created.
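As a small illustration of how the two output streams are commingled on your display (a sketch; /no/such/file is deliberately a nonexistent name), list one file that exists and one that does not:
$ ls /etc/passwd /no/such/file
The listing of /etc/passwd is written to standard output, while the complaint about the missing file is written to standard error, yet both appear together on your terminal. The exact wording of the error message depends on your version of ls.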
Pipes From a program's point of view there is no difference between reading text data from a file and reading it from your keyboard. Similarly, writing text to a file and writing text to a display are equivalent operations. As an extension of this idea, it is also possible to tie the output of one program to the input of another. This is accomplished using a pipe symbol (|) to join two or more commands together, which we have seen some examples of already in this chapter. For example: $ grep "01523" order* | less This command searches through all files whose names begin with order to find lines containing the word 01523. By creating this pipe, the standard output of grep is sent to the standard input of less. The mechanics of this operation are handled by the shell and are invisible to the user. Pipes can be used in a series of many commands. When more than two commands are put together, the resulting operation is known as a pipeline or text stream, implying the flow of text from one command to the next.
As you get used to the idea, you'll find yourself building pipelines naturally to extract specific information from text data sources. For example, suppose you wish to view a sorted list of inode numbers from among the files in your current directory. There are many ways you could achieve this. One way would be to use awk in a pipeline to extract the inode number from the output of ls, then send it on to the sort command and finally to a pager for viewing (don't worry about the syntax or function of these commands at this point): $ ls -i * | awk '{print $1}' | sort -nu | less The pipeline concept in particular is a feature of Linux and Unix that draws on the fact that your system contains a diverse set of tools for operating on text. Combining their capabilities can yield quick and easy ways to extract otherwise hard-to-handle information. This is embodied in the historical "Unix Philosophy": Write programs that do one thing and do it well.
Write programs to work together.
Write programs to handle text streams, because that is a universal interface.
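As one more minimal pipeline in the same spirit (an added sketch, not one of the examples above), you can count the users currently logged in by feeding the output of who to wc:
$ who | wc -l
who writes one line per login session to standard output, and wc -l simply counts those lines.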
Redirection Each pipe symbol in the previous pipeline example instructs the shell to feed output from one command into the input of another. This action is a special form of redirection, which allows you to manage the origin of input streams and the destination of output streams. In the previous example, individual programs are unaware that their output is being handed off to or from another program because the shell takes care of the redirection on their behalf.
Redirection can also occur to and from files. For example, rather than sending the output of an inode list to the pager less, it could easily be sent directly to a file with the > redirection operator: $ ls -i * | awk '{print $1}' | sort -nu > in.txt With this change to the end of the pipeline, the shell creates an empty file (in.txt) and opens it for writing, and the standard output of sort places the results in the file instead of on the screen. Note that, in this example, anything sent to standard error is still displayed on the screen. In addition, if your specified file, in.txt, already existed in your current directory, it would be overwritten.
Since the > redirection operator creates files, the >> redirection operator can be used to append to existing files. For example, you could use the following command to append a one-line footnote to in.txt: $ echo "end of list" >> in.txt Since in.txt already exists, the quote will be appended to the bottom of the existing file. If the file didn't exist, the >> operator would create the file and insert the text "end of list" as its contents.
It is important to note that when creating files, the output redirection operators are interpreted by the shell before the commands are executed. This means that any output files created through redirection are opened first. For this reason you cannot modify a file in place, like this: $ grep "stuff" file1 > file1 If file1 contains something of importance, this command would be a disaster because an empty file1 would overwrite the original. The grep command would be last to execute, resulting in a complete data loss from the original file1 file because the file that replaced it was empty. To avoid this problem, simply use an intermediate file and then rename it: $ grep "stuff" file1 > file2 $ mv file2 file1 Standard input can also be redirected, using the redirection operator <. Using a source other than the keyboard for a program's input may seem odd at first, but since text programs don't care about where their standard input streams originate, you can easily redirect input. For example, to send a mail message with the contents of in.txt to user jdean, you will use the following command: $ mail jdean < in.txt
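To see the same principle with a different tool (assuming the in.txt file created earlier), wc behaves identically whether it opens the file itself or reads it from standard input, except that with redirection it never learns the filename:
$ wc -l in.txt
$ wc -l < in.txt
The first command prints the line count followed by the name in.txt; the second prints only the count, because the shell opened the file and wc saw nothing but an anonymous input stream.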
Table 6-6. Standard I/O redirections for the bash shell

Redirection function                          Syntax for bash
Send stdout to file.                          $ cmd > file
                                              $ cmd 1> file
Send stderr to file.                          $ cmd 2> file
Send both stdout and stderr to file.          $ cmd > file 2>&1
Send stdout to file1 and stderr to file2.     $ cmd > file1 2> file2
Receive stdin from file.                      $ cmd < file
Append stdout to file.                        $ cmd >> file
                                              $ cmd 1>> file
Append stderr to file.                        $ cmd 2>> file
Append both stdout and stderr to file.        $ cmd >> file 2>&1
Pipe stdout from cmd1 to cmd2.                $ cmd1 | cmd2
Pipe stdout and stderr from cmd1 to cmd2.     $ cmd1 2>&1 | cmd2
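The rows of Table 6-6 are often combined. For example (a sketch; the search and the output filename are arbitrary), you can keep a find listing while discarding its permission-denied complaints by sending standard output to a file and standard error to /dev/null:
$ find / -name core > corefiles.txt 2> /dev/null
Writing > corefiles.txt 2>&1 instead would capture both streams in the same file.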
On the Exam Be prepared to demonstrate the difference between filenames and command names in commands using redirection operators. Also, check the syntax on commands in redirection questions to be sure about which command or file is a data source and which is a destination.
Using the tee Command Sometimes you'll want to run a program and send its output to a file while at the same time viewing the output on the screen. The tee utility is helpful in this situation.
The xargs Command Sometimes you need to pass a list of items to a command that is longer than your shell can handle. In these situations, the xargs command can be used to break down the list into smaller sublists.
Name tee Syntax tee [options] files Description Read from standard input and write both to one or more files and to standard output (analogous to a tee junction in a pipe).
Option -a Append to files rather than overwriting them.
Example Suppose you're running a pipeline of commands cmd1, cmd2, and cmd3: $ cmd1 | cmd2 | cmd3 > file1 This sequence puts the ultimate output of the pipeline into file1. However, you may also be interested in the intermediate result of cmd1. To create a new file_cmd1 containing those results, use tee: $ cmd1 | tee file_cmd1 | cmd2 | cmd3 > file1 The results in file1 will be the same as in the original example, and the intermediate results of cmd1 will be placed in file_cmd1.
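As a further sketch showing the -a option (session.log is a hypothetical filename), you can append each run's output to a growing log while still seeing it on the screen:
$ ls -l | tee -a session.log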
Name xargs Syntax xargs [options] [command] [initial-arguments]
Description Execute command followed by its optional initial-arguments and append additional arguments found on standard input. Typically, the additional arguments are filenames in quantities too large for a single command line. xargs runs command multiple times to exhaust all arguments on standard input.
Frequently used options -n maxargs Limit the number of additional arguments to maxargs for each invocation of command.
-p Interactive mode. Prompt the user for each execution of command.
Example Use grep to search a long list of files, one by one, for the word "linux": $ find / -type f | xargs -n 1 grep -H linux find searches for normal files (-type f) starting at the root directory. xargs executes grep once for each of them due to the -n 1 option. grep will print the matching line preceded by the filename where the match occurred (due to the -H option).
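To watch xargs form its sublists without needing a large set of files (an added sketch), feed it six words and limit each invocation of echo to two arguments:
$ echo one two three four five six | xargs -n 2 echo
one two
three four
five six
Each output line corresponds to one execution of echo, which is exactly how -n breaks an oversized argument list into smaller command lines.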
Objective 5: Create, Monitor, and Kill Processes This Objective looks at the management of processes. Just as file management is a fundamental system administrator's function, the management and control of processes is also essential for smooth system operation. In most cases, processes will live, execute, and die without intervention from the user because they are automatically managed by the kernel. However, there are times when a process will die for some unknown reason and need to be restarted. Or a process may "run wild" and consume system resources, requiring that it be terminated. You will also need to instruct running processes to perform operations, such as rereading a configuration file.
Processes Every program, whether it's a command, application, or script, that runs on your system is a process. Your shell is a process, and every command you execute from the shell starts one or more processes of its own (referred to as child processes). Attributes and concepts associated with these processes include: Lifetime A process lifetime is defined by the length of time it takes to execute (while it "lives"). Commands with a short lifetime such as ls will execute for a very short time, generate results, and terminate when complete. User programs such as web browsers have a longer lifetime, running for unlimited periods of time until terminated manually. Long-lifetime processes include server daemons that run continuously from system boot to shutdown. When a process terminates, it is said to die (which is why the program used to manually signal a process to stop execution is called kill; succinct, though admittedly morbid).
Process ID (PID) Every process has a number assigned to it when it starts. PIDs are integer numbers unique among all running processes.
User ID (UID) and Group ID (GID) Processes must have associated privileges, and a process's UID and GID are associated with the user who started the process. This limits the process's access to objects in the filesystem.
Parent process The first process started by the kernel at system start time is a program called init. This process has PID 1 and is the ultimate parent of all other processes on the system. Your shell is a descendant of init and the parent process to commands started by the shell, which are its child processes, or subprocesses.
Parent process ID (PPID) This is the PID of the process that created the process in question.
Environment Each process holds a list of variables and their associated values. Collectively, this list is known as the environment of the process, and the variables are called environment variables. Child processes inherit their environment settings from the parent process unless an alternative environment is specified when the program is executed.
Current working directory The default directory associated with each process. The process will read and write files in this directory unless they are explicitly specified to be elsewhere in the filesystem. On the Exam The parent/child relationship of the processes on a Linux system is important. Be sure to understand how these relationships work and how to view them. Note that the init process always has PID 1 and is the ultimate ancestor of all system processes (hence the nickname "mother of all processes"). Also remember that if a parent process is killed, all its children (subprocesses) die as well.
Process Monitoring At any time, there could be tens or even hundreds of processes running together on your Linux system. Monitoring these processes is done using three convenient utilities: ps, pstree, and top.
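For instance (a brief sketch of typical invocations), you might start with:
$ ps -ef | less
$ pstree -p | less
$ top
The first shows a full listing of every process, one per line; the second draws the same processes as a parent/child tree annotated with PIDs; the third presents a continuously updating display sorted by CPU usage.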
Signaling Active Processes Each process running on your system listens for signals, simple messages sent to the process either by the kernel or by a user. The messages are sent through inter-process communication. They are single-valued, in that they don't contain strings or command-like constructs. Instead, signals are numeric integer messages, predefined and known by processes. Most have an implied action for the process to take. When a process receives a signal, it can (or may be forced to) take action.
For example, if you are executing a program from the command line that appears to hang, you may elect to type Ctrl-C to abort the program. This action actually sends a SIGINT (interrupt signal) to the process, telling it to stop running.
There are more than 32 signals defined for normal process use in Linux. Each signal has a name and a number (the number is sent to the process; the name is only for our convenience). Many signals are used by the kernel, and some are useful for users. Table 6-7 lists popular signals for interactive use.
Table 6-7. Frequently used interactive signals

Signal name[a]  Number  Meaning and use
HUP  1  Hang up. This signal is sent automatically when you log out or disconnect a modem. It is also used by many daemons to cause the configuration file to be reread without stopping the daemon process. Useful for things like an httpd server that normally reads its configuration file only when the process is started. A SIGHUP signal will force it to reread the configuration file without the downtime of restarting the process.
INT  2  Interrupt; stop running. This signal is sent when you type Ctrl-C.
KILL  9  Kill; stop unconditionally and immediately. Sending this signal is a drastic measure, as it cannot be ignored by the process. This is the "emergency kill" signal.
TERM  15  Terminate, nicely if possible. This signal is used to ask a process to exit gracefully, after its file handles are closed and its current processing is complete.
TSTP  20  Stop executing, ready to continue. This signal is sent when you type Ctrl-Z. (See the later section "Shell Job Control" for more information.)
CONT  18  Continue execution. This signal is sent to start a process stopped by SIGTSTP or SIGSTOP. (The shell sends this signal when you use the fg or bg commands after stopping a process with Ctrl-Z.)
[a] Signal names often will be specified with a SIG prefix. That is, signal HUP is the same as signal SIGHUP.
As you can see from Table 6-7, some signals are invoked by pressing well-known key combinations such as Ctrl-C and Ctrl-Z. You can also use the kill command to send any message to a running process. The kill command is implemented both as a shell built-in command and as a standalone binary command. For a complete list of signals that processes can be sent, refer to the file /usr/include/bits/signum.h on your Linux install, which normally is installed with the glibc-headers package.
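As a sketch of sending a signal by hand (the PID 1234 is hypothetical; in practice you would look it up with ps), the following equivalent forms all deliver SIGHUP, which a daemon such as the httpd example above would interpret as a request to reread its configuration:
$ kill -HUP 1234
$ kill -1 1234
$ kill -s HUP 1234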
Terminating Processes Occasionally, you'll find a system showing symptoms of high CPU load or one that runs out of memory for no obvious reason. This often means an application has gone out of control on your system. You can use ps or top to identify processes that may be having a problem. Once you know the PID for the process, you can use the kill command to stop the process nicely with SIGTERM (kill -15 PID), escalating the signal to higher strengths if necessary until the process terminates.
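A typical escalation might look like this (the PID 4901 is hypothetical and would come from ps or top):
$ kill -15 4901
$ kill 4901
$ kill -9 4901
The first two are equivalent, since SIGTERM is the default signal. Give the process a moment to exit gracefully before resorting to kill -9, because SIGKILL gives it no chance to clean up.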
Note Occasionally you may see a process displayed by ps or top that is listed as a zombie. These are processes that are stuck while trying to terminate and are appropriately said to be in the zombie state. Just as in the cult classic film Night of the Living Dead, you can't kill zombies, because they're already dead! If you have a recurring problem with zombies, there may be a bug in your system software or in an application.
Killing a process will also kill all of its child processes. For example, killing a shell will kill all the processes initiated from that shell, including other shells.
Shell Job Control Linux and most modern Unix systems offer job control, which is the ability of your shell (with support of the kernel) to stop and restart executing commands, as well as place them in the background where they can be executed. A program is said to be in the foreground when it is attached to your terminal. When executing in the background, you have no input to the process other than sending it signals. When a process is put in the background, you create a job. Each job is assigned a job number, starting at 1 and numbering sequentially.
The basic reason to create a background process is to keep your shell session free. There are many instances when a long-running program will never produce a result from standard output or standard error, and your shell will simply sit idle waiting for the program to finish. Noninteractive programs can be placed in the background by adding a & character to the command. For example, if you start firefox from the command line, you don't want the shell to sit and wait for it to terminate. The shell will respond by starting the web browser in the background and will give you a new command prompt. It will also issue the job number, denoted in square brackets, along with the PID. For example: $ /usr/bin/firefox & [1] 1748 Here, firefox is started as a background process. Firefox is assigned to job 1 (as denoted by [1]), and is assigned PID 1748. If you start a program and forget the & character, you can still put it in the background by first typing Ctrl-Z to stop it: ^Z [1]+ Stopped firefox Then, issue the bg command to restart the job in the background: $ bg [1]+ /usr/bin/firefox & When you exit from a shell with jobs in the background, those processes may die. The utility nohup can be used to protect the background processes from the hangup signal (SIGHUP) that they might otherwise receive when the shell dies. This can be used to simulate the detached behavior of a system daemon.
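As a sketch of the nohup usage just described (long_job.sh is a hypothetical script), start a command that should keep running after you log out:
$ nohup ./long_job.sh &
Unless you redirect it elsewhere, output that would have gone to the terminal is appended to a file named nohup.out in the current directory.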
Putting interactive programs in the background can be quite useful. Suppose you're logged into a remote Linux system, running Emacs in text mode. Rather than exit from the editor when you need to drop back to the shell, you can simply press Ctrl-Z. This stops Emacs, puts it in the background, and returns you to a command prompt. When you are finished, you resume your Emacs session with the fg command, which puts your stopped job back into the foreground.
Background jobs and their status can be listed by issuing the jobs command. Stopped jobs can be brought to the foreground with the fg command and optionally placed into the background with the Ctrl-Z and bg sequence.
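Tying these together with the hypothetical Firefox job from above, a typical interaction looks something like this (the exact formatting of the jobs output varies slightly between shells):
$ jobs
[1]+ Running /usr/bin/firefox &
$ fg %1
The %1 job specifier refers to job number 1; with only a single job, plain fg and bg are sufficient.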