Essential System Administration, 2nd Edition

By AEleen Frisch
2nd Edition September 1995
1-56592-127-5, Order Number: 1275
788 pages, $39.95

Chapter 3
Essential Administrative Tools

Getting Help
Piping into grep and awk
Finding Files
Repeating Commands
Creating Several Directory Levels at Once
Duplicating an Entire Directory Tree
Comparing Directories
Deleting Pesky Files
Starting at the End

The right tools make any job easier, and the lack of them can make some tasks almost impossible. When you need an Allen wrench, nothing other than an Allen wrench will really do. On the other hand, if you need a Phillips head screwdriver, you might be able to make do with a pocket knife, and occasionally it will even work better.

This chapter considers ways that the commands and utilities Unix provides can make system administration easier. Sometimes that will mean turning common user commands to administrative tasks, sometimes it will mean putting commands together in unexpected ways, and sometimes it will mean making smarter and more efficient use of familiar tools. And, once in a while, what will make your life easier is creating tools for users to use, so that they can handle some things for themselves. We'll return to this last topic later in the book.

Getting Help

The man facility is the quintessential Unix approach to online help: superficially minimalist, often obscure, but mostly complete once you know your way around it.

Undoubtedly, the basics of the man command are familiar: getting help for a command, specifying a specific section, using -k (or apropos) to search for entries for a specific topic,[1] and so on.

[1] Not available on all systems; later in the book, we'll look at a script that fixes this deficiency.

There are a couple of man features that I didn't discover until I'd been working with Unix systems for years (I'd obviously never bothered to run man man). The first is that you can request multiple manual pages within a single man command:

$ man umount fsck newfs

man will present the pages as separate files to the display program, and you can move among them using its normal method (for example, with :n in more).

On many systems, man also has a -a option, which retrieves the specified manual page(s) from every section of the manual. For example, the first command below will display the introductory manual page for every section for which one is available, and the second will display the manual pages for both the chown command and the chown system call:

$ man -a intro
$ man -a chown

Changing the Search Order

The man command searches the various manual page sections in a predefined order: commands first, followed by system calls and library functions, and then the other sections (for example, 1, 6, 8, 2, 3, 4, 5, 7). The first manual page matching the one specified on the command line is displayed. In some cases, a different order might make more sense, and some systems allow the search order to be customized.

They use the same mechanism for specifying the desired order: setting an option in the man facility's configuration file (on some systems the variable is called ORDER, set in /etc/default/man; on others it is MANSECTS, set in a configuration file under /usr/share/man). In each case, you provide a list of sections, in the order in which you want them searched. A typical customization brings the administrative command sections to the beginning of the list. Depending on the system, the list is comma- or colon-separated.
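The example configuration line itself was lost from this copy of the text; a plausible sketch of such a setting (the exact section names vary by system, so treat these as illustrative only):

```shell
# Hypothetical search-order setting: administrative sections (1m, 8) first.
MANSECTS=1m,8,1,6,2,3,4,5,7
```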

Setting up man -k

It's probably worth mentioning how to get man -k to work if your system claims to support it, but nothing comes back when you use it.[2] This command (and its alias apropos) uses a data file listing all available manual pages (usually called /usr/share/man/whatis, although some systems name the file windex). The file often must be created initially by the system administrator, and it may also need to be updated from time to time.

[2] Some systems do not support this function; we'll remedy this deficiency later in the book.

On most systems, the command to create the index file is:

# catman -w &

It generally takes a little while to run, so you'll probably want to run it in the background.

On some systems, /usr/lib/makewhatis performs this function, creating the file /usr/share/catman/whatis by default.

The situation on Linux systems depends on which distribution you installed. Most provide an initial whatis file and usually include the /usr/lib/makewhatis utility for updating or re-creating it. Sometimes, however, makewhatis is missing from a distribution, so you'll have to hunt for a version. Fortunately, several versions of makewhatis are widely available; see the Bibliography for information about finding software on the Internet.

Piping into grep and awk

As you undoubtedly already know, the grep command searches its input for lines containing a given pattern. Users commonly use grep to search files. What might be new is some of the ways grep is useful in pipes with many administrative commands. For example, if you want to find out about all of a certain user's current processes, pipe the output of the ps command to grep and search for her username:

% ps aux | grep chavez
chavez      8684 89.5  9.6 27680 5280 ?  R N  85:26 /home/j90/l988
root       10008 10.0  0.8 1408  352 p2 S     0:00 grep chavez
chavez      8679  0.0  1.4 2048  704 ?  I N   0:00 -csh (csh)
chavez      8681  0.0  1.3 2016  672 ?  I N   0:00 /usr/nqs/sc1
chavez      8683  0.0  1.3 2016  672 ?  I N   0:00 csh -cb rj90
chavez      8682  0.0  2.6 1984 1376 ?  I N   0:00 j90

This example uses the BSD version of ps, with the options that list every single process on the system,[3] and then uses grep to pick out the ones belonging to user chavez. If you'd like the header line from ps included as well, use a command like:

[3] Under System V, the corresponding command is of course ps -ef.

% ps -aux | egrep 'chavez|PID'

Now that's a lot to type every time, but you could define an alias if your shell supports them. For example, in the C shell you could use this one:

% alias pu "ps -aux | egrep '\!:1|PID'"
% pu chavez
chavez   8684 89.5  9.6 27680 5280 ?  R N  85:26 /home/j90/l988

Another useful place for grep is with man -k. For instance, I once needed to figure out where the error log file was on a new system -- the machine kept displaying annoying messages from the error log indicating that disk 3 had had a hardware failure. Now, I already knew that, and it had even been fixed. I tried man -k error: 64 matches; man -k log was even worse: 122 manual pages. But man -k log | grep error produced only 9 matches, including a nifty command to blast error log entries older than a given number of days.

If all of this fancy pipe fitting seems excessive to you, be assured that I'm not telling you about it for its own sake. The more you know the ins and outs of commands -- both basic and obscure -- the better prepared you'll be for the inevitable unexpected events that you will face. For example, you'll be able to come up with an answer quickly when the division director (or department chair or whoever) wants to know what percentage of the aggregate disk space in a local area network is used by the chem group. Virtuosity and wizardry needn't be goals in themselves, but they will help you develop two of the seven cardinal virtues of system administration: Flexibility and Ingenuity. (I'll tell you what the others are in future chapters.)

The awk command is also a useful component in pipes. It can be used to selectively manipulate the output of other commands in a more general way than grep. A complete discussion of awk is beyond the scope of this book, but a few examples will show you some of its capabilities and enable you to investigate others on your own.

One thing awk is good for is picking out and possibly rearranging columns within command output. For example, the following command produces a list of all users running the doom game:

$ ps -ef | grep "[d]oom" | awk '{print $1}'

The awk command prints only the first field from each line of ps output passed to it by grep. The search string for grep may strike you as odd, since the brackets enclose only a single character. It is constructed that way so that the ps line for the grep command itself will not be selected (since the string "doom" does not appear in it). It's basically a trick to avoid having to add grep -v grep to the pipe between the grep and awk commands.

Once you've generated the list of usernames, you can do what you need to with it. One possibility is simply to record the information in a file:

$ (date ; ps -ef | grep "[d]oom" | awk '{print $1 " [" $7 "]"}' \
   | sort | uniq) >> doomed.users

This command sends the list of users currently playing doom, along with the CPU time used so far enclosed in square brackets, to the file doomed.users, preceding the list with the current date and time. We'll see a couple of other ways to use such a list in the course of this chapter. (In case you're wondering, the square brackets around the "d" in the grep command prevent the grep command itself from being included in the list returned by ps.)

awk can also be used to sum up a column of numbers. For example, this command searches the entire local filesystem for files owned by user chavez and adds up all of their sizes:

# find / -user chavez -fstype 4.2 ! -name /dev/\* -ls | awk \
  '{sum+=$7}; END {print "User chavez total disk use = " sum}'
User chavez total disk use = 41987453

The awk component of this command accumulates a running total of the seventh column of the find output, which holds the number of bytes in each file, and prints the final value after the last line of its input has been processed. awk can also compute averages: here, the average number of bytes per file would be given by the expression sum/NR, placed in the command's END clause. NR is an awk internal variable holding the number of the current input line; by the time the END clause runs, it therefore contains the total number of lines read.
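The accumulate-and-report pattern is easy to verify on canned input (the three sample values below stand in for real file sizes):

```shell
# Sum a column with awk, then report total and average in the END clause.
printf '100\n200\n300\n' |
  awk '{ sum += $1 }
       END { print "total = " sum; print "average = " sum/NR }'
# prints: total = 600
#         average = 200
```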

awk can be used in a similar way with the date command to generate a filename based upon the current date. For example, the following command places the output of the sys_doc script into a file named for the current date and host:

$ sys_doc  > `date | awk '{print $3 $2 $6}'`.`hostname`.sysdoc

If this command were run on October 24, 1999 on host ophelia, the filename generated by the command would be 24Oct1999.ophelia.sysdoc.
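You can check the awk field selection against a fixed sample of traditional date output, so the result doesn't depend on when you run it:

```shell
# In "Sun Oct 24 10:00:00 PDT 1999", $3 is the day, $2 the month, $6 the year.
echo 'Sun Oct 24 10:00:00 PDT 1999' | awk '{print $3 $2 $6}'
# prints: 24Oct1999
```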

Recent implementations of date allow it to generate such strings on its own, eliminating the need for awk. The following command illustrates these features. It constructs a unique filename for a scratch file by telling date to display the literal string "junk_" followed by the day of the month, short form month name, 2-digit year, and hour, minutes and seconds of the current time, ending with the literal string ".junk":

$ date +junk_%d%b%y%H%M%S.junk
junk_08Dec94204256.junk

We'll see more examples of grep and awk later in this chapter.

Finding Files

Another common command of great use to a system administrator is find. find is one of those commands that you wonder how you ever lived without -- once you learn it. It has one of the most obscure manual pages in the canon, so I'll spend a bit of time explaining it (skip ahead if it's already familiar).

find locates files having certain common characteristics, which you specify, anywhere on the system that you tell it to look. Conceptually,[4] find has the following syntax:

[4] Syntactically, find does not distinguish between file-selection options and action-related options, but it is often helpful to think of them as separate types as you learn to use find.

# find  starting-dir(s)  matching-criteria-and-actions

Starting-dir(s) is the set of directories where find should start looking for files. By default, find searches all directories underneath the listed directories. Thus, specifying / as the starting directory would search the entire filesystem.

The matching-criteria tell find what sorts of files you want to look for. Some of the most useful are shown in Table 3.1. Starred items are extensions to the generic find command that are not yet offered by all versions.

Table 3.1: find Matching Criteria
-atime n      File was last accessed exactly n days ago.
-mtime n      File was last modified exactly n days ago.
-newer file   File was modified more recently than file was.
-size n       File is exactly n 512-byte blocks long.
-type c       Specifies the file type: f=plain file, d=directory, etc.
-fstype typ   Specifies filesystem type.*
-name nam     The filename is nam.
-perm p       The file's access mode is p.
-user usr     The file's owner is usr.
-group grp    The file's group owner is grp.
-nouser       The file's owner is not listed in the password file.*
-nogroup      The file's group owner is not listed in the group file.*

These may not seem all that useful -- why would you want a file accessed exactly three days ago, for instance? However, you may precede time periods, sizes, and other numeric quantities with a plus sign (meaning "more than") or a minus sign (meaning "less than") to get more useful criteria. Here are some examples:

-mtime +7     Last modified more than 7 days ago
-atime -2     Last accessed less than 2 days ago
-size +100    Larger than 50K (100 512-byte blocks)

You can also include wildcards with the -name option, provided that you quote them. For example, the criterion -name '*.dat' specifies all filenames ending in .dat.

Multiple conditions are joined with AND by default. Thus, to look for files last accessed more than two months ago and last modified more than four months ago, you would use these options:

-atime +60 -mtime +120

Options may also be joined with -o for OR combination, and grouping is allowed using escaped parentheses. For example, the matching criteria below specifies files last accessed more than seven days ago or last modified more than 30 days ago:

\( -atime +7 -o -mtime +30 \)

An exclamation point may be used for NOT (be sure to quote it if you're using the C shell). For example, the matching criteria below specify all .dat files except gold.dat:

! -name gold.dat -name \*.dat
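A quick way to convince yourself of how these criteria combine is to try them in a scratch directory (the path and filenames below are arbitrary):

```shell
# Build a throwaway directory and apply the NOT example to it.
d=/tmp/find_demo.$$
mkdir -p "$d"
touch "$d/gold.dat" "$d/silver.dat" "$d/notes.txt"
find "$d" ! -name gold.dat -name '*.dat'    # lists only silver.dat
rm -rf "$d"
```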

The -perm option allows you to search for files with a specific access mode (numeric form). Using an unsigned value specifies files with exactly that permission setting, and preceding the value with a minus sign searches for files with at least the specified access.[5] Here are some examples:

-perm 755      Permission = rwxr-xr-x
-perm -002     World-writable files
-perm -4000    SUID access is set
-perm -2000    SGID access is set

The action options tell find what to do with each file it locates that matches all the specified criteria. Some available actions are shown in Table 3.2 (with starred items again not available in all versions of find).

[5] In other words, the specified permission mode is ANDed with the file's permission setting: the file must have at least the specified bits set.

Table 3.2: find Actions
-print       Display pathname of matching file.
-ls          Display long directory listing for matching file.*
-exec cmd    Execute command on file.
-ok cmd      Prompt before executing command on file.
-xdev        Restrict the search to the filesystem of the starting directory.
-mount       Same as -xdev on some systems.
-prune       Don't descend into directories encountered.*

The default on many newer systems is -print, although forgetting to include it on older systems will result in a successful command with no output. Commands for -exec and -ok must end with an escaped semicolon (\;). The form {} may be used in commands as a placeholder for the pathname of each found file. For example, to delete each matching file as it is found, specify the following option:

-exec rm -f {} \;

Note that there are no spaces between the opening and closing curly braces.

Now let's put the parts together. The command below lists the pathname of all C source files under the current directory:

$ find . -name \*.c -print 

The starting directory is "." (the current directory), the matching criteria specify filenames ending in .c, and the action is to display the pathname of each matching file. This is a typical user application of find. Other common uses include searching for misplaced files and feeding file lists to cpio.

find has many administrative uses as well.

For example, find may be used to locate large disk files. The command below displays a long directory listing for all files under /chem larger than 1 MB (2048 512-byte blocks) that haven't been modified in a month:

$ find /chem -size +2048 -mtime +30 -exec ls -l {} \;

Of course, we could also use -ls rather than the -exec clause. To search for files either not modified in a month or not accessed in four months, use this command:

$ find /chem -size +2048 \( -mtime +30 -o -atime +120 \) -ls

Such old, large files might be candidates for tape backup and deletion if disk space is short.

find can also delete files automatically as it finds them. The following is a typical administrative use of find, designed to automatically delete old junk files on the system:

# find / \( -name a.out -o -name core -o -name '*~'\
    -o -name '.*~' -o -name '#*#' \) -type f -atime +14 \
    -exec rm -f {} \; -o -fstype nfs -prune

This command searches the entire filesystem and removes various editor backup files, core dump files, and random executables (a.out) that haven't been accessed in two weeks and that don't reside on a remotely mounted filesystem. The logic is messy: the final -o ORs everything that precedes it with everything that follows it, and the two halves are evaluated separately. Thus, the command as a whole finds files that match either of two sets of criteria: the file has one of the listed junk names, is a plain file, and hasn't been accessed in 14 days; or the file resides on an NFS-mounted filesystem.

If the first criteria set is true, the file gets removed; if the second set is true, a "prune" action takes place, which says "don't descend any lower into the directory tree." Thus, every time find comes across an NFS-mounted filesystem, it will move on, rather than searching its entire contents as well.
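The delete-as-you-find pattern is worth rehearsing in a scratch directory before aiming it at a real filesystem; here is a minimal, safe version (paths and filenames are arbitrary):

```shell
# Remove junk files by name from a throwaway tree, then show what survives.
d=/tmp/cleanup_demo.$$
mkdir -p "$d"
touch "$d/core" "$d/a.out" "$d/keep.c"
find "$d" -type f \( -name core -o -name a.out \) -exec rm -f {} \;
ls "$d"          # only keep.c remains
rm -rf "$d"
```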

Matching criteria and actions may be placed in any order, and they are evaluated from left to right. For example, the following find command lists all regular files under the directories /home and /aux1 that are larger than 500K and were last accessed over 30 days ago (the options from -type through -print); additionally, it removes those named core:

# find /home /aux1 -type f -atime +30 -size +1000 -print \
    -name core -exec rm {} \;

find also has security uses. For example, the following find command lists all files that have SUID or SGID access set:

# find / -type f \( -perm -2000 -o -perm -4000 \) -print

The output from this command could be compared to a saved list of SUID and SGID files, in order to locate any newly created ones requiring investigation (the name of the saved list, suid.save here, is arbitrary):

# find / \( -perm -2000 -o -perm -4000 \) -print | \
  diff - suid.save

find may also be used to perform the same operation on a selected group of files. For example, the command below changes the ownership of all the files under user chavez's home directory to user chavez and group physics:

# find /home/chavez -exec chown chavez {} \; \
    -exec chgrp physics {} \;

This command gathers all C source files anywhere under /chem into the directory /chem1/src:

# find /chem -name '*.c' -exec mv {} /chem1/src \;

Similarly, this command runs the script prettify on every C source file under /chem:

# find /chem -name '*.c' -exec /usr/local/bin/prettify {} \;

Note that the full pathname for the script is included in the -exec clause.

Repeating Commands

find is one solution when you need to perform the same operation on a group of files. The xargs command is another way of automating similar commands on a group of objects; xargs is more flexible than find because it can operate on any set of objects, regardless of what kind they are, while find is limited to files.

xargs is most often used as the final component of a pipe. It appends the items it reads from standard input to the command given as its argument. For example, the following command increases the nice number of all doom processes by 10, thereby lowering each process's priority:

# ps -ef | grep "[d]oom" | awk '{print $2}' | xargs renice +10

The pipe preceding the xargs command extracts the process ID from the second column of the ps output for each instance of doom, and then xargs runs renice using all of them. The renice command takes multiple process ID's as its arguments, so there is no problem sending all of the PID's to a single renice command as long as there are not a truly inordinate number of doom processes.
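You can see exactly what such a pipeline hands to xargs by substituting canned ps-style output and letting xargs run echo instead of renice (the process IDs below are made up for the demonstration):

```shell
# Simulated ps -ef lines: column 2 holds the PID, which awk extracts.
printf 'chavez  8684  1  0 10:00 ?  00:00 doom\nchavez  8679  1  0 10:01 ?  00:00 doom\n' |
  awk '{print $2}' | xargs echo renice +10
# prints: renice +10 8684 8679
```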

You can also tell xargs to send its incoming arguments to the specified command in groups by using its -n option, which takes the number of items to use at a time as its argument. If you wanted to run a script for each user who is currently running doom, for example, you could use this command:

# ps -ef | grep "[d]oom" | awk '{print $1}' | xargs -n1 warn_user

The xargs command will take each username in turn and use it as the argument to warn_user. So far, all of the xargs commands we've looked at have placed the incoming items at the end of the specified command. However, xargs also allows you to place each incoming line of input at a specified position within the command to be executed. To do so, you include its -i option and use the form {} as a placeholder for each incoming line within the command. For example, this command will run the chargefee utility for each user running doom, assessing them 100 units:

# ps -ef | grep "[d]oom" | awk '{print $1}' | xargs -i chargefee {} 100

If curly braces are needed elsewhere within the command, you can specify a different pair of placeholder characters as the argument to -i.
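Both forms can be tried harmlessly with echo standing in for the real commands (note that some modern xargs versions spell -i as -I {}; the usernames here are invented):

```shell
# -n1: one argument per command; -i: substitute each input line at the {}.
printf 'ann\nbob\n' | xargs -n1 echo warn_user
printf 'ann\nbob\n' | xargs -i echo chargefee {} 100
# prints: warn_user ann
#         warn_user bob
#         chargefee ann 100
#         chargefee bob 100
```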

Substitutions like this can get rather complicated. xargs's -t option displays each constructed command before executing it, and the -p option lets you execute commands selectively by prompting you before each one. Using both of them together provides the safest execution mode and also enables you to nondestructively debug a command or script by answering no for every offered command.

-i and -n don't interact the way you might think they would. Consider this command:

$ echo a b c d e f | xargs -n3 -i echo before {} after
before a b c d e f after
$ echo a b c d e f | xargs -i -n3 echo before {} after
before {} after a b c
before {} after d e f

You might expect that these two commands would be equivalent and that they would both produce two lines of output:

before a b c after
before d e f after

However, neither command gives you this, and the two commands do not operate identically. What is happening is that -i and -n conflict with one another, and the one appearing last wins. So, in the first command, -i is what is operative, and each line of input is inserted into the echo command, while in the second command, the -n3 option is used, and three arguments are placed at the end of each echo command and the curly braces are treated as literal characters.

The reason our first use of -i worked properly is because the usernames are coming from separate lines in the ps command output, and these lines are retained as they flow through the pipe to xargs.

If you want xargs to execute commands containing pipes, I/O redirection, compound commands joined with semicolons, and so on, there's a bit of a trick: use the -c option to a shell to execute the desired command. I occasionally want to look at the final lines of a group of files and then view all of them a screen at a time; in other words, I'd like to run a command like this and have it "work":

$ tail test00* | more

This command displays lines only from the last file. However, I can use xargs to get what I want:

$ ls -1 test00* | xargs -i /usr/bin/sh -c \
  'echo "****** {}:"; tail -15 {}; echo ""' | more

This displays the last 15 lines of each file, preceded by a header line containing the filename and followed by a blank line for readability.

You can use a similar method for lots of other kinds of repetitive operations. For example, this command will sort and dedup all of the .dat files in the current directory:

$ ls -1 *.dat | xargs -i /usr/bin/sh -c \
  "sort {} | uniq > junk.junk ; mv junk.junk {}"

Creating Several Directory Levels at Once

Many people are unaware of the options offered by the mkdir command. These options allow you to set the file mode at the same time as you create a new directory and to create multiple levels of subdirectories with a single command, both of which can make your use of mkdir much more efficient.

For example, each of the following two commands sets the mode on the new directory to rwxr-xr-x, using mkdir's -m option:

$ mkdir -m 755 ./people
$ mkdir -m u=rwx,go=rx ./places

You can use either a numeric mode or a symbolic mode as the argument to the -m option. You can also use a relative symbolic mode, as in this example:

$ mkdir -m g+w ./things

In this case, the mode changes are applied to the default mode as set with the umask command.

mkdir's -p option tells it to create any missing parents required for the subdirectories specified as its arguments. For example, the following command will create the subdirectories ./a and ./a/b if they do not already exist and then create ./a/b/c:

$ mkdir -p ./a/b/c

The same command without -p will give an error if all of the parent subdirectories are not already present.
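A scratch-directory check of both behaviors (the path is arbitrary; errors from the second mkdir are discarded so only the message appears):

```shell
d=/tmp/mkdir_demo.$$
mkdir -p "$d/a/b/c" && echo "created a/b/c"
mkdir "$d/x/y" 2>/dev/null || echo "fails without -p"
rm -rf "$d"
# prints: created a/b/c
#         fails without -p
```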

Duplicating an Entire Directory Tree

It is fairly common to need to move or duplicate an entire directory tree, preserving not only the directory structure and file contents but also the ownership and mode settings for every file. There are several ways to accomplish this, using tar, cpio, and sometimes even cp. I'll focus on tar, and look briefly at the others at the end of this section.

Let's make this task more concrete and assume that we want to copy the directory /chem/olddir as /chem1/newdir (in other words, we want to change the name of the olddir subdirectory as part of duplicating its entire contents). We can take advantage of tar's -p option, which restores ownership and access modes along with the files from an archive (it must be run as root to set file ownership), and use these commands to create the new directory tree:

# cd /chem1

# tar -cf - -C /chem olddir | tar -xvpf -
# mv olddir newdir

The first tar command creates an archive consisting of /chem/olddir and all of the files and directories underneath it and writes it to standard output (indicated by the - argument of the -f option). The -C option sets the current directory for the first tar command to /chem. The second tar command extracts files from standard input (again indicated by f -), retaining their previous ownership and protection. The second tar command gives detailed output (requested with the -v option). The final mv command changes the name of the newly created subdirectory of /chem1 to newdir.
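The double-tar pipeline can be rehearsed on a small scratch tree to confirm that both contents and access modes survive the trip (all paths here are arbitrary stand-ins for /chem and /chem1):

```shell
src=/tmp/treesrc.$$; dst=/tmp/treedst.$$
mkdir -p "$src/olddir/sub" "$dst"
echo hello > "$src/olddir/sub/file"
chmod 640 "$src/olddir/sub/file"
# Archive olddir relative to $src, extract into $dst preserving modes.
(cd "$dst" && tar -cf - -C "$src" olddir | tar -xpf -)
cat "$dst/olddir/sub/file"                                    # prints: hello
ls -l "$dst/olddir/sub/file" | awk '{print substr($1,1,10)}'  # -rw-r-----
rm -rf "$src" "$dst"
```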

If you want only a subset of the files and directories under olddir to be copied to newdir, you would vary the previous commands slightly. For example, these commands copy the src, bin, and data subdirectories and the logfile and .profile files from olddir to newdir, duplicating their ownership and protection.

# mkdir /chem1/newdir

set ownership and protection for newdir if necessary
# cd /chem1/newdir
# tar -cvf - -C /chem/olddir src bin data logfile .profile | tar -xvpf -

The first two commands are only necessary if /chem1/newdir does not already exist. Wildcards within the list of items to be copied do not work.

This command performs a similar operation, copying only a single branch of the subtree under olddir:

# mkdir /chem1/newdir
set ownership and protection for newdir if necessary
# cd /chem1/newdir
# tar -cvf - -C /chem/olddir src/viewers/rasmol | tar -xvpf -

These commands will create /chem1/newdir/src and its viewers subdirectory, but will place nothing in them except rasmol.

If you prefer cpio to tar, it will perform similar functions. For example, this command will copy the entire olddir tree to /chem1 (again as newdir):

# mkdir /chem1/newdir
set ownership and protection for newdir if necessary
# cd /chem1/olddir
# find . -print | cpio -pdvm /chem1/newdir

The cpio command's -p option selects pass mode, which copies the file list to the destination directory; the -d option creates directories as needed, and -m preserves modification times. File ownerships are retained when cpio is run as root.

On most of the systems we are considering, the cp command has a -p option as well, and these commands would create newdir:

# cp -pr /chem/olddir /chem1
# mv /chem1/olddir /chem1/newdir

The -r option stands for recursive and causes cp to duplicate the source directory structure in the new location.

Be aware that tar works differently than cp does in the case of symbolic links. tar will recreate links in the new location, while cp converts symbolic links to files.

Comparing Directories

Over time, the two directories we considered in the last section will undoubtedly both change. At some future point, you might need to determine the differences between them. dircmp is a special-purpose utility designed to perform this very operation.[6] dircmp takes the directories to be compared as its arguments:

[6] As of this writing, not available on all systems. However, diff -r provides similar functionality.

$ dircmp /chem/olddir /chem1/newdir

dircmp produces voluminous output even when the directories you're comparing are small. There are two main sections to the output. The first one lists files that are present in only one of the two directory trees:

Mon Jan 4 1995 /chem/olddir only and /chem1/newdir only  Page 1
./water.dat                  ./hf.dat
./src/viewers/rasmol/init.c  ./h2f.dat

All pathnames in the report are relative to the directory locations specified on the command line. In this case, the files in the left column are present only under /chem/olddir while those in the right column are present only at the new location.

The second part of the report indicates whether the files present in both directory trees are the same or not. Here are some typical lines from this section of the report:

same                           ./h2o.dat
different                      ./hcl.dat

The default output from dircmp indicates only whether the corresponding files are the same or not, and sometimes this is all you need to know. If you want to know exactly what the differences are, you can include the -d option to dircmp, which tells it to run diff for each pair of differing files (of course, this works only for text files). On the other hand, if you want to decrease the amount of output by limiting the second section of the report to files that differ, include the -s option on the dircmp command.
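On systems without dircmp, the diff -r alternative mentioned in the footnote yields similar information. A small scratch example (paths and filenames are arbitrary; the `|| true` keeps diff's nonzero exit status from stopping a script):

```shell
a=/tmp/cmp_a.$$; b=/tmp/cmp_b.$$
mkdir -p "$a" "$b"
echo same  > "$a/h2o.dat"; echo same > "$b/h2o.dat"   # identical in both trees
echo alpha > "$a/hcl.dat"; echo beta > "$b/hcl.dat"   # present in both, differs
echo x > "$a/water.dat"                               # present only in $a
diff -r "$a" "$b" || true    # reports the hcl.dat difference and "Only in" water.dat
rm -rf "$a" "$b"
```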

Deleting Pesky Files

When I teach courses for new users, one of the early exercises consists of figuring out how to delete the files –delete_me and delete me (with the embedded space in the second case).[7] Occasionally, however, a user will wind up with a file that he just can't get rid of, no matter how creative he is in using rm. At that point, he will come to you. If there is a way to get rm to do the job, show it to him, but there are some files that rm just can't handle. For example, it is possible for some buggy application program to put a file into a bizarre, inconsistent state: quasi-directories that rm interprets as such but rmdir doesn't, files that ls lists but rm can't find, and the like. Users can also create such files if they experiment with certain filesystem manipulation tools (which they shouldn't be using in the first place).

[7] There are lots of solutions. One of the simplest is: rm delete\ me ./-delete_me

One tool that can take care of such intransigent files is the directory editor feature of the GNU emacs text editor (see , for information about where to find this widely-available software).

This is the procedure for deleting a file with emacs: invoke its directory editor, dired, on the directory containing the file (for example, with C-x d), move the cursor to the file's entry, press d to flag it for deletion, and then press x to carry out the deletion (confirming when prompted). Because dired presents filenames literally, even names full of control characters or leading dashes can be selected unambiguously.

emacs can also be useful for viewing directory contents when they include files with bizarre characters embedded within them. The most amusing example of this that I can cite is a user who complained to me that the ls command always beeped at him every time he ran it. It turned out that this only happened in his home directory, and it was due to a file with a CTRL-G in the middle of the name. The filename looked fine in ls listings because the CTRL-G character was being interpreted, causing the beep. Control characters become visible when you look at the directory in emacs, and so the problem was easily diagnosed and remedied (using the "r" subcommand to emacs's directory editing mode that renames a file).

Starting at the End

Perhaps it's appropriate that we consider the tail command near the end of this chapter on administrative tools. tail's principal function is to display the last 10 lines of a file (or standard input). tail also has a -f option that displays new lines as they are added to the end of a file; this mode can be useful for monitoring, for instance, the progress of a tar command. These commands start a background backup with tar, saving its output to a file, and monitor the operation using tail -f:

$ tar -cvf /dev/rmt1 /chem /chem1 > 24oct94_tar.toc &
$ tail -f 24oct94_tar.toc

The information that tar displays about each file as it is written to tape is eventually written to the table of contents file and displayed by tail. The advantage that this method has over the tee command is that the tail command may be killed and restarted as many times as you like without affecting the tar command.

Some versions of tail also include a -r option, which displays the lines of a file in reverse order; this is occasionally useful. Not all versions support this option; GNU systems provide the same functionality in the tac command.

Be Creative

As a final example of the creative use of ordinary commands, consider the following dilemma. A user tells you his workstation won't reboot. He says he was changing his system's boot script but may have deleted some files in /etc accidentally. You go over to it, type ls and get a message about some missing shared libraries. How do you poke around and find out what files are there?

The answer is to use the simplest command there is, echo (which is simple enough not to need shared libraries), along with the wildcard mechanism built into every shell. To see all the files in the current directory, just type:

$ echo *

which tells the shell to display the value of "*", which of course expands to all files not beginning with a period in the current directory.
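You can watch the shell do this expansion in a scratch directory (again, the path is arbitrary):

```shell
d=/tmp/echo_demo.$$
mkdir -p "$d"
touch "$d/alpha" "$d/beta" "$d/.hidden"
(cd "$d" && echo *)    # prints: alpha beta   (the dot file is not included)
rm -rf "$d"
```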

By using echo together with cd (also a built-in shell command), I was able to get a pretty good idea of what had happened. I'll tell you the rest of this story at the end of Chapter 4, Startup and Shutdown.


© 2001, O'Reilly & Associates, Inc.