GNU Coreutils

Short Table of Contents

Table of Contents

GNU Coreutils

This manual documents version 8.32 of the GNU core utilities, including the standard programs for text and file manipulation.

Copyright © 1994-2020 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.

1 Introduction

This manual is a work in progress: many sections make no attempt to explain basic concepts in a way suitable for novices. Thus, if you are interested, please get involved in improving this manual. The entire GNU community will benefit.

The GNU utilities documented here are mostly compatible with the POSIX standard.

Please report bugs to bug-coreutils@gnu.org. Include the version number, machine architecture, input files, and any other information needed to reproduce the bug: your input, what you expected, what you got, and why it is wrong.

If you have a problem with sort or date , try using the --debug option, as it can often help find and fix problems without having to wait for an answer to a bug report. If the debug output does not suffice to fix the problem on your own, please compress and attach it to the rest of your bug report.

Although diffs are welcome, please include a description of the problem as well, since this is sometimes difficult to infer. See Bugs in Using and Porting GNU CC .

This manual was originally derived from the Unix man pages in the distributions, which were written by David MacKenzie and updated by Jim Meyering. What you are reading now is the authoritative documentation for these utilities; the man pages are no longer being maintained. The original fmt man page was written by Ross Paterson. François Pinard did the initial conversion to Texinfo format. Karl Berry did the indexing, some reorganization, and editing of the results. Brian Youmans of the Free Software Foundation office staff combined the manuals for textutils, fileutils, and sh-utils to produce the present omnibus manual. Richard Stallman contributed his usual invaluable insights to the overall process.

2 Common options

Certain options are available in all of these programs. Rather than writing identical descriptions for each of the programs, they are described here. (In fact, every GNU program accepts (or should accept) these options.)

Normally options and operands can appear in any order, and programs act as if all the options appear before any operands. For example, ‘ sort -r passwd -t : ’ acts like ‘ sort -r -t : passwd ’, since ‘ : ’ is an option-argument of -t . However, if the POSIXLY_CORRECT environment variable is set, options must appear before operands, unless otherwise specified for a particular command.

A few programs can usefully have trailing operands with leading ‘ - ’. With such a program, options must precede operands even if POSIXLY_CORRECT is not set, and this fact is noted in the program description. For example, the env command’s options must appear before its operands, since in some cases the operands specify a command that itself contains options.

Most programs that accept long options recognize unambiguous abbreviations of those options. For example, ‘ rmdir --ignore-fail-on-non-empty ’ can be invoked as ‘ rmdir --ignore-fail ’ or even ‘ rmdir --i ’. Ambiguous options, such as ‘ ls --h ’, are identified as such.

Some of these programs recognize the --help and --version options only when one of them is the sole command line argument. For these programs, abbreviations of the long options are not always recognized.

‘ --help ’
Print a usage message listing all available options, then exit successfully.

‘ --version ’
Print the version number, then exit successfully.

‘ -- ’
Delimit the option list. Later arguments, if any, are treated as operands even if they begin with ‘ - ’. For example, ‘ sort -- -r ’ reads from the file named -r .

A single ‘ - ’ operand is not really an option, though it looks like one. It stands for a file operand, and some tools treat it as standard input, or as standard output if that is clear from the context. For example, ‘ sort - ’ reads from standard input, and is equivalent to plain ‘ sort ’. Unless otherwise specified, a ‘ - ’ can appear as any operand that requires a file name.
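For example, the following shows ‘ - ’ standing for standard input:

```shell
# Feed three lines to sort via standard input; '-' names stdin explicitly.
printf '3\n1\n2\n' | sort -
# 1
# 2
# 3
# The same result is produced by plain 'sort' with no operand.
```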

2.1 Exit status

Nearly every command invocation yields an integral exit status that can be used to change how other commands work. For the vast majority of commands, an exit status of zero indicates success. Failure is indicated by a nonzero value—typically ‘ 1 ’, though it may differ on unusual platforms as POSIX requires only that it be nonzero.

However, some of the programs documented here do produce other exit status values and a few associate different meanings with the values ‘ 0 ’ and ‘ 1 ’. Here are some of the exceptions: chroot , env , expr , nice , nohup , numfmt , printenv , sort , stdbuf , test , timeout , tty .
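For instance, expr is one of the commands that associates its own meaning with the values ‘ 0 ’ and ‘ 1 ’: it exits with status 1 when the expression evaluates to zero or to the empty string, even though no error occurred. A short sketch:

```shell
expr 1 + 1   # prints 2; exit status is 0
echo $?      # prints 0

expr 1 - 1   # prints 0; exit status is 1, because the result is zero
echo $?      # prints 1
```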

2.2 Backup options

Some GNU programs (at least cp , install , ln , and mv ) optionally make backups of files before writing new versions. These options control the details of these backups. The options are also briefly mentioned in the descriptions of the particular programs.

‘ -b ’
‘ --backup[= method ] ’
Make a backup of each file that would otherwise be overwritten or removed. Without this option, the original versions are destroyed. Use method to determine the type of backups to make. When this option is used but method is not specified, then the value of the VERSION_CONTROL environment variable is used. And if VERSION_CONTROL is not set, the default backup type is ‘ existing ’.

Note that the short form of this option, -b , does not accept any argument. Using -b is equivalent to using --backup=existing .

This option corresponds to the Emacs variable ‘ version-control ’; the values for method are the same as those used in Emacs. This option also accepts more descriptive names. The valid method s are (unique abbreviations are accepted):

‘ none ’
‘ off ’
Never make backups.

‘ numbered ’
‘ t ’
Always make numbered backups.

‘ existing ’
‘ nil ’
Make numbered backups of files that already have them, simple backups of the others.

‘ simple ’
‘ never ’
Always make simple backups. Please note ‘ never ’ is not to be confused with ‘ none ’.

‘ -S suffix ’
‘ --suffix= suffix ’
Append suffix to each backup file made with -b . If this option is not specified, the value of the SIMPLE_BACKUP_SUFFIX environment variable is used. And if SIMPLE_BACKUP_SUFFIX is not set, the default is ‘ ~ ’, just as in Emacs.
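A short session (file names here are invented for illustration) showing numbered backups with cp :

```shell
cd "$(mktemp -d)"
printf 'old\n' > data.txt
printf 'new\n' > incoming.txt

# Overwrite data.txt, first saving the old version as data.txt.~1~.
cp --backup=numbered incoming.txt data.txt

cat data.txt          # new
cat 'data.txt.~1~'    # old
```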

2.3 Block size

Some GNU programs (at least df , du , and ls ) display sizes in “blocks”. You can adjust the block size and method of display to make sizes easier to read. The block size used for display is independent of any file system block size. Fractional block counts are rounded up to the nearest integer.

The default block size is chosen by examining the following environment variables in turn; the first one that is set determines the block size.

DF_BLOCK_SIZE
This specifies the default block size for the df command. Similarly, DU_BLOCK_SIZE specifies the default for du and LS_BLOCK_SIZE for ls .

BLOCK_SIZE
This specifies the default block size for all three commands, if the above command-specific environment variables are not set.

BLOCKSIZE
This specifies the default block size for all values that are normally printed as blocks, if neither BLOCK_SIZE nor the above command-specific environment variables are set. Unlike the other environment variables, BLOCKSIZE does not affect values that are normally printed as byte counts, e.g., the file sizes contained in ls -l output.

POSIXLY_CORRECT
If neither command _BLOCK_SIZE , nor BLOCK_SIZE , nor BLOCKSIZE is set, but this variable is set, the block size defaults to 512.

If none of the above environment variables are set, the block size currently defaults to 1024 bytes in most contexts, but this number may change in the future. For ls file sizes, the block size defaults to 1 byte.

A block size specification can be a positive integer specifying the number of bytes per block, or it can be human-readable or si to select a human-readable format. Integers may be followed by suffixes that are upward compatible with the SI prefixes for decimal multiples and with the ISO/IEC 80000-13 (formerly IEC 60027-2) prefixes for binary multiples.

With human-readable formats, output sizes are followed by a size letter such as ‘ M ’ for megabytes. BLOCK_SIZE=human-readable uses powers of 1024; ‘ M ’ stands for 1,048,576 bytes. BLOCK_SIZE=si is similar, but uses powers of 1000 and appends ‘ B ’; ‘ MB ’ stands for 1,000,000 bytes.

A block size specification preceded by ‘ ' ’ causes output sizes to be displayed with thousands separators. The LC_NUMERIC locale specifies the thousands separator and grouping. For example, in an American English locale, ‘ --block-size="'1kB" ’ would cause a size of 1234000 bytes to be displayed as ‘ 1,234 ’. In the default C locale, there is no thousands separator so a leading ‘ ' ’ has no effect.

An integer block size can be followed by a suffix to specify a multiple of that size. A bare size letter, or one followed by ‘ iB ’, specifies a multiple using powers of 1024. A size letter followed by ‘ B ’ specifies powers of 1000 instead. For example, ‘ 1M ’ and ‘ 1MiB ’ are equivalent to ‘ 1048576 ’, whereas ‘ 1MB ’ is equivalent to ‘ 1000000 ’.
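The numfmt utility (also part of coreutils) is a convenient way to check this arithmetic; a sketch, assuming a numfmt recent enough to support the --from and --to options:

```shell
numfmt --from=iec 1M      # 1048576  (powers of 1024)
numfmt --from=si  1M      # 1000000  (powers of 1000)
numfmt --to=iec 1048576   # 1.0M
```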

A plain suffix without a preceding integer acts as if ‘ 1 ’ were prepended, except that it causes a size indication to be appended to the output. For example, ‘ --block-size="kB" ’ displays 3000 as ‘ 3kB ’.

The following suffixes are defined. Large sizes like 1Y may be rejected by your computer due to limitations of its arithmetic.

‘ kB ’
kilobyte: 10^3 = 1000.

‘ k ’ ‘ K ’ ‘ KiB ’
kibibyte: 2^{10} = 1024. ‘ K ’ is special: the SI prefix is ‘ k ’ and the ISO/IEC 80000-13 prefix is ‘ Ki ’, but tradition and POSIX use ‘ k ’ to mean ‘ KiB ’.

‘ MB ’
megabyte: 10^6 = 1,000,000.

‘ M ’ ‘ MiB ’
mebibyte: 2^{20} = 1,048,576.

‘ GB ’
gigabyte: 10^9 = 1,000,000,000.

‘ G ’ ‘ GiB ’
gibibyte: 2^{30} = 1,073,741,824.

‘ TB ’
terabyte: 10^{12} = 1,000,000,000,000.

‘ T ’ ‘ TiB ’
tebibyte: 2^{40} = 1,099,511,627,776.

‘ PB ’
petabyte: 10^{15} = 1,000,000,000,000,000.

‘ P ’ ‘ PiB ’
pebibyte: 2^{50} = 1,125,899,906,842,624.

‘ EB ’
exabyte: 10^{18} = 1,000,000,000,000,000,000.

‘ E ’ ‘ EiB ’
exbibyte: 2^{60} = 1,152,921,504,606,846,976.

‘ ZB ’
zettabyte: 10^{21} = 1,000,000,000,000,000,000,000.

‘ Z ’ ‘ ZiB ’
zebibyte: 2^{70} = 1,180,591,620,717,411,303,424.

‘ YB ’
yottabyte: 10^{24} = 1,000,000,000,000,000,000,000,000.

‘ Y ’ ‘ YiB ’
yobibyte: 2^{80} = 1,208,925,819,614,629,174,706,176.

Block size defaults can be overridden by an explicit --block-size= size option. The -k option is equivalent to --block-size=1K , which is the default unless the POSIXLY_CORRECT environment variable is set. The -h or --human-readable option is equivalent to --block-size=human-readable . The --si option is equivalent to --block-size=si . Note for ls the -k option does not control the display of the apparent file sizes, whereas the --block-size option does.

2.4 Floating point numbers

Commands that accept or produce floating point numbers employ the floating point representation of the underlying system, and suffer from rounding error, overflow, and similar floating-point issues. Almost all modern systems use IEEE-754 floating point, and it is typically portable to assume IEEE-754 behavior these days. IEEE-754 has positive and negative infinity, distinguishes positive from negative zero, and uses special values called NaNs to represent invalid computations such as dividing zero by itself. For more information, please see David Goldberg’s paper What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Commands that accept floating point numbers as options, operands or input use the standard C functions strtod and strtold to convert from text to floating point numbers. These floating point numbers therefore can use scientific notation like 1.0e-34 and -10e100 . Commands that parse floating point also understand case-insensitive inf , infinity , and NaN , although whether such values are useful depends on the command in question. Modern C implementations also accept hexadecimal floating point numbers such as -0x.ep-3 , which stands for -14/16 times 2^-3, which equals -0.109375. See Parsing of Floats in The GNU C Library Reference Manual .
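For instance, sort ’s general-numeric mode ( -g ) parses its keys with strtod , so scientific notation and infinities behave as described above:

```shell
printf '1e2\ninf\n-inf\n9.5\n' | sort -g
# -inf
# 9.5
# 1e2
# inf
```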

Normally the LC_NUMERIC locale determines the decimal-point character. However, some commands’ descriptions specify that they accept numbers in either the current or the C locale; for example, they treat ‘ 3.14 ’ like ‘ 3,14 ’ if the current locale uses comma as a decimal point.

2.5 Signal specifications

A signal may be a signal name like ‘ HUP ’, or a signal number like ‘ 1 ’, or an exit status of a process terminated by the signal. A signal name can be given in canonical form or prefixed by ‘ SIG ’. The case of the letters is ignored. The following signal names and numbers are supported on all POSIX compliant systems:

‘ HUP ’
1. Hangup.

‘ INT ’
2. Terminal interrupt.

‘ QUIT ’
3. Terminal quit.

‘ ABRT ’
6. Process abort.

‘ KILL ’
9. Kill (cannot be caught or ignored).

‘ ALRM ’
14. Alarm Clock.

‘ TERM ’
15. Termination.
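One place these names and numbers surface is in exit statuses: when a command is terminated by signal number N , shells typically report status 128+ N . A sketch using timeout (from coreutils) to deliver a signal:

```shell
# KILL is signal 9, so the terminated sleep yields status 128 + 9 = 137.
timeout -s KILL 1 sleep 10
echo $?   # 137
```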

Other supported signal names have system-dependent corresponding numbers. All systems conforming to POSIX 1003.1-2001 also support the following signals:

‘ BUS ’
Access to an undefined portion of a memory object.

‘ CHLD ’
Child process terminated, stopped, or continued.

‘ CONT ’
Continue executing, if stopped.

‘ FPE ’
Erroneous arithmetic operation.

‘ ILL ’
Illegal Instruction.

‘ PIPE ’
Write on a pipe with no one to read it.

‘ SEGV ’
Invalid memory reference.

‘ STOP ’
Stop executing (cannot be caught or ignored).

‘ TSTP ’
Terminal stop.

‘ TTIN ’
Background process attempting read.

‘ TTOU ’
Background process attempting write.

‘ URG ’
High bandwidth data is available at a socket.

‘ USR1 ’
User-defined signal 1.

‘ USR2 ’
User-defined signal 2.

POSIX 1003.1-2001 systems that support the XSI extension also support the following signals:

‘ POLL ’
Pollable event.

‘ PROF ’
Profiling timer expired.

‘ SYS ’
Bad system call.

‘ TRAP ’
Trace/breakpoint trap.

‘ VTALRM ’
Virtual timer expired.

‘ XCPU ’
CPU time limit exceeded.

‘ XFSZ ’
File size limit exceeded.

POSIX 1003.1-2001 systems that support the XRT extension also support at least eight real-time signals called ‘ RTMIN ’, ‘ RTMIN+1 ’, …, ‘ RTMAX-1 ’, ‘ RTMAX ’.

2.6 chown, chgrp, chroot, id: Disambiguating user names and IDs

Since the user and group arguments to these commands may be specified as names or numeric IDs, there is an apparent ambiguity. What if a user or group name is a string of digits? Should the command interpret it as a user name or as an ID? POSIX requires that these commands first attempt to resolve the specified string as a name, and only when that fails, interpret it as an ID. This is troublesome when you want to specify a numeric ID, say 42, and it must work even in a pathological situation where ‘ 42 ’ is a user name that maps to some other user ID, say 1000. Simply invoking ‘ chown 42 F ’ will set F ’s owner ID to 1000—not what you intended.

GNU chown , chgrp , chroot , and id provide a way to work around this, that at the same time may result in a significant performance improvement by eliminating a database look-up. Simply precede each numeric user ID and/or group ID with a ‘ + ’, in order to force its interpretation as an integer:

chown +42 F
chgrp +$numeric_group_id another-file
chown +0:+0 /

The name look-up process is skipped for each ‘ + ’-prefixed string, because a string containing ‘ + ’ is never a valid user or group name. This syntax is accepted on most common Unix systems, but not on Solaris 10.

2.7 Sources of random data

The shuf , shred , and sort commands sometimes need random data to do their work. For example, ‘ sort -R ’ must choose a hash function at random, and it needs random data to make this selection.

By default these commands use an internal pseudo-random generator initialized by a small amount of entropy, but can be directed to use an external source with the --random-source= file option. An error is reported if file does not contain enough bytes.

For example, the device file /dev/urandom could be used as the source of random data. Typically, this device gathers environmental noise from device drivers and other sources into an entropy pool, and uses the pool to generate random bits. If the pool is short of data, the device reuses the internal pool to produce more bits, using a cryptographically secure pseudo-random number generator. But be aware that this device is not designed for bulk random data generation and is relatively slow.

/dev/urandom suffices for most practical uses, but applications requiring high-value or long-term protection of private data may require an alternate data source like /dev/random or /dev/arandom . The set of available sources depends on your operating system.

To reproduce the results of an earlier invocation of a command, you can save some random data into a file and then use that file as the random source in earlier and later invocations of the command. Rather than depending on a file, one can generate a reproducible arbitrary amount of pseudo-random data given a seed value, using for example:

get_seeded_random()
{
  seed="$1"
  openssl enc -aes-256-ctr -pass pass:"$seed" -nosalt \
    </dev/zero 2>/dev/null
}

shuf -i1-100 --random-source=<(get_seeded_random 42)
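Assuming openssl is available and a shell such as Bash that supports process substitution, a function like the above yields a reproducible stream, so two runs with the same seed select the same permutation:

```shell
get_seeded_random() {
  # Expand /dev/zero into a keyed, deterministic byte stream.
  openssl enc -aes-256-ctr -pass pass:"$1" -nosalt \
    </dev/zero 2>/dev/null
}

first=$(shuf -i1-100 --random-source=<(get_seeded_random 42))
second=$(shuf -i1-100 --random-source=<(get_seeded_random 42))
[ "$first" = "$second" ] && echo reproducible
```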

2.8 Target directory

The cp , install , ln , and mv commands normally treat the last operand specially when it is a directory or a symbolic link to a directory. For example, ‘ cp source dest ’ is equivalent to ‘ cp source dest/source ’ if dest is a directory. Sometimes this behavior is not exactly what is wanted, so these commands support the following options to allow more fine-grained control:

‘ -T ’
‘ --no-target-directory ’
Do not treat the last operand specially when it is a directory or a symbolic link to a directory. This can help avoid race conditions in programs that operate in a shared area. For example, when the command ‘ mv /tmp/source /tmp/dest ’ succeeds, there is no guarantee that /tmp/source was renamed to /tmp/dest : it could have been renamed to /tmp/dest/source instead, if some other process created /tmp/dest as a directory. However, if mv -T /tmp/source /tmp/dest succeeds, there is no question that /tmp/source was renamed to /tmp/dest .

In the opposite situation, where you want the last operand to be treated as a directory and want a diagnostic otherwise, you can use the --target-directory ( -t ) option.

‘ -t directory ’
‘ --target-directory= directory ’
Use directory as the directory component of each destination file name.

The interface for most programs is that after processing options and a finite (possibly zero) number of fixed-position arguments, the remaining argument list is either expected to be empty, or is a list of items (usually files) that will all be handled identically. The xargs program is designed to work well with this convention.

The commands in the mv family are unusual in that they take a variable number of arguments with a special case at the end (namely, the target directory). This makes it nontrivial to perform some operations, e.g., “move all files from here to ../d/”, because mv * ../d/ might exhaust the argument space, and ls | xargs ... doesn’t have a clean way to specify an extra final argument for each invocation of the subject command. (It can be done by going through a shell command, but that requires more human labor and brain power than it should.) The --target-directory ( -t ) option allows the cp , install , ln , and mv programs to be used conveniently with xargs .

For example, you can move the files from the current directory to a sibling directory, d , like this:

ls | xargs mv -t ../d --

However, this doesn’t move files whose names begin with ‘ . ’. If you use the GNU find program, you can move those files too, with this command:

find . -mindepth 1 -maxdepth 1 \
  | xargs mv -t ../d

But both of the above approaches fail if there are no files in the current directory, or if any file has a name containing a blank or some other special characters. The following example removes those limitations and requires both GNU find and GNU xargs :

find . -mindepth 1 -maxdepth 1 -print0 \
  | xargs --null --no-run-if-empty \
      mv -t ../d

The --target-directory ( -t ) and --no-target-directory ( -T ) options cannot be combined.

2.9 Trailing slashes

Some GNU programs (at least cp and mv ) allow you to remove any trailing slashes from each source argument before operating on it. The --strip-trailing-slashes option enables this behavior.

This is useful when a source argument may have a trailing slash and specify a symbolic link to a directory. This scenario is in fact rather common because some shells can automatically append a trailing slash when performing file name completion on such symbolic links. Without this option, mv , for example, (via the system’s rename function) must interpret a trailing slash as a request to dereference the symbolic link and so must rename the indirectly referenced directory and not the symbolic link. Although it may seem surprising that such behavior be the default, it is required by POSIX and is consistent with other parts of that standard.

2.10 Traversing symlinks

The following options modify how chown and chgrp traverse a hierarchy when the --recursive ( -R ) option is also specified. If more than one of the following options is specified, only the final one takes effect. These options specify whether processing a symbolic link to a directory entails operating on just the symbolic link or on all files in the hierarchy rooted at that directory.

These options are independent of --dereference and --no-dereference ( -h ), which control whether to modify a symlink or its referent.

‘ -H ’
If --recursive ( -R ) is specified and a command line argument is a symbolic link to a directory, traverse it.

‘ -L ’
In a recursive traversal, traverse every symbolic link to a directory that is encountered.

‘ -P ’
Do not traverse any symbolic links. This is the default if none of -H , -L , or -P is specified.

2.11 Treating / specially

Certain commands can operate destructively on entire hierarchies. For example, if a user with appropriate privileges mistakenly runs ‘ rm -rf / tmp/junk ’, that may remove all files on the entire system. Since there are so few legitimate uses for such a command, GNU rm normally declines to operate on any directory that resolves to / . If you really want to try to remove all the files on your system, you can use the --no-preserve-root option, but the default behavior, specified by the --preserve-root option, is safer for most purposes.

The commands chgrp , chmod and chown can also operate destructively on entire hierarchies, so they too support these options. Although, unlike rm , they don’t actually unlink files, these commands are arguably more dangerous when operating recursively on / , since they often work much more quickly, and hence damage more files before an alert user can interrupt them. Tradition and POSIX require these commands to operate recursively on / , so they default to --no-preserve-root , but using the --preserve-root option makes them safer for most purposes. For convenience you can specify --preserve-root in an alias or in a shell function.
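For example, one might add lines like these to a shell startup file (a sketch; adjust to taste):

```shell
# Refuse to act recursively on '/' even where POSIX would allow it.
alias chgrp='chgrp --preserve-root'
alias chmod='chmod --preserve-root'
alias chown='chown --preserve-root'
```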

Note that the --preserve-root option also ensures that chgrp and chown do not modify / even when dereferencing a symlink pointing to / .

2.12 Special built-in utilities

Some programs like nice can invoke other programs; for example, the command ‘ nice cat file ’ invokes the program cat by executing the command ‘ cat file ’. However, special built-in utilities like exit cannot be invoked this way. For example, the command ‘ nice exit ’ does not have a well-defined behavior: it may generate an error message instead of exiting.

Here is a list of the special built-in utilities that are standardized by POSIX 1003.1-2004.

. : break continue eval exec exit export readonly return set shift times trap unset

For example, because ‘ . ’, ‘ : ’, and ‘ exec ’ are special, the commands ‘ nice . foo.sh ’, ‘ nice : ’, and ‘ nice exec pwd ’ do not work as you might expect.

Many shells extend this list. For example, Bash has several extra special built-in utilities like history and suspend , and with Bash the command ‘ nice suspend ’ generates an error message instead of suspending.

2.13 Standards conformance

In a few cases, the GNU utilities’ default behavior is incompatible with the POSIX standard. To suppress these incompatibilities, define the POSIXLY_CORRECT environment variable. Unless you are checking for POSIX conformance, you probably do not need to define POSIXLY_CORRECT .

Newer versions of POSIX are occasionally incompatible with older versions. For example, older versions of POSIX required the command ‘ sort +1 ’ to sort based on the second and succeeding fields in each input line, but in POSIX 1003.1-2001 the same command is required to sort the file named +1 , and you must instead use the command ‘ sort -k 2 ’ to get the field-based sort. To complicate things further, POSIX 1003.1-2008 allows an implementation to have either the old or the new behavior.

The GNU utilities normally conform to the version of POSIX that is standard for your system. To cause them to conform to a different version of POSIX, define the _POSIX2_VERSION environment variable to a value of the form yyyymm specifying the year and month the standard was adopted. Three values are currently supported for _POSIX2_VERSION : ‘ 199209 ’ stands for POSIX 1003.2-1992, ‘ 200112 ’ stands for POSIX 1003.1-2001, and ‘ 200809 ’ stands for POSIX 1003.1-2008. For example, if you have a POSIX 1003.1-2001 system but are running software containing traditional usage like ‘ sort +1 ’ or ‘ tail +10 ’, you can work around the compatibility problems by setting ‘ _POSIX2_VERSION=199209 ’ in your environment.
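A sketch of selecting the 1992 behavior for a single invocation (the file name is invented):

```shell
cd "$(mktemp -d)"
printf 'a 2\nb 1\n' > f

# Under POSIX 1003.2-1992, '+1' means "sort starting at field 2".
_POSIX2_VERSION=199209 sort +1 f
# b 1
# a 2
```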

2.14 coreutils : Multi-call program

The coreutils command invokes an individual utility, either implicitly selected by the last component of the name used to invoke coreutils , or explicitly with the --coreutils-prog option. Synopsis:

coreutils --coreutils-prog=PROGRAM …

The coreutils command is not installed by default, so portable scripts should not rely on its existence.

3 Output of entire files

These commands read and write entire files, possibly transforming them in some way.

3.1 cat : Concatenate and write files

cat copies each file (‘ - ’ means standard input), or standard input if none are given, to standard output. Synopsis:

cat [ option ] [ file ]…

The program accepts the following options. Also see Common options.

‘ -A ’
‘ --show-all ’
Equivalent to -vET .

‘ -b ’
‘ --number-nonblank ’
Number all nonempty output lines, starting with 1.

‘ -e ’
Equivalent to -vE .

‘ -E ’
‘ --show-ends ’
Display a ‘ $ ’ after the end of each line.

‘ -n ’
‘ --number ’
Number all output lines, starting with 1. This option is ignored if -b is in effect.

‘ -s ’
‘ --squeeze-blank ’
Suppress repeated adjacent blank lines; output just one empty line instead of several.

‘ -t ’
Equivalent to -vT .

‘ -T ’
‘ --show-tabs ’
Display TAB characters as ‘ ^I ’.

‘ -u ’
Ignored; for POSIX compatibility.

‘ -v ’
‘ --show-nonprinting ’
Display control characters except for LFD and TAB using ‘ ^ ’ notation and precede characters that have the high bit set with ‘ M- ’.

On systems like MS-DOS that distinguish between text and binary files, cat normally reads and writes in binary mode. However, cat reads in text mode if one of the options -bensAE is used or if cat is reading from standard input and standard input is a terminal. Similarly, cat writes in text mode if one of the options -bensAE is used or if standard output is a terminal.

An exit status of zero indicates success, and a nonzero value indicates failure.

Examples:

# Output f's contents, then standard input, then g's contents.
cat f - g

# Copy standard input to standard output.
cat
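The formatting options can be combined to make invisible characters explicit; for example:

```shell
# Make tabs and line ends visible: TAB prints as ^I, end of line as $.
printf 'a\tb\n' | cat -A
# a^Ib$
```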

3.2 tac : Concatenate and write files in reverse

tac copies each file (‘ - ’ means standard input), or standard input if none are given, to standard output, reversing the records (lines by default) in each separately. Synopsis:

tac [ option ]… [ file ]…

Records are separated by instances of a string (newline by default). By default, this separator string is attached to the end of the record that it follows in the file.

The program accepts the following options. Also see Common options.

‘ -b ’
‘ --before ’
The separator is attached to the beginning of the record that it precedes in the file.

‘ -r ’
‘ --regex ’
Treat the separator string as a regular expression.

‘ -s separator ’
‘ --separator= separator ’
Use separator as the record separator, instead of newline. Note that an empty separator is treated as a zero byte; i.e., input and output items are delimited with ASCII NUL.

On systems like MS-DOS that distinguish between text and binary files, tac reads and writes in binary mode.

An exit status of zero indicates success, and a nonzero value indicates failure.

Example:

# Reverse a file character by character.
tac -r -s 'x\|[^x]'
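A simpler sketch of the default line-by-line behavior:

```shell
printf '1\n2\n3\n' | tac
# 3
# 2
# 1
```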

3.3 nl : Number lines and write files

nl writes each file (‘ - ’ means standard input), or standard input if none are given, to standard output, with line numbers added to some or all of the lines. Synopsis:

nl [ option ]… [ file ]…

nl decomposes its input into (logical) page sections; by default, the line number is reset to 1 at each logical page section. nl treats all of the input files as a single document; it does not reset line numbers or logical pages between files.

A logical page consists of three sections: header, body, and footer. Any of the sections can be empty. Each can be numbered in a different style from the others.

The beginnings of the sections of logical pages are indicated in the input file by a line containing exactly one of these delimiter strings:

‘ \:\:\: ’ start of header; ‘ \:\: ’ start of body; ‘ \: ’ start of footer.

The two characters from which these strings are made can be changed from ‘ \ ’ and ‘ : ’ via options (see below), but the pattern and length of each string cannot be changed.

A section delimiter is replaced by an empty line on output. Any text that comes before the first section delimiter string in the input file is considered to be part of a body section, so nl treats a file that contains no section delimiters as a single body section.

The program accepts the following options. Also see Common options.

‘-b style’
‘--body-numbering=style’
Select the numbering style for lines in the body section of each logical page. When a line is not numbered, the current line number is not incremented, but the line number separator character is still prepended to the line. The styles are:

‘a’ number all lines
‘t’ number only nonempty lines (default for body)
‘n’ do not number lines (default for header and footer)
‘p bre’ number only lines that contain a match for the basic regular expression bre. See Regular Expressions in The GNU Grep Manual.

‘-d cd’
‘--section-delimiter=cd’
Set the section delimiter characters to cd; the default is ‘\:’. If only c is given, the second character remains ‘:’. (Remember to protect ‘\’ or other metacharacters from shell expansion with quotes or extra backslashes.)

‘-f style’
‘--footer-numbering=style’
Analogous to --body-numbering.

‘-h style’
‘--header-numbering=style’
Analogous to --body-numbering.

‘-i number’
‘--line-increment=number’
Increment line numbers by number (default 1).

‘-l number’
‘--join-blank-lines=number’
Consider number (default 1) consecutive empty lines to be one logical line for numbering, and only number the last one. Where fewer than number consecutive empty lines occur, do not number them. An empty line is one that contains no characters, not even spaces or tabs.

‘-n format’
‘--number-format=format’
Select the line numbering format (default is rn):

‘ln’ left justified, no leading zeros
‘rn’ right justified, no leading zeros
‘rz’ right justified, leading zeros

‘-p’
‘--no-renumber’
Do not reset the line number at the start of a logical page.

‘-s string’
‘--number-separator=string’
Separate the line number from the text line in the output with string (default is the TAB character).

‘-v number’
‘--starting-line-number=number’
Set the initial line number on each logical page to number (default 1).

‘-w number’
‘--number-width=number’
Use number characters for line numbers (default 6).
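As a brief sketch of how these options combine (the input text here is arbitrary sample data), the following numbers every line with a right-justified, zero-padded three-column number and a ‘: ’ separator:

```shell
# -b a: number all body lines; -n rz: right justified with leading
# zeros; -w 3: three-column numbers; -s ': ': separator string.
printf 'alpha\nbeta\n' | nl -b a -n rz -w 3 -s ': '
# prints:
# 001: alpha
# 002: beta
```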

An exit status of zero indicates success, and a nonzero value indicates failure.

3.4 od : Write files in octal or other formats

od writes an unambiguous representation of each file (‘ - ’ means standard input), or standard input if none are given. Synopses:

od [option]… [file]…
od [-abcdfilosx]… [file] [[+]offset[.][b]]
od [option]… --traditional [file] [[+]offset[.][b] [[+]label[.][b]]]

Each line of output consists of the offset in the input, followed by groups of data from the file. By default, od prints the offset in octal, and each group of file data is a C short int ’s worth of input printed as a single octal number.

If offset is given, it specifies how many input bytes to skip before formatting and writing. By default, it is interpreted as an octal number, but the optional trailing decimal point causes it to be interpreted as decimal. If no decimal point is given and the offset begins with ‘0x’ or ‘0X’, it is interpreted as a hexadecimal number. If there is a trailing ‘b’, the number of bytes skipped will be offset multiplied by 512.

If a command is of both the first and second forms, the second form is assumed if the last operand begins with ‘ + ’ or (if there are two operands) a digit. For example, in ‘ od foo 10 ’ and ‘ od +10 ’ the ‘ 10 ’ is an offset, whereas in ‘ od 10 ’ the ‘ 10 ’ is a file name.
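The positional offset form requires a file operand; when reading from a pipe, the equivalent -j (--skip-bytes) and -N (--read-bytes) options serve the same purpose. A minimal sketch, using arbitrary sample input:

```shell
# Skip 1 input byte ('X'), read at most 2 bytes ('Y' and 'Z'), and
# print them as one-byte octal values, suppressing the offset column.
printf 'XYZ' | od -An -j 1 -N 2 -t o1
# prints the octal values of 'Y' and 'Z': 131 132
```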

The program accepts the following options. Also see Common options.

‘-A radix’
‘--address-radix=radix’
Select the base in which file offsets are printed. radix can be one of the following:

‘d’ decimal
‘o’ octal
‘x’ hexadecimal
‘n’ none (do not print offsets)

The default is octal.

‘--endian=order’
Reorder input bytes, to handle inputs with differing byte orders, or to provide consistent output independent of the endian convention of the current system. Swapping is performed according to the specified --type size and endian order, which can be ‘little’ or ‘big’.

‘-j bytes’
‘--skip-bytes=bytes’
Skip bytes input bytes before formatting and writing. If bytes begins with ‘0x’ or ‘0X’, it is interpreted in hexadecimal; otherwise, if it begins with ‘0’, in octal; otherwise, in decimal. bytes may be, or may be an integer optionally followed by, one of the following multiplicative suffixes:

‘b’  => 512 ("blocks")
‘KB’ => 1000 (KiloBytes)
‘K’  => 1024 (KibiBytes)
‘MB’ => 1000*1000 (MegaBytes)
‘M’  => 1024*1024 (MebiBytes)
‘GB’ => 1000*1000*1000 (GigaBytes)
‘G’  => 1024*1024*1024 (GibiBytes)

and so on for ‘T’, ‘P’, ‘E’, ‘Z’, and ‘Y’. Binary prefixes can be used, too: ‘KiB’=‘K’, ‘MiB’=‘M’, and so on.

‘-N bytes’
‘--read-bytes=bytes’
Output at most bytes bytes of the input. Prefixes and suffixes on bytes are interpreted as for the -j option.

‘-S bytes’
‘--strings[=bytes]’
Instead of the normal output, output only string constants: at least bytes consecutive ASCII graphic characters, followed by a zero byte (ASCII NUL). Prefixes and suffixes on bytes are interpreted as for the -j option. If bytes is omitted with --strings, the default is 3.

‘-t type’
‘--format=type’
Select the format in which to output the file data. type is a string of one or more of the below type indicator characters.
If you include more than one type indicator character in a single type string, or use this option more than once, od writes one copy of each output line using each of the data types that you specified, in the order that you specified. Adding a trailing ‘z’ to any type specification appends a display of the single byte character representation of the printable characters to the output line generated by the type specification.

‘a’ named character, ignoring high-order bit
‘c’ printable single byte character, C backslash escape or a 3 digit octal sequence
‘d’ signed decimal
‘f’ floating point (see Floating point)
‘o’ octal
‘u’ unsigned decimal
‘x’ hexadecimal

Type ‘a’ outputs things like ‘sp’ for space, ‘nl’ for newline, and ‘nul’ for a zero byte. Only the least significant seven bits of each byte are used; the high-order bit is ignored. Type ‘c’ outputs ‘ ’, ‘\n’, and ‘\0’, respectively.

Except for types ‘a’ and ‘c’, you can specify the number of bytes to use in interpreting each number in the given data type by following the type indicator character with a decimal integer. Alternately, you can specify the size of one of the C compiler’s built-in data types by following the type indicator character with one of the following characters.

For integers (‘d’, ‘o’, ‘u’, ‘x’):

‘C’ char
‘S’ short
‘I’ int
‘L’ long

For floating point (‘f’):

‘F’ float
‘D’ double
‘L’ long double

‘-v’
‘--output-duplicates’
Output consecutive lines that are identical. By default, when two or more consecutive output lines would be identical, od outputs only the first line, and puts just an asterisk on the following line to indicate the elision.

‘-w[n]’
‘--width[=n]’
Dump n input bytes per output line. This must be a multiple of the least common multiple of the sizes associated with the specified output types. If this option is not given at all, the default is 16. If n is omitted, the default is 32.

The next several options are shorthands for format specifications. GNU od accepts any combination of shorthands and format specification options. These options accumulate.

‘-a’
Output as named characters. Equivalent to ‘-t a’.

‘-b’
Output as octal bytes. Equivalent to ‘-t o1’.

‘-c’
Output as printable single byte characters, C backslash escapes or 3 digit octal sequences. Equivalent to ‘-t c’.

‘-d’
Output as unsigned decimal two-byte units. Equivalent to ‘-t u2’.

‘-f’
Output as floats. Equivalent to ‘-t fF’.

‘-i’
Output as decimal ints. Equivalent to ‘-t dI’.

‘-l’
Output as decimal long ints. Equivalent to ‘-t dL’.

‘-o’
Output as octal two-byte units. Equivalent to ‘-t o2’.

‘-s’
Output as decimal two-byte units. Equivalent to ‘-t d2’.

‘-x’
Output as hexadecimal two-byte units. Equivalent to ‘-t x2’.

‘--traditional’
Recognize the non-option label argument that traditional od accepted. The following syntax:

od --traditional [file] [[+]offset[.][b] [[+]label[.][b]]]

can be used to specify at most one file and optional arguments specifying an offset and a pseudo-start address, label. The label argument is interpreted just like offset, but it specifies an initial pseudo-address. The pseudo-addresses are displayed in parentheses following any normal address.
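Putting several of the above together, a common hex-dump idiom (the three input bytes here are arbitrary) combines a hexadecimal address radix, one-byte hexadecimal output, and the trailing ‘z’ character display:

```shell
# -A x: hexadecimal offsets; -t x1z: one-byte hex values plus a
# printable-character column at the end of each line.
printf 'ABC' | od -A x -t x1z
```

The hex values 41 42 43 appear in the data columns, and the ‘z’ suffix shows the printable characters ABC at the end of the line.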

An exit status of zero indicates success, and a nonzero value indicates failure.

3.5 base32 : Transform data into printable data

base32 transforms data read from a file, or standard input, into (or from) base32 encoded form. The base32 encoded form uses printable ASCII characters to represent binary data. The usage and options of this command are precisely the same as for base64 . See base64 invocation.

3.6 base64 : Transform data into printable data

base64 transforms data read from a file, or standard input, into (or from) base64 encoded form. The base64 encoded form uses printable ASCII characters to represent binary data. Synopses:

base64 [option]… [file]
base64 --decode [option]… [file]

The base64 encoding expands data to roughly 133% of the original. The base32 encoding expands data to roughly 160% of the original. The format conforms to RFC 4648.

The program accepts the following options. Also see Common options.

‘-w cols’
‘--wrap=cols’
During encoding, wrap lines after cols characters. This must be a positive number. The default is to wrap after 76 characters. Use the value 0 to disable line wrapping altogether.

‘-d’
‘--decode’
Change the mode of operation, from the default of encoding data, to decoding data. Input is expected to be base64 encoded data, and the output will be the original data.

‘-i’
‘--ignore-garbage’
When decoding, newlines are always accepted. During decoding, ignore unrecognized bytes, to permit distorted data to be decoded.
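A quick round trip illustrates the default (encode) and -d (decode) modes; the input string is arbitrary:

```shell
# Encode five bytes, then decode the result back to the original.
printf 'hello' | base64             # prints: aGVsbG8=
printf 'aGVsbG8=' | base64 --decode # prints: hello
```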

An exit status of zero indicates success, and a nonzero value indicates failure.

3.7 basenc : Transform data into printable data

basenc transforms data read from a file, or standard input, into (or from) various common encoding forms. The encoded form uses printable ASCII characters to represent binary data.

Synopses:

basenc encoding [option]… [file]
basenc encoding --decode [option]… [file]

The encoding argument is required. If file is omitted, basenc reads from standard input. The -w/--wrap, -i/--ignore-garbage, and -d/--decode options of this command are precisely the same as for base64. See base64 invocation.

Supported encoding s are:

‘--base64’
Encode into (or decode from with -d/--decode) base64 form. The format conforms to RFC 4648#4. Equivalent to the base64 command.

‘--base64url’
Encode into (or decode from with -d/--decode) file-and-url-safe base64 form (using ‘_’ and ‘-’ instead of ‘+’ and ‘/’). The format conforms to RFC 4648#5.

‘--base32’
Encode into (or decode from with -d/--decode) base32 form. The encoded data uses the ‘ABCDEFGHIJKLMNOPQRSTUVWXYZ234567=’ characters. The format conforms to RFC 4648#6. Equivalent to the base32 command.

‘--base32hex’
Encode into (or decode from with -d/--decode) Extended Hex Alphabet base32 form. The encoded data uses the ‘0123456789ABCDEFGHIJKLMNOPQRSTUV=’ characters. The format conforms to RFC 4648#7.

‘--base16’
Encode into (or decode from with -d/--decode) base16 (hexadecimal) form. The encoded data uses the ‘0123456789ABCDEF’ characters. The format conforms to RFC 4648#8.

‘--base2lsbf’
Encode into (or decode from with -d/--decode) binary string form (‘0’ and ‘1’) with the least significant bit of every byte first.

‘--base2msbf’
Encode into (or decode from with -d/--decode) binary string form (‘0’ and ‘1’) with the most significant bit of every byte first.

‘--z85’
Encode into (or decode from with -d/--decode) Z85 form (a modified Ascii85 form). The encoded data uses the ‘0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ.-:+=^!/*?&<>()[]{}@%$#’ characters. The format conforms to ZeroMQ spec:32/Z85. When encoding with --z85, input length must be a multiple of 4; when decoding with --z85, input length must be a multiple of 5.

Encoding/decoding examples:

$ printf '\376\117\202' | basenc --base64
/k+C
$ printf '\376\117\202' | basenc --base64url
_k-C
$ printf '\376\117\202' | basenc --base32
7ZHYE===
$ printf '\376\117\202' | basenc --base32hex
VP7O4===
$ printf '\376\117\202' | basenc --base16
FE4F82
$ printf '\376\117\202' | basenc --base2lsbf
011111111111001001000001
$ printf '\376\117\202' | basenc --base2msbf
111111100100111110000010
$ printf '\376\117\202\000' | basenc --z85
@.FaC
$ printf 01010100 | basenc --base2msbf --decode
T
$ printf 01010100 | basenc --base2lsbf --decode
*

4 Formatting file contents

These commands reformat the contents of files.

• fmt invocation: Reformat paragraph text.
• pr invocation: Paginate or columnate files for printing.
• fold invocation: Wrap input lines to fit in specified width.

4.1 fmt : Reformat paragraph text

fmt fills and joins lines to produce output lines of (at most) a given number of characters (75 by default). Synopsis:

fmt [ option ]… [ file ]…

fmt reads from the specified file arguments (or standard input if none are given), and writes to standard output.

By default, blank lines, spaces between words, and indentation are preserved in the output; successive input lines with different indentation are not joined; tabs are expanded on input and introduced on output.

fmt prefers breaking lines at the end of a sentence, and tries to avoid line breaks after the first word of a sentence or before the last word of a sentence. A sentence break is defined as either the end of a paragraph or a word ending in any of ‘ .?! ’, followed by two spaces or end of line, ignoring any intervening parentheses or quotes. Like TeX, fmt reads entire “paragraphs” before choosing line breaks; the algorithm is a variant of that given by Donald E. Knuth and Michael F. Plass in “Breaking Paragraphs Into Lines”, Software—Practice & Experience 11, 11 (November 1981), 1119–1184.

The program accepts the following options. Also see Common options.

‘-c’
‘--crown-margin’
Crown margin mode: preserve the indentation of the first two lines within a paragraph, and align the left margin of each subsequent line with that of the second line.

‘-t’
‘--tagged-paragraph’
Tagged paragraph mode: like crown margin mode, except that if the indentation of the first line of a paragraph is the same as the indentation of the second, the first line is treated as a one-line paragraph.

‘-s’
‘--split-only’
Split lines only. Do not join short lines to form longer ones. This prevents sample lines of code, and other such “formatted” text, from being unduly combined.

‘-u’
‘--uniform-spacing’
Uniform spacing. Reduce spacing between words to one space, and spacing between sentences to two spaces.

‘-width’
‘-w width’
‘--width=width’
Fill output lines up to width characters (default 75, or goal plus 10 if goal is provided).

‘-g goal’
‘--goal=goal’
fmt initially tries to make lines goal characters wide. By default, this is 7% shorter than width.

‘-p prefix’
‘--prefix=prefix’
Only lines beginning with prefix (possibly preceded by whitespace) are subject to formatting. The prefix and any preceding whitespace are stripped for the formatting and then re-attached to each formatted output line. One use is to format certain kinds of program comments, while leaving the code unchanged.
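A small sketch of the -p option (the comment text and command are invented sample input): reformat ‘#’ comment lines while leaving the non-comment line untouched:

```shell
# Only lines beginning with '#' are reformatted; 'uptime' passes
# through unchanged.
printf '# alpha beta\n# gamma\nuptime\n' | fmt -w 30 -p '#'
```

The two short comment lines are joined into one comment line within the 30-character width, with the ‘#’ prefix re-attached.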

An exit status of zero indicates success, and a nonzero value indicates failure.

4.2 pr : Paginate or columnate files for printing

pr writes each file (‘ - ’ means standard input), or standard input if none are given, to standard output, paginating and optionally outputting in multicolumn format; optionally merges all file s, printing all in parallel, one per column. Synopsis:

pr [ option ]… [ file ]…

By default, a 5-line header is printed on each page: two blank lines; a line with the date, the file name, and the page count; and two more blank lines. A footer of five blank lines is also printed. The default page_length is 66 lines. The default number of text lines is therefore 56. The text line of the header takes the form ‘date string page’, with spaces inserted around string so that the line takes up the full page_width. Here, date is the date (see the -D or --date-format option for details), string is the centered header string, and page identifies the page number. The LC_MESSAGES locale category affects the spelling of page; in the default C locale, it is ‘Page number’ where number is the decimal page number.

Form feeds in the input cause page breaks in the output. Multiple form feeds produce empty pages.

Columns are of equal width, separated by an optional string (default is ‘space’). For multicolumn output, lines will always be truncated to page_width (default 72), unless you use the -J option. For single column output no line truncation occurs by default; use the -W option to truncate lines in that case.

The program accepts the following options. Also see Common options.

‘+first_page[:last_page]’
‘--pages=first_page[:last_page]’
Begin printing with page first_page and stop with last_page. A missing ‘:last_page’ implies end of file. While estimating the number of skipped pages, each form feed in the input file results in a new page. Page counting with and without ‘+first_page’ is identical. By default, counting starts with the first page of the input file (not the first page printed). Line numbering may be altered by the -N option.

‘-column’
‘--columns=column’
With each single file, produce column columns of output (default is 1) and print columns down, unless -a is used. The column width is automatically decreased as column increases, unless you use the -W/-w option to increase page_width as well. This option might well cause some lines to be truncated. The number of lines in the columns on each page is balanced. The options -e and -i are on for multiple text-column output. Together with the -J option, column alignment and line truncation are turned off. Lines of full length are joined in a free field format and the -S option may set field separators. -column may not be used with the -m option.

‘-a’
‘--across’
With each single file, print columns across rather than down. The -column option must be given with column greater than one. If a line is too long to fit in a column, it is truncated.

‘-c’
‘--show-control-chars’
Print control characters using hat notation (e.g., ‘^G’); print other nonprinting characters in octal backslash notation. By default, nonprinting characters are not changed.

‘-d’
‘--double-space’
Double space the output.

‘-D format’
‘--date-format=format’
Format header dates using format, using the same conventions as for the command ‘date +format’. See date invocation. Except for directives, which start with ‘%’, characters in format are printed unchanged. You can use this option to specify an arbitrary string in place of the header date, e.g., --date-format="Monday morning".

The default date format is ‘%Y-%m-%d %H:%M’ (for example, ‘2001-12-04 23:59’); but if the POSIXLY_CORRECT environment variable is set and the LC_TIME locale category specifies the POSIX locale, the default is ‘%b %e %H:%M %Y’ (for example, ‘Dec  4 23:59 2001’). Timestamps are listed according to the time zone rules specified by the TZ environment variable, or by the system default rules if TZ is not set. See Specifying the Time Zone with TZ in The GNU C Library Reference Manual.

‘-e[in-tabchar[in-tabwidth]]’
‘--expand-tabs[=in-tabchar[in-tabwidth]]’
Expand tabs to spaces on input. The optional argument in-tabchar is the input tab character (default is the TAB character). The second optional argument in-tabwidth is the input tab character’s width (default is 8).

‘-f’
‘-F’
‘--form-feed’
Use a form feed instead of newlines to separate output pages. This does not alter the default page length of 66 lines.

‘-h header’
‘--header=header’
Replace the file name in the header with the centered string header. When using the shell, header should be quoted and should be separated from -h by a space.

‘-i[out-tabchar[out-tabwidth]]’
‘--output-tabs[=out-tabchar[out-tabwidth]]’
Replace spaces with tabs on output. The optional argument out-tabchar is the output tab character (default is the TAB character). The second optional argument out-tabwidth is the output tab character’s width (default is 8).

‘-J’
‘--join-lines’
Merge lines of full length. Used together with the column options -column, -a -column, or -m. Turns off -W/-w line truncation; no column alignment is used; may be used with --sep-string[=string]. -J was introduced (together with -W and --sep-string) to disentangle the old (POSIX-compliant) options -w and -s from the three column options.

‘-l page_length’
‘--length=page_length’
Set the page length to page_length (default 66) lines, including the lines of the header [and the footer].

If page_length is less than or equal to 10, the header and footer are omitted, as if the -t option had been given.

‘-m’
‘--merge’
Merge and print all files in parallel, one in each column. If a line is too long to fit in a column, it is truncated, unless the -J option is used. --sep-string[=string] may be used. Empty pages in some files (form feeds set) produce empty columns, still marked by string. The result is a continuous line numbering and column marking throughout the whole merged file. Completely empty merged pages show no separators or line numbers. The default header becomes ‘date page’ with spaces inserted in the middle; this may be used with the -h or --header option to fill up the middle blank part.

‘-n[number-separator[digits]]’
‘--number-lines[=number-separator[digits]]’
Provide digits-digit line numbering (default for digits is 5). With multicolumn output the number occupies the first digits column positions of each text column, or only each line of -m output. With single column output the number precedes each line just as -m does. Default counting of the line numbers starts with the first line of the input file (not the first line printed; compare the --pages option and the -N option). The optional argument number-separator is the character appended to the line number to separate it from the text that follows. The default separator is the TAB character. In a strict sense a TAB is always printed with single column output only. The TAB width varies with the TAB position, e.g., with the left margin specified by the -o option. With multicolumn output priority is given to ‘equal width of output columns’ (a POSIX specification). The TAB width is fixed to the value of the first column and does not change with different values of left margin. That means a fixed number of spaces is always printed in the place of the number-separator TAB. The tabification depends upon the output position.

‘-N line_number’
‘--first-line-number=line_number’
Start line counting with the number line_number at the first line of the first page printed (in most cases not the first line of the input file).

‘-o margin’
‘--indent=margin’
Indent each line with a margin margin spaces wide (default is zero). The total page width is the size of the margin plus the page_width set with the -W/-w option. A limited overflow may occur with numbered single column output (compare the -n option).

‘-r’
‘--no-file-warnings’
Do not print a warning message when an argument file cannot be opened. (The exit status will still be nonzero, however.)

‘-s[char]’
‘--separator[=char]’
Separate columns by a single character char. The default for char is the TAB character without -w, and ‘no character’ with -w. Without -s the default separator ‘space’ is set. -s[char] turns off line truncation of all three column options (-column | -a -column | -m) unless -w is set. This is a POSIX-compliant formulation.

‘-S[string]’
‘--sep-string[=string]’
Use string to separate output columns. The -S option doesn’t affect the -W/-w option, unlike the -s option which does. It does not affect line truncation or column alignment. Without -S, and with -J, pr uses the default output separator, TAB. Without -S or -J, pr uses a ‘space’ (same as -S" "). If no string argument is specified, ‘""’ is assumed.

‘-t’
‘--omit-header’
Do not print the usual header [and footer] on each page, and do not fill out the bottom of pages (with blank lines or a form feed). No page structure is produced, but form feeds set in the input files are retained. The predefined pagination is not changed. -t or -T may be useful together with other options; e.g., -t -e4 expands TAB characters in the input file to 4 spaces but makes no other changes. Use of -t overrides -h.

‘-T’
‘--omit-pagination’
Do not print the header [and footer]. In addition, eliminate all form feeds set in the input files.

‘-v’
‘--show-nonprinting’
Print nonprinting characters in octal backslash notation.

‘-w page_width’
‘--width=page_width’
Set the page width to page_width characters for multiple text-column output only (default for page_width is 72). The specified page_width is rounded down so that columns have equal width. -s[char] turns off the default page width and any line truncation and column alignment. Lines of full length are merged, regardless of the column options set. No page_width setting is possible with single column output. A POSIX-compliant formulation.

‘-W page_width’
‘--page_width=page_width’
Set the page width to page_width characters, honored with and without a column option. With a column option, the specified page_width is rounded down so that columns have equal width. Text lines are truncated, unless -J is used. Together with one of the three column options (-column, -a -column, or -m), column alignment is always used. The separator options -S or -s don’t disable the -W option. The default is 72 characters. Without -W page_width and without any of the column options, NO line truncation is used (defined to keep downward compatibility and to meet the most frequent tasks). That’s equivalent to -W 72 -J. The header line is never truncated.
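A short sketch tying the column options together (the numeric input is arbitrary): columnate a sequence down the page, suppress the header and footer with -t, and separate columns with a comma via -s:

```shell
# -3: three columns, filled down; -t: no header/footer;
# -s,: single-character column separator instead of padding.
seq 6 | pr -3 -t -s,
```

With six input lines balanced across three columns, the first row holds the first line of each column and the second row the remainder.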

An exit status of zero indicates success, and a nonzero value indicates failure.

4.3 fold : Wrap input lines to fit in specified width

fold writes each file ( - means standard input), or standard input if none are given, to standard output, breaking long lines. Synopsis:

fold [ option ]… [ file ]…

By default, fold breaks lines wider than 80 columns. The output is split into as many lines as necessary.

fold counts screen columns by default; thus, a tab may count more than one column, backspace decreases the column count, and carriage return sets the column to zero.

The program accepts the following options. Also see Common options.

‘-b’
‘--bytes’
Count bytes rather than columns, so that tabs, backspaces, and carriage returns are each counted as taking up one column, just like other characters.

‘-s’
‘--spaces’
Break at word boundaries: the line is broken after the last blank before the maximum line length. If the line contains no such blanks, the line is broken at the maximum line length as usual.

‘-w width’
‘--width=width’
Use a maximum line length of width columns instead of 80. For compatibility fold supports an obsolete option syntax -width. New scripts should use -w width instead.
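For example (the input line is arbitrary), wrapping a ten-character line at four columns:

```shell
# Break the line after every 4 columns; the remainder forms the
# final, shorter line.
printf 'abcdefghij\n' | fold -w 4
# prints:
# abcd
# efgh
# ij
```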

An exit status of zero indicates success, and a nonzero value indicates failure.

5 Output of parts of files

These commands output pieces of the input.

• head invocation: Output the first part of files.
• tail invocation: Output the last part of files.
• split invocation: Split a file into pieces.
• csplit invocation: Split a file into context-determined pieces.

5.1 head : Output the first part of files

head prints the first part (10 lines by default) of each file ; it reads from standard input if no files are given or when given a file of - . Synopsis:

head [ option ]… [ file ]…

If more than one file is specified, head prints a one-line header consisting of:

==> file name <==

before the output for each file .

The program accepts the following options. Also see Common options.

‘-c [-]num’
‘--bytes=[-]num’
Print the first num bytes, instead of initial lines. However, if num is prefixed with a ‘-’, print all but the last num bytes of each file. num may be, or may be an integer optionally followed by, one of the following multiplicative suffixes:

‘b’  => 512 ("blocks")
‘KB’ => 1000 (KiloBytes)
‘K’  => 1024 (KibiBytes)
‘MB’ => 1000*1000 (MegaBytes)
‘M’  => 1024*1024 (MebiBytes)
‘GB’ => 1000*1000*1000 (GigaBytes)
‘G’  => 1024*1024*1024 (GibiBytes)

and so on for ‘T’, ‘P’, ‘E’, ‘Z’, and ‘Y’. Binary prefixes can be used, too: ‘KiB’=‘K’, ‘MiB’=‘M’, and so on.

‘-n [-]num’
‘--lines=[-]num’
Output the first num lines. However, if num is prefixed with a ‘-’, print all but the last num lines of each file. Size multiplier suffixes are the same as with the -c option.

‘-q’
‘--quiet’
‘--silent’
Never print file name headers.

‘-v’
‘--verbose’
Always print file name headers.

‘-z’
‘--zero-terminated’
Delimit items with a zero byte rather than a newline (ASCII LF). I.e., treat input as items separated by ASCII NUL and terminate output items with ASCII NUL. This option can be useful in conjunction with ‘perl -0’ or ‘find -print0’ and ‘xargs -0’, which do the same, in order to reliably handle arbitrary file names (even those containing blanks or other special characters).

For compatibility head also supports an obsolete option syntax -[ num ][bkm][cqv] , which is recognized only if it is specified first. num is a decimal number optionally followed by a size letter (‘ b ’, ‘ k ’, ‘ m ’) as in -c , or ‘ l ’ to mean count by lines, or other option letters (‘ cqv ’). Scripts intended for standard hosts should use -c num or -n num instead. If your script must also run on hosts that support only the obsolete syntax, it is usually simpler to avoid head , e.g., by using ‘ sed 5q ’ instead of ‘ head -5 ’.
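The plain and ‘-’-prefixed count forms can be sketched as follows (the input is an arbitrary ten-line sequence):

```shell
seq 10 | head -n 3    # first three lines: 1 2 3
seq 10 | head -n -7   # all but the last seven lines: also 1 2 3
```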

An exit status of zero indicates success, and a nonzero value indicates failure.

5.2 tail : Output the last part of files

tail prints the last part (10 lines by default) of each file ; it reads from standard input if no files are given or when given a file of ‘ - ’. Synopsis:

tail [ option ]… [ file ]…

If more than one file is specified, tail prints a one-line header before the output for each file , consisting of:

==> file name <==

For further processing of tail output, it can be useful to convert the file headers to line prefixes, which can be done like:

tail … | awk '
  /^==> .* <==$/ {prefix=substr($0,5,length-8)":"; next}
  {print prefix$0}
' | …

GNU tail can output any amount of data (some other versions of tail cannot). It also has no -r option (print in reverse), since reversing a file is really a different job from printing the end of a file; BSD tail (which is the one with -r ) can only reverse files that are at most as large as its buffer, which is typically 32 KiB. A more reliable and versatile way to reverse files is the GNU tac command.

The program accepts the following options. Also see Common options.

‘ -c [+] num ’ ‘ --bytes=[+] num ’ Output the last num bytes, instead of final lines. However, if num is prefixed with a ‘ + ’, start printing with byte num from the start of each file, instead of from the end. num may be, or may be an integer optionally followed by, one of the following multiplicative suffixes: ‘ b ’ => 512 ("blocks") ‘ KB ’ => 1000 (KiloBytes) ‘ K ’ => 1024 (KibiBytes) ‘ MB ’ => 1000*1000 (MegaBytes) ‘ M ’ => 1024*1024 (MebiBytes) ‘ GB ’ => 1000*1000*1000 (GigaBytes) ‘ G ’ => 1024*1024*1024 (GibiBytes) and so on for ‘ T ’, ‘ P ’, ‘ E ’, ‘ Z ’, and ‘ Y ’. Binary prefixes can be used, too: ‘ KiB ’=‘ K ’, ‘ MiB ’=‘ M ’, and so on. ‘ -f ’ ‘ --follow[= how ] ’ Loop forever trying to read more characters at the end of the file, presumably because the file is growing. If more than one file is given, tail prints a header whenever it gets output from a different file, to indicate which file that output is from. There are two ways to specify how you’d like to track files with this option, but that difference is noticeable only when a followed file is removed or renamed. If you’d like to continue to track the end of a growing file even after it has been unlinked, use --follow=descriptor . This is the default behavior, but it is not useful if you’re tracking a log file that may be rotated (removed or renamed, then reopened). In that case, use --follow=name to track the named file, perhaps by reopening it periodically to see if it has been removed and recreated by some other program. Note that the inotify-based implementation handles this case without the need for any periodic reopening. No matter which method you use, if the tracked file is determined to have shrunk, tail prints a message saying the file has been truncated and resumes tracking from the start of the file, assuming it has been truncated to 0, which is the usual truncation operation for log files. 
When a file is removed, tail ’s behavior depends on whether it is following the name or the descriptor. When following by name, tail can detect that a file has been removed and gives a message to that effect, and if --retry has been specified it will continue checking periodically to see if the file reappears. When following a descriptor, tail does not detect that the file has been unlinked or renamed and issues no message; even though the file may no longer be accessible via its original name, it may still be growing. The option values ‘ descriptor ’ and ‘ name ’ may be specified only with the long form of the option, not with -f . The -f option is ignored if no file operand is specified and standard input is a FIFO or a pipe. Likewise, the -f option has no effect for any operand specified as ‘ - ’, when standard input is a FIFO or a pipe. With kernel inotify support, output is triggered by file changes and is generally very prompt. Otherwise, tail sleeps for one second between checks— use --sleep-interval= n to change that default—which can make the output appear slightly less responsive or bursty. When using tail without inotify support, you can make it more responsive by using a sub-second sleep interval, e.g., via an alias like this: alias tail='tail -s.1' ‘ -F ’ This option is the same as --follow=name --retry . That is, tail will attempt to reopen a file when it is removed. Should this fail, tail will keep trying until it becomes accessible again. ‘ --max-unchanged-stats= n ’ When tailing a file by name, if there have been n (default n=5) consecutive iterations for which the file has not changed, then open / fstat the file to determine if that file name is still associated with the same device/inode-number pair as before. When following a log file that is rotated, this is approximately the number of seconds between when tail prints the last pre-rotation lines and when it prints the lines that have accumulated in the new log file. 
This option is meaningful only when polling (i.e., without inotify) and when following by name. ‘ -n [+] num ’ ‘ --lines=[+] num ’ Output the last num lines. However, if num is prefixed with a ‘ + ’, start printing with line num from the start of each file, instead of from the end. Size multiplier suffixes are the same as with the -c option. ‘ --pid= pid ’ When following by name or by descriptor, you may specify the process ID, pid , of the sole writer of all file arguments. Then, shortly after that process terminates, tail will also terminate. This will work properly only if the writer and the tailing process are running on the same machine. For example, to save the output of a build in a file and to watch the file grow, if you invoke make and tail like this, then the tail process will stop when your build completes. Without this option, you would have had to kill the tail -f process yourself.

$ make >& makerr & tail --pid=$! -f makerr

If you specify a pid that is not in use or that does not correspond to the process that is writing to the tailed files, then tail may terminate long before any files stop growing, or it may not terminate until long after the real writer has terminated. Note that --pid cannot be supported on some systems; tail will print a warning if this is the case. ‘ -q ’ ‘ --quiet ’ ‘ --silent ’ Never print file name headers. ‘ --retry ’ Indefinitely try to open the specified file. This option is useful mainly when following (and otherwise issues a warning). When following by file descriptor (i.e., with --follow=descriptor ), this option only affects the initial open of the file, as after a successful open, tail will start following the file descriptor. When following by name (i.e., with --follow=name ), tail infinitely retries to re-open the given files until killed. Without this option, when tail encounters a file that doesn’t exist or is otherwise inaccessible, it reports that fact and never checks it again.
‘ -s number ’ ‘ --sleep-interval= number ’ Change the number of seconds to wait between iterations (the default is 1.0). During one iteration, every specified file is checked to see if it has changed size. When tail uses inotify, this polling-related option is usually ignored. However, if you also specify --pid= p , tail checks whether process p is alive at least every number seconds. The number must be non-negative and can be a floating-point number in either the current or the C locale. See Floating point. ‘ -v ’ ‘ --verbose ’ Always print file name headers. ‘ -z ’ ‘ --zero-terminated ’ Delimit items with a zero byte rather than a newline (ASCII LF). I.e., treat input as items separated by ASCII NUL and terminate output items with ASCII NUL. This option can be useful in conjunction with ‘ perl -0 ’ or ‘ find -print0 ’ and ‘ xargs -0 ’ which do the same in order to reliably handle arbitrary file names (even those containing blanks or other special characters).

For compatibility tail also supports an obsolete usage ‘ tail -[ num ][bcl][f] [ file ] ’, which is recognized only if it does not conflict with the usage described above. This obsolete form uses exactly one option and at most one file. In the option, num is an optional decimal number optionally followed by a size letter (‘ b ’, ‘ c ’, ‘ l ’) to mean count by 512-byte blocks, bytes, or lines, optionally followed by ‘ f ’ which has the same meaning as -f .

On systems not conforming to POSIX 1003.1-2001, the leading ‘ - ’ can be replaced by ‘ + ’ in the traditional option syntax with the same meaning as in counts, and on obsolete systems predating POSIX 1003.1-2001 traditional usage overrides normal usage when the two conflict. This behavior can be controlled with the _POSIX2_VERSION environment variable (see Standards conformance).

Scripts intended for use on standard hosts should avoid traditional syntax and should use -c num [b] , -n num , and/or -f instead. If your script must also run on hosts that support only the traditional syntax, you can often rewrite it to avoid problematic usages, e.g., by using ‘ sed -n '$p' ’ rather than ‘ tail -1 ’. If that’s not possible, the script can use a test like ‘ if tail -c +1 </dev/null >/dev/null 2>&1; then … ’ to decide which syntax to use.
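The test above can be wrapped in a small run-time shim; this is only a sketch, and the tail_last helper name is invented for illustration:

```shell
# Pick standard or traditional tail syntax at run time.
if tail -c +1 </dev/null >/dev/null 2>&1; then
  tail_last() { tail -n "$1" "$2"; }   # standard: tail -n num file
else
  tail_last() { tail -"$1" "$2"; }     # traditional: tail -num file
fi

cd "$(mktemp -d)"
seq 5 > log
tail_last 2 log                        # prints the last two lines
```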

Even if your script assumes the standard behavior, you should still beware usages whose behaviors differ depending on the POSIX version. For example, avoid ‘ tail - main.c ’, since it might be interpreted as either ‘ tail main.c ’ or as ‘ tail -- - main.c ’; avoid ‘ tail -c 4 ’, since it might mean either ‘ tail -c4 ’ or ‘ tail -c 10 4 ’; and avoid ‘ tail +4 ’, since it might mean either ‘ tail ./+4 ’ or ‘ tail -n +4 ’.

An exit status of zero indicates success, and a nonzero value indicates failure.

5.3 split : Split a file into pieces.

split creates output files containing consecutive or interleaved sections of input (standard input if none is given or input is ‘ - ’). Synopsis:

split [ option ] [ input [ prefix ]]

By default, split puts 1000 lines of input (or whatever is left over for the last section) into each output file.

The output files’ names consist of prefix (‘ x ’ by default) followed by a group of characters (‘ aa ’, ‘ ab ’, … by default), such that concatenating the output files in traditional sorted order by file name produces the original input file (except -nr/ n ). By default split will initially create files with two generated suffix characters, and will increase this width by two when the next most significant position reaches the last character. (‘ yz ’, ‘ zaaa ’, ‘ zaab ’, …). In this way an arbitrary number of output files are supported, which sort as described above, even in the presence of an --additional-suffix option. If the -a option is specified and the output file names are exhausted, split reports an error without deleting the output files that it did create.
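As a quick illustration of the default line count and suffix naming (the input file name here is arbitrary):

```shell
# Split 2500 lines with the defaults: 1000 lines per file, prefix 'x'.
cd "$(mktemp -d)"
seq 2500 > input
split input
ls x*            # xaa xab xac
wc -l < xaa      # 1000 lines in each full piece
wc -l < xac      # 500 lines left over in the last piece
```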

The program accepts the following options. Also see Common options.

‘ -l lines ’ ‘ --lines= lines ’ Put lines lines of input into each output file. If --separator is specified, then lines determines the number of records. For compatibility split also supports an obsolete option syntax - lines . New scripts should use -l lines instead.

‘ -b size ’ ‘ --bytes= size ’ Put size bytes of input into each output file. size may be, or may be an integer optionally followed by, one of the following multiplicative suffixes:

‘ b ’  => 512 ("blocks")
‘ KB ’ => 1000 (KiloBytes)
‘ K ’  => 1024 (KibiBytes)
‘ MB ’ => 1000*1000 (MegaBytes)
‘ M ’  => 1024*1024 (MebiBytes)
‘ GB ’ => 1000*1000*1000 (GigaBytes)
‘ G ’  => 1024*1024*1024 (GibiBytes)

and so on for ‘ T ’, ‘ P ’, ‘ E ’, ‘ Z ’, and ‘ Y ’. Binary prefixes can be used, too: ‘ KiB ’=‘ K ’, ‘ MiB ’=‘ M ’, and so on.

‘ -C size ’ ‘ --line-bytes= size ’ Put into each output file as many complete lines of input as possible without exceeding size bytes. Individual lines or records longer than size bytes are broken into multiple files. size has the same format as for the --bytes option. If --separator is specified, then lines determines the number of records.

‘ --filter= command ’ With this option, rather than simply writing to each output file, write through a pipe to the specified shell command for each output file. command should use the $FILE environment variable, which is set to a different output file name for each invocation of the command. For example, imagine that you have a 1TiB compressed file that, if uncompressed, would be too large to reside on disk, yet you must split it into individually-compressed pieces of a more manageable size. To do that, you might run this command:

xz -dc BIG.xz | split -b200G --filter='xz > $FILE.xz' - big-

Assuming a 10:1 compression ratio, that would create about fifty 20GiB files with names big-aa.xz , big-ab.xz , big-ac.xz , etc.
‘ -n chunks ’ ‘ --number= chunks ’ Split input into chunks output files, where chunks may be:

n        generate n files based on current size of input
k / n    only output k th of n to stdout
l/ n     generate n files without splitting lines or records
l/ k / n likewise but only output k th of n to stdout
r/ n     like ‘ l ’ but use round robin distribution
r/ k / n likewise but only output k th of n to stdout

Any excess bytes remaining after dividing the input into n chunks are assigned to the last chunk. Any excess bytes appearing after the initial calculation are discarded (except when using ‘ r ’ mode). All n files are created even if there are fewer than n lines, or the input is truncated. For ‘ l ’ mode, chunks are approximately input size / n . The input is partitioned into n equal sized portions, with the last assigned any excess. If a line starts within a partition it is written completely to the corresponding file. Since lines or records are not split even if they overlap a partition, the files written can be larger or smaller than the partition size, and even empty if a line/record is so long as to completely overlap the partition. For ‘ r ’ mode, the size of the input is irrelevant, and so it can be a pipe for example.

‘ -a length ’ ‘ --suffix-length= length ’ Use suffixes of length length . If a length of 0 is specified, this is the same as if (any previous) -a was not specified, and thus enables the default behavior, which starts the suffix length at 2, and unless -n or --numeric-suffixes= from is specified, will auto increase the length by 2 as required.

‘ -d ’ ‘ --numeric-suffixes[= from ] ’ Use digits in suffixes rather than lower-case letters. The numerical suffix counts from from if specified, 0 otherwise. from is supported with the long form option, and is used to either set the initial suffix for a single run, or to set the suffix offset for independently split inputs, and consequently the auto suffix length expansion described above is disabled.
Therefore you may also want to use option -a to allow suffixes beyond ‘ 99 ’. Note that if option --number is specified and the number of files is less than from , a single run is assumed and the minimum suffix length required is automatically determined. ‘ -x ’ ‘ --hex-suffixes[= from ] ’ Like --numeric-suffixes , but use hexadecimal numbers (in lower case). ‘ --additional-suffix= suffix ’ Append an additional suffix to output file names. suffix must not contain a slash. ‘ -e ’ ‘ --elide-empty-files ’ Suppress the generation of zero-length output files. This can happen with the --number option if a file is (truncated to be) shorter than the number requested, or if a line is so long as to completely span a chunk. The output file sequence numbers always run consecutively even when this option is specified. ‘ -t separator ’ ‘ --separator= separator ’ Use character separator as the record separator instead of the default newline character (ASCII LF). To specify ASCII NUL as the separator, use the two-character string ‘ \0 ’, e.g., ‘ split -t '\0' ’. ‘ -u ’ ‘ --unbuffered ’ Immediately copy input to output in --number r/… mode, which is a much slower mode of operation. ‘ --verbose ’ Write a diagnostic just before each output file is opened.
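For example, combining -l with numeric suffixes and a custom prefix (the prefix name part- is arbitrary):

```shell
# Three-line pieces with numeric suffixes: part-00, part-01, part-02, part-03.
cd "$(mktemp -d)"
seq 10 > in
split -d -l 3 in part-
ls part-*
cat part-03       # the short final piece, containing just "10"
```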

An exit status of zero indicates success, and a nonzero value indicates failure.

Here are a few examples to illustrate how the --number ( -n ) option works:

Notice how, by default, one line may be split onto two or more:

$ seq -w 6 10 > k; split -n3 k; head xa?
==> xaa <==
06
07
==> xab <==
08
0
==> xac <==
9
10

Use the "l/" modifier to suppress that:

$ seq -w 6 10 > k; split -nl/3 k; head xa?
==> xaa <==
06
07
==> xab <==
08
09
==> xac <==
10

Use the "r/" modifier to distribute lines in a round-robin fashion:

$ seq -w 6 10 > k; split -nr/3 k; head xa?
==> xaa <==
06
09
==> xab <==
07
10
==> xac <==
08

You can also extract just the Kth chunk. This extracts and prints just the 7th "chunk" of 33:

$ seq 100 > k; split -nl/7/33 k
20
21
22

5.4 csplit : Split a file into context-determined pieces

csplit creates zero or more output files containing sections of input (standard input if input is ‘ - ’). Synopsis:

csplit [ option ]… input pattern …

The contents of the output files are determined by the pattern arguments, as detailed below. An error occurs if a pattern argument refers to a nonexistent line of the input file (e.g., if no remaining line matches a given regular expression). After every pattern has been matched, any remaining input is copied into one last output file.

By default, csplit prints the number of bytes written to each output file after it has been created.

The types of pattern arguments are:

‘ n ’ Create an output file containing the input up to but not including line n (a positive integer). If followed by a repeat count, also create an output file containing the next n lines of the input file once for each repeat. ‘ / regexp /[ offset ] ’ Create an output file containing the current line up to (but not including) the next line of the input file that contains a match for regexp . The optional offset is an integer. If it is given, the input up to (but not including) the matching line plus or minus offset is put into the output file, and the line after that begins the next section of input. Note lines within a negative offset of a regexp pattern are not matched in subsequent regexp patterns. ‘ % regexp %[ offset ] ’ Like the previous type, except that it does not create an output file, so that section of the input file is effectively ignored. ‘ { repeat-count } ’ Repeat the previous pattern repeat-count additional times. The repeat-count can either be a positive integer or an asterisk, meaning repeat as many times as necessary until the input is exhausted.

The output files’ names consist of a prefix (‘ xx ’ by default) followed by a suffix. By default, the suffix is an ascending sequence of two-digit decimal numbers from ‘ 00 ’ to ‘ 99 ’. In any case, concatenating the output files in sorted order by file name produces the original input file, excluding portions skipped with a % regexp % pattern or the --suppress-matched option.

By default, if csplit encounters an error or receives a hangup, interrupt, quit, or terminate signal, it removes any output files that it has created so far before it exits.

The program accepts the following options. Also see Common options.

‘ -f prefix ’ ‘ --prefix= prefix ’ Use prefix as the output file name prefix. ‘ -b format ’ ‘ --suffix-format= format ’ Use format as the output file name suffix. When this option is specified, the suffix string must include exactly one printf(3) -style conversion specification, possibly including format specification flags, a field width, a precision specification, or all of these kinds of modifiers. The format letter must convert a binary unsigned integer argument to readable form. The format letters ‘ d ’ and ‘ i ’ are aliases for ‘ u ’, and the ‘ u ’, ‘ o ’, ‘ x ’, and ‘ X ’ conversions are allowed. The entire format is given (with the current output file number) to sprintf(3) to form the file name suffixes for each of the individual output files in turn. If this option is used, the --digits option is ignored. ‘ -n digits ’ ‘ --digits= digits ’ Use output file names containing numbers that are digits digits long instead of the default 2. ‘ -k ’ ‘ --keep-files ’ Do not remove output files when errors are encountered. ‘ --suppress-matched ’ Do not output lines matching the specified pattern . I.e., suppress the boundary line from the start of the second and subsequent splits. ‘ -z ’ ‘ --elide-empty-files ’ Suppress the generation of zero-length output files. (In cases where the section delimiters of the input file are supposed to mark the first lines of each of the sections, the first output file will generally be a zero-length file unless you use this option.) The output file sequence numbers always run consecutively starting from 0, even when this option is specified. ‘ -s ’ ‘ -q ’ ‘ --silent ’ ‘ --quiet ’ Do not print counts of output file sizes.
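A short sketch combining the prefix, digits, and quiet options; the line-number patterns split the input before lines 3 and 6, and the prefix part- is chosen just for illustration:

```shell
# Split an 8-line input before lines 3 and 6, with custom output names.
cd "$(mktemp -d)"
seq 8 | csplit -s -f part- -n 1 - 3 6
ls part-*         # part-0 part-1 part-2 (one-digit suffixes via -n 1)
cat part-1        # the middle section: lines 3, 4, 5
```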

An exit status of zero indicates success, and a nonzero value indicates failure.

Here is an example of its usage. First, create an empty directory for the exercise, and cd into it:

$ mkdir d && cd d

Now, split the sequence of 1..14 on lines that end with 0 or 5:

$ seq 14 | csplit - '/[05]$/' '{*}'
8
10
15

Each number printed above is the size of an output file that csplit has just created. List the names of those output files:

$ ls
xx00  xx01  xx02

Use head to show their contents:

$ head xx*
==> xx00 <==
1
2
3
4
==> xx01 <==
5
6
7
8
9
==> xx02 <==
10
11
12
13
14

Example of splitting input by empty lines:

$ csplit --suppress-matched input.txt '/^$/' '{*}'

6 Summarizing files

These commands generate just a few numbers representing the entire contents of files.

6.1 wc : Print newline, word, and byte counts

wc counts the number of bytes, characters, whitespace-separated words, and newlines in each given file , or standard input if none are given or for a file of ‘ - ’. Synopsis:

wc [ option ]… [ file ]…

wc prints one line of counts for each file, and if the file was given as an argument, it prints the file name following the counts. If more than one file is given, wc prints a final line containing the cumulative counts, with the file name total . The counts are printed in this order: newlines, words, characters, bytes, maximum line length. Each count is printed right-justified in a field with at least one space between fields so that the numbers and file names normally line up nicely in columns. The width of the count fields varies depending on the inputs, so you should not depend on a particular field width. However, as a GNU extension, if only one count is printed, it is guaranteed to be printed without leading spaces.

By default, wc prints three counts: the newline, word, and byte counts. Options can specify that only certain counts be printed. Options do not undo others previously given, so

wc --bytes --words

prints both the byte counts and the word counts.
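For example (note that the output columns follow the fixed newline/word/character/byte order, not the order of the options; the file name f is arbitrary):

```shell
cd "$(mktemp -d)"
printf 'hello world\n' > f
wc --bytes --words f     # word count first, then byte count: 2 12 f
wc f                     # default counts: newlines, words, bytes: 1 2 12 f
```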

With the --max-line-length option, wc prints the length of the longest line per file, and if there is more than one file it prints the maximum (not the sum) of those lengths. The line lengths here are measured in screen columns, according to the current locale and assuming tab positions in every 8th column.
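Because tab stops are assumed at every 8th column, a tab can contribute more than one column of width:

```shell
# 'a' occupies column 1, the tab advances to column 8, 'b' lands in column 9,
# so the reported maximum line length is 9 even though the line is 3 bytes.
printf 'a\tb\n' | wc -L
```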

The program accepts the following options. Also see Common options.

‘ -c ’ ‘ --bytes ’ Print only the byte counts. ‘ -m ’ ‘ --chars ’ Print only the character counts. ‘ -w ’ ‘ --words ’ Print only the word counts. ‘ -l ’ ‘ --lines ’ Print only the newline counts. ‘ -L ’ ‘ --max-line-length ’ Print only the maximum display widths. Tabs are set at every 8th column. Display widths of wide characters are considered. Non-printable characters are given 0 width. ‘ --files0-from= file ’ Disallow processing files named on the command line, and instead process those named in file file ; each name being terminated by a zero byte (ASCII NUL). This is useful when the list of file names is so long that it may exceed a command line length limitation. In such cases, running wc via xargs is undesirable because it splits the list into pieces and makes wc print a total for each sublist rather than for the entire list. One way to produce a list of ASCII NUL terminated file names is with GNU find , using its -print0 predicate. If file is ‘ - ’ then the ASCII NUL terminated file names are read from standard input. For example, to find the length of the longest line in any .c or .h file in the current hierarchy, do this: find . -name '*.[ch]' -print0 | wc -L --files0-from=- | tail -n1

An exit status of zero indicates success, and a nonzero value indicates failure.

6.2 sum : Print checksum and block counts

sum computes a 16-bit checksum for each given file , or standard input if none are given or for a file of ‘ - ’. Synopsis:

sum [ option ]… [ file ]…

sum prints the checksum for each file followed by the number of blocks in the file (rounded up). If more than one file is given, file names are also printed (by default). (With the --sysv option, corresponding file names are printed when there is at least one file argument.)

By default, GNU sum computes checksums using an algorithm compatible with BSD sum and prints file sizes in units of 1024-byte blocks.

The program accepts the following options. Also see Common options.

‘ -r ’ Use the default (BSD compatible) algorithm. This option is included for compatibility with the System V sum . Unless -s was also given, it has no effect. ‘ -s ’ ‘ --sysv ’ Compute checksums using an algorithm compatible with System V sum ’s default, and print file sizes in units of 512-byte blocks.

sum is provided for compatibility; the cksum program (see next section) is preferable in new applications.

An exit status of zero indicates success, and a nonzero value indicates failure.

6.3 cksum : Print CRC checksum and byte counts

cksum computes a cyclic redundancy check (CRC) checksum for each given file , or standard input if none are given or for a file of ‘ - ’. Synopsis:

cksum [ option ]… [ file ]…

cksum prints the CRC checksum for each file along with the number of bytes in the file, and the file name unless no arguments were given.

cksum is typically used to ensure that files transferred by unreliable means (e.g., netnews) have not been corrupted, by comparing the cksum output for the received files with the cksum output for the original files (typically given in the distribution).

The CRC algorithm is specified by the POSIX standard. It is not compatible with the BSD or System V sum algorithms (see the previous section); it is more robust.

The only options are --help and --version . See Common options.

An exit status of zero indicates success, and a nonzero value indicates failure.

6.4 b2sum : Print or check BLAKE2 digests

b2sum computes a 512-bit checksum for each specified file . The same usage and options as the md5sum command are supported. See md5sum invocation. In addition b2sum supports the following options.

‘ -l ’ ‘ --length ’ Change (shorten) the default digest length. This is specified in bits and thus must be a multiple of 8. This option is ignored when --check is specified, as the length is automatically determined when checking.

6.5 md5sum : Print or check MD5 digests

md5sum computes a 128-bit checksum (or fingerprint or message-digest) for each specified file .

Note: The MD5 digest is more reliable than a simple CRC (provided by the cksum command) for detecting accidental file corruption, as the chances of accidentally having two files with identical MD5 are vanishingly small. However, it should not be considered secure against malicious tampering: although finding a file with a given MD5 fingerprint is considered infeasible at the moment, it is known how to modify certain files, including digital certificates, so that they appear valid when signed with an MD5 digest. For more secure hashes, consider using SHA-2, or the newer b2sum command. See sha2 utilities. See b2sum invocation.

If a file is specified as ‘ - ’ or if no files are given md5sum computes the checksum for the standard input. md5sum can also determine whether a file and checksum are consistent. Synopsis:

md5sum [ option ]… [ file ]…

For each file , ‘ md5sum ’ outputs, by default, the MD5 checksum, a space, a flag indicating binary or text input mode, and the file name. Binary mode is indicated with ‘ * ’, text mode with ‘ ’ (space). Binary mode is the default on systems where it’s significant, otherwise text mode is the default. Without --zero , if file contains a backslash or newline, the line is started with a backslash, and each problematic character in the file name is escaped with a backslash, making the output unambiguous even in the presence of arbitrary file names. If file is omitted or specified as ‘ - ’, standard input is read.
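For example, for a 6-byte file containing ‘hello’ and a newline (the file name f is arbitrary):

```shell
cd "$(mktemp -d)"
printf 'hello\n' > f
md5sum f
# b1946ac92492d2347c6235b4d2611184  f
```

The two spaces between checksum and name reflect the text-mode flag, which is a space.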

The program accepts the following options. Also see Common options.

‘ -b ’ ‘ --binary ’ Treat each input file as binary, by reading it in binary mode and outputting a ‘ * ’ flag. This is the inverse of --text . On systems like GNU that do not distinguish between binary and text files, this option merely flags each input mode as binary: the MD5 checksum is unaffected. This option is the default on systems like MS-DOS that distinguish between binary and text files, except for reading standard input when standard input is a terminal. ‘ -c ’ ‘ --check ’ Read file names and checksum information (not data) from each file (or from stdin if no file was specified) and report whether the checksums match the contents of the named files. The input to this mode of md5sum is usually the output of a prior, checksum-generating run of ‘ md5sum ’. Three input formats are supported. Either the default output format described above, the --tag output format, or the BSD reversed mode format which is similar to the default mode, but doesn’t use a character to distinguish binary and text modes. Output with --zero enabled is not supported by --check .

For each such line, md5sum reads the named file and computes its MD5 checksum. Then, if the computed message digest does not match the one on the line with the file name, the file is noted as having failed the test. Otherwise, the file passes the test. By default, for each valid line, one line is written to standard output indicating whether the named file passed the test. After all checks have been performed, if there were any failures, a warning is issued to standard error. Use the --status option to inhibit that output. If any listed file cannot be opened or read, if any valid line has an MD5 checksum inconsistent with the associated file, or if no valid line is found, md5sum exits with nonzero status. Otherwise, it exits successfully. ‘ --ignore-missing ’ This option is useful only when verifying checksums. When verifying checksums, don’t fail or report any status for missing files. This is useful when verifying a subset of downloaded files given a larger list of checksums. ‘ --quiet ’ This option is useful only when verifying checksums. When verifying checksums, don’t generate an ‘ OK ’ message per successfully checked file. Files that fail the verification are reported in the default one-line-per-file format. If there is any checksum mismatch, print a warning summarizing the failures to standard error. ‘ --status ’ This option is useful only when verifying checksums. When verifying checksums, don’t generate the default one-line-per-file diagnostic and don’t output the warning summarizing any failures. Failures to open or read a file still evoke individual diagnostics to standard error. If all listed files are readable and are consistent with the associated MD5 checksums, exit successfully. Otherwise exit with a status code indicating there was a failure. ‘ --tag ’ Output BSD style checksums, which indicate the checksum algorithm used.
As a GNU extension, if --zero is not used, file names with problematic characters are escaped as described above, with the same escaping indicator of ‘ \ ’ at the start of the line, being used. The --tag option implies binary mode, and is disallowed with --text mode as supporting that would unnecessarily complicate the output format, while providing little benefit. ‘ -t ’ ‘ --text ’ Treat each input file as text, by reading it in text mode and outputting a ‘ ’ flag. This is the inverse of --binary . This option is the default on systems like GNU that do not distinguish between binary and text files. On other systems, it is the default for reading standard input when standard input is a terminal. This mode is never defaulted to if --tag is used. ‘ -w ’ ‘ --warn ’ When verifying checksums, warn about improperly formatted MD5 checksum lines. This option is useful only if all but a few lines in the checked input are valid. ‘ --strict ’ When verifying checksums, if one or more input line is invalid, exit nonzero after all warnings have been issued. ‘ -z ’ ‘ --zero ’ Output a zero byte (ASCII NUL) at the end of each line, rather than a newline. This option enables other programs to parse the output even when that output would contain data with embedded newlines. Also file name escaping is not used.
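A typical generate-then-verify round trip looks like this (the file names f and sums.md5 are arbitrary):

```shell
cd "$(mktemp -d)"
printf 'hello\n' > f
md5sum f > sums.md5        # record the checksum
md5sum --check sums.md5    # reports "f: OK" and exits 0
echo oops >> f             # corrupt the file
md5sum --check sums.md5    # reports "f: FAILED" and exits nonzero
```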

An exit status of zero indicates success, and a nonzero value indicates failure.

6.6 sha1sum : Print or check SHA-1 digests

sha1sum computes a 160-bit checksum for each specified file . The usage and options of this command are precisely the same as for md5sum . See md5sum invocation.

Note: The SHA-1 digest is more reliable than a simple CRC (provided by the cksum command) for detecting accidental file corruption, as the chances of accidentally having two files with identical SHA-1 are vanishingly small. However, it should not be considered secure against malicious tampering: although finding a file with a given SHA-1 fingerprint is considered infeasible at the moment, it is known how to modify certain files, including digital certificates, so that they appear valid when signed with an SHA-1 digest. For more secure hashes, consider using SHA-2, or the newer b2sum command. See sha2 utilities. See b2sum invocation.

6.7 sha2 utilities: Print or check SHA-2 digests

The commands sha224sum , sha256sum , sha384sum and sha512sum compute checksums of various lengths (respectively 224, 256, 384 and 512 bits), collectively known as the SHA-2 hashes. The usage and options of these commands are precisely the same as for md5sum and sha1sum . See md5sum invocation.
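Usage mirrors md5sum; for example, hashing standard input with sha256sum:

```shell
printf 'hello\n' | sha256sum
# 5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03  -
```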

7 Operating on sorted files

These commands work with (or produce) sorted files.

7.1 sort : Sort text files

sort sorts, merges, or compares all the lines from the given files, or standard input if none are given or for a file of ‘ - ’. By default, sort writes the results to standard output. Synopsis:

sort [ option ]… [ file ]…

Many options affect how sort compares lines; if the results are unexpected, try the --debug option to see what happened. A pair of lines is compared as follows: sort compares each pair of fields (see --key ), in the order specified on the command line, according to the associated ordering options, until a difference is found or no fields are left. If no key fields are specified, sort uses a default key of the entire line. Finally, as a last resort when all keys compare equal, sort compares entire lines as if no ordering options other than --reverse ( -r ) were specified. The --stable ( -s ) option disables this last-resort comparison so that lines in which all fields compare equal are left in their original relative order. The --unique ( -u ) option also disables the last-resort comparison.

Unless otherwise specified, all comparisons use the character collating sequence specified by the LC_COLLATE locale. A line’s trailing newline is not part of the line for comparison purposes. If the final byte of an input file is not a newline, GNU sort silently supplies one. GNU sort (as specified for all GNU utilities) has no limit on input line length or restrictions on bytes allowed within lines.

sort has three modes of operation: sort (the default), merge, and check for sortedness. The following options change the operation mode:

‘ -c ’ ‘ --check ’ ‘ --check=diagnose-first ’ Check whether the given file is already sorted: if it is not all sorted, print a diagnostic containing the first out-of-order line and exit with a status of 1. Otherwise, exit successfully. At most one input file can be given. ‘ -C ’ ‘ --check=quiet ’ ‘ --check=silent ’ Exit successfully if the given file is already sorted, and exit with status 1 otherwise. At most one input file can be given. This is like -c , except it does not print a diagnostic. ‘ -m ’ ‘ --merge ’ Merge the given files by sorting them as a group. Each input file must always be individually sorted. It always works to sort instead of merge; merging is provided because it is faster, in the case where it works.

Exit status:

0 if no error occurred 1 if invoked with -c or -C and the input is not sorted 2 if an error occurred
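The -C status, for example, can drive a conditional (the file names here are arbitrary):

```shell
cd "$(mktemp -d)"
printf 'a\nb\n' > sorted.txt
printf 'b\na\n' > unsorted.txt
sort -C sorted.txt   && echo "sorted.txt is sorted"       # exit status 0
sort -C unsorted.txt || echo "unsorted.txt is not sorted" # exit status 1
```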

If the environment variable TMPDIR is set, sort uses its value as the directory for temporary files instead of /tmp . The --temporary-directory ( -T ) option in turn overrides the environment variable.

The following options affect the ordering of output lines. They may be specified globally or as part of a specific key field. If no key fields are specified, global options apply to comparison of entire lines; otherwise the global options are inherited by key fields that do not specify any special options of their own. In pre-POSIX versions of sort , global options affect only later key fields, so portable shell scripts should specify global options first.

‘ -b ’ ‘ --ignore-leading-blanks ’ Ignore leading blanks when finding sort keys in each line. By default a blank is a space or a tab, but the LC_CTYPE locale can change this. Note blanks may be ignored by your locale’s collating rules, but without this option they will be significant for character positions specified in keys with the -k option. ‘ -d ’ ‘ --dictionary-order ’ Sort in phone directory order: ignore all characters except letters, digits and blanks when sorting. By default letters and digits are those of ASCII and a blank is a space or a tab, but the LC_CTYPE locale can change this. ‘ -f ’ ‘ --ignore-case ’ Fold lowercase characters into the equivalent uppercase characters when comparing so that, for example, ‘ b ’ and ‘ B ’ sort as equal. The LC_CTYPE locale determines character types. When used with --unique those lower case equivalent lines are thrown away. (There is currently no way to throw away the upper case equivalent instead. (Any --reverse given would only affect the final result, after the throwing away.)) ‘ -g ’ ‘ --general-numeric-sort ’ ‘ --sort=general-numeric ’ Sort numerically, 