jots on linux/UNIX system administration, bash and perl -Tom Rodman

GNU/copyleft.org

Bradley Kuhn (GPL license expert/enforcer) and lawyer Karen Sandler have a podcast that covers the copyleft licenses. Their podcast, called Free as in Freedom, has been running for years now and is hosted at http://faif.us/.

Why GNU matters, GNU history.

overview
history of GNU
The GNU Project
GNU people

a few GNU licensed projects:

https://www.gnu.org/encyclopedia/encyclopedia.html:
  Nupedia was licensed initially under its own Nupedia Open Content License, switching to the GNU Free Documentation License before Wikipedia's founding, at the urging of Richard Stallman.

http://en.wikipedia.org/wiki/History_of_Linux:
  1st Linux: In 1992, he suggested releasing the kernel under the GNU General Public License. He first announced this decision in the release notes of version 0.12.[15] In the middle of December 1992 he published version 0.99 using the GNU GPL.



License overview/summary:

GNU SCM repo hosts for FLOSS projects

Savannah

very new: Kallithea. "Software Freedom Conservancy is pleased to announce today its newest member project, Kallithea. Kallithea is a system for hosting and managing Mercurial and Git repositories. In contrast to GitHub (which serves only projects using Git, and which projects cannot host locally nor modify), Kallithea supports both Mercurial and Git, and is released freely under the GNU General Public License, version 3 (GPLv3)." http://sfconservancy.org/blog/2014/jul/15/why-kallithea/



The GNU date command (part of the coreutils package) has a wide range of options, including relative offset strings like tomorrow, yesterday, "2 weeks ago". It supports some date math, and time zone conversions.

~ $ date
Tue, Sep 28, 2010 3:32:17 PM
~ $ date --date '2 days ago'
Sun, Sep 26, 2010 3:32:28 PM
~ $ date -d '6:00pm 2 days ago'
Sun, Sep 26, 2010 6:00:00 PM
~ $ date --date '11am yesterday'
Mon, Sep 27, 2010 11:00:00 AM
~ $ date --date '6pm tomorrow'
Wed, Sep 29, 2010 6:00:00 PM
~ $ date --date "$(date --date 'next month' '+%m/1/%Y') -1 day"
Thu, Sep 30, 2010 12:00:00 AM
~ $ : above is last day in month
~ $ date --date 'now +10 days'
Fri, Oct 08, 2010 3:33:14 PM
~ $ date -d "1am +3 weeks" '+%H:%M %D'
01:00 10/19/10
~ $ date --date 'Jan 10 00:00 -0600 - 1 hour - 50 min'
Sat, Jan 09, 2010 10:10:00 PM
~ $ date --date "4:59:54 1 hour ago 53 min ago 46 sec ago"
Tue, Sep 28, 2010 3:06:08 AM
~ $ date --date 'Dec 25'
Sat, Dec 25, 2010 12:00:00 AM
~ $ date --date 'Jan 9 11pm + 1 hour'
Sun, Jan 10, 2010 12:00:00 AM
--snip
~ $ date
Fri, Nov 19, 2010 10:18:49 AM
~ $ date --date "last sunday"
Sun, Nov 14, 2010 12:00:00 AM
~ $ date --date "next tue"
Tue, Nov 23, 2010 12:00:00 AM
--snip/daylight savings
$ date --date "3/14/2010 1:59am + 2 min"
Sun, Mar 14, 2010 3:01:00 AM
$ date --date "3/14/2010 1:59am + 1 min"
Sun, Mar 14, 2010 3:00:00 AM
$ date --date "3/15/2010 1:59am + 1 min"
Mon, Mar 15, 2010 2:00:00 AM

time zone conversions, epoch sec

~ $ TZ=Asia/Calcutta date --date '7pm fri CDT'
Sat, Oct 02, 2010 5:30:00 AM
~ $ TZ=Europe/Berlin date -d "1970-01-01 UTC $(TZ=America/Chicago date --date "6:15am" '+%s') sec"
Tue, Sep 28, 2010 1:15:00 PM
~ $ date -d '1970-01-01 UTC 0 sec'
Wed, Dec 31, 1969 6:00:00 PM
~ $ TZ=America/Chicago date -d '1970-01-01 UTC 0 sec'
Wed, Dec 31, 1969 6:00:00 PM
~ $ TZ=America/New_York date -d '1970-01-01 UTC 0 sec'
Wed, Dec 31, 1969 7:00:00 PM

Many more formats are available than shown here.

~ $ date +%-m/%-d/%Y
9/28/2010
~ $ date '+%F_%H%M%S'
2010-09-28_154837
~ $ date '+%a %F %T.%N'
Tue 2010-09-28 15:49:31.362339100
~ $ date --date='25 Dec' +%j
359

For learning or reviewing complex tools that take months to master, an approach I use is to gather all the related help into a single vim edit session. For example, consider the tool 'gpg'. Here's the commandline I use to concatenate the texinfo files, man pages, and selected help webpages:

true; ( set -x
  : {{{; gpg2 --help; : }}}
  : {{{; _vwg http://www.gnupg.org/gph/en/manual.html; : }}}
  : {{{; _vwg http://www.dewinter.com/gnupg_howto/english/GPGMiniHowto.txt; : }}}
  : {{{; zcat /usr/share/info/{gnupg.info*gz,pinentry.info.gz}; : }}}
  : {{{; _m gpg2; : }}}
  : {{{; _m gpg-agent; : }}}
) 2>&1 | 2v -i my-GPG-help

'true' is there only for ease of mouse-selecting the text for copy/pasting.

'set -x' lets you see which commands ran. ': {{{' and ': }}}' introduce vim folds, which place each help topic in a separate fold or block. In vim type ":help fold".

_vwg is a tool from uqjau which uses wget and pandoc to convert a webpage to markdown.

_m is a 4 line bash function that runs 'man "$@" | col -bx', converting a man page to plain ASCII.

_2v ("to vim") is a personal bash function/filter that creates a temp file with all the output content. It also writes a 1 line vim command to a file in a fixed location, which I source from within vim. So within vim I can import the content using a 2 keystroke custom vim "leader command". In vim, type ":help leader".

ex. of the 1 line command created by _2v:

e /var/home/rodmant/.vim/tmp/2v.STDIN.my-GPG-help.SunJan04.0512.548

ex. snip of the output:

$ head /var/home/rodmant/.vim/tmp/2v.STDIN.my-GPG-help.SunJan04.0512.548
+ : '{{{'
+ gpg2 --help
gpg (GnuPG) 2.0.10
libgcrypt 1.4.4
Copyright (C) 2009 Free Software Foundation, Inc.
--snip
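A minimal sketch of the idea behind _2v (my own stand-in, not the uqjau code; the function name, directory, and command-file path are all illustrative): save STDIN to a unique temp file, and rewrite a fixed one-line vim command file that a vim leader mapping can :source.

```shell
# Hypothetical _2v-style filter; not the real uqjau implementation.
_2v_sketch() {
  local tag=${1:-STDIN}
  local dir=${TMPDIR:-/tmp}/2v
  mkdir -p "$dir"
  local out
  out=$(mktemp "$dir/2v.$tag.XXXXXX")   # unique file holding the content
  cat > "$out"                          # capture all of STDIN
  # one-line vim command, always at the same pathname, for a leader map to :source
  printf 'e %s\n' "$out" > "$dir/2v.cmd.vim"
  echo "$out" >&2                       # show where the content went
}
```

Usage would look like `date | _2v_sketch my-date`, then `:source /tmp/2v/2v.cmd.vim` from within vim.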

Function "Bc" below starts a 'bc' session, echoes commands to bc's STDIN initially to set its scale and define a function, and then uses 'cat' to connect the starting shell's STDIN to bc, so you can interact w/bc (w/the keyboard for example).

Bc()
{
  : --------------------------------------------------------------------
  : Synopsis: Wrapper for 'bc'. Defines an exponential function
  :   'p (a,b) { return (e ( l (a) * b )) }'
  : --------------------------------------------------------------------
  {
    echo 'define p (a,b) { return (e ( l (a) * b )) }'
    echo scale=3
    cat
  } | bc -lq
}

I like to minimize the number of shells I have open, so when a command takes more than 5 seconds, I background it; there are several approaches.

In the general case, consider foo to be a builtin or external command. Where noted, 'foo' could represent a complex bash command, as in:

for x in a b c; do true|false|true; done

The simplest way to background is:

foo&

This does not always work smoothly. In some shells foo will suspend itself if it writes to the terminal.

If you have permission to run 'at', you can:

echo foo|batch
# or
echo foo | at now +45 min

at 8am Sun <<\END
foo -xyz
for x in a b c; do true|false|true; done
END

setsid will run the job in a separate process group from your current shell.

setsid foo
# or:
setsid bash <<\END
{
  du /var
  date
} > /tmp/var-df 2>&1
END

The job will run in the bg, with no tty (no terminal), and no association with your shell session (it will not show up in 'jobs' output). With setsid, logging out of your shell session should never impact the job.

I have a script called ’_bg’ in uqjau, which is a wrapper for setsid.

$ head -23 $_C/_bg
#!/usr/bin/env bash
# --------------------------------------------------------------------
# Synopsis: Run simple command in background in separate
#   process session. Will not be seen by your shell as a job. Log
#   STDOUT and STDERR to file. Simple command => exactly 1 command
#   and its args.
# --------------------------------------------------------------------
# Usage:
#   ourname SIMPLE-COMMAND_HERE
#   ourname -
#   ourname
#
#   (in last 2 cases above) => shell script to run is from STDIN
#
#   (complex shell commands OK)
# --------------------------------------------------------------------
# Options:
#   -l  run in bash login shell w/ -i
#   -e  set pathname env vars per _29r_dirdefs output
#   -o  LOGPATHNAME
#   -n  JOBNAME becomes part of log name
#
#   -W  run nothing, but show recent logs
# --------------------------------------------------------------------

I seldom use '_bg'. The simple workaround I use all day is in ~/.inputrc:

"\C-xB": "\C-a(: set -x;: pwd; \C-e) < /dev/null 2>&1|ff &\C-b"
# (works for both simple and complex commands)
# For help on ~/.inputrc, see 'man bash' (Readline Initialization).

When I type:

foo\C-xB
# foo can be a complex bash commandline, with pipes, switches etc
# result is:
#   (: set -x;: pwd; foo) < /dev/null 2>&1|ff &
# Remove the leading colons above for verbose runs.

By redirecting foo's STDIN from /dev/null, you prevent it from trying to access your tty. foo's STDOUT and STDERR are piped to 'ff', which logs the job to a new tempfile; when foo completes, 'ff' beeps and rudely displays the log pathname. If you use 'ff -i baz', then 'baz' becomes part of the logfile name. ff is part of uqjau.

When one of my cron jobs fails, the wrapper script that launched and logged it places an appropriately named symbolic link to the log file in a normally empty directory. Another cron job watches that dir and sends email when a link exists, alerting you to the failed job and positioning you to see the detailed log.

The wrapper script is called 'jobmon', and is part of uqjau. jobmon has a fair number of options; for example, it supports passing in, via its args, another meta-quoted shell commandline for the script you want to run.
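A stripped-down sketch of the idea (my own stand-in, not the real jobmon; the function name, directories, and link naming are illustrative): run the job, log everything, and on failure drop a named symlink into an alert directory that another cron job can watch.

```shell
# Hypothetical jobmon-style wrapper; not the uqjau implementation.
_jobmon_sketch() {
  local name=$1; shift
  local logdir=${TMPDIR:-/tmp}/joblogs alertdir=${TMPDIR:-/tmp}/jobalerts
  mkdir -p "$logdir" "$alertdir"
  local log rc=0
  log=$(mktemp "$logdir/$name.XXXXXX")
  "$@" > "$log" 2>&1 || rc=$?
  if [ "$rc" -ne 0 ]; then
    # a watcher cron job emails when this normally-empty dir has entries
    ln -sf "$log" "$alertdir/FAILED.$name"
  fi
  return $rc
}
```

For example, `_jobmon_sketch nightly-backup /usr/local/bin/backup` would leave FAILED.nightly-backup pointing at the detailed log only when the backup exits nonzero.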

‘/usr/sbin/tmpwatch’ is typically run in cron to cleanup /tmp. Here is a man snip:

If the --atime, --ctime or --mtime options are used in combination, the
decision about deleting a file will be based on the maximum of these
times. The --dirmtime option implies ignoring atime of directories, even
if the --atime option is used.

-u, --atime
    Make the decision about deleting a file based on the file's atime
    (access time). This is the default.
    Note that the periodic updatedb file system scans keep the atime of
    directories recent.

-m, --mtime
    Make the decision about deleting a file based on the file's mtime
    (modification time) instead of the atime.

-c, --ctime
    Make the decision about deleting a file based on the file's ctime
    (inode change time) instead of the atime; for directories, make the
    decision based on the mtime.

The last two args for tmpwatch are always: <hours> <dirs>; unfortunately -u, -m, and -c all refer to the single <hours> argument.

In my personal (non root) crontab, I run a modified copy of the shell script /etc/cron.daily/tmpwatch:

$ egrep 'flags=|days=|/usr/sbin/tmpwatch' ~/bin/tmpwatch
#flags=-umc
flags=${tmpwatch_flags:--cm}
days=${tmpwatch_days:-5}
/usr/sbin/tmpwatch --verbose "$flags" $[24 * $days] "${@:-${HOME}/tmp}"

I suggest you study the timestamps in your tmp dirs to see if atimes or ctimes are being freshened by other processes; only after that should you finalize your tmpwatch <hours> argument and -u, -m, and -c switches.
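For a quick first look at those timestamps, GNU find can print all three at once; a small sketch (assumes GNU find; the function name and default dir are my own):

```shell
# Print epoch-second mtime, ctime and atime, then the pathname, for each non-dir.
# %T@, %C@ and %A@ are GNU find format directives.
_stamps() {
  find "${1:-$HOME/tmp}" ! -type d -printf '%T@ %C@ %A@ %p\n' 2>/dev/null
}
```

Sorting that output by one of the columns quickly shows whether some other process is freshening atimes or ctimes.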

Here I run my bash function ‘_tmpf_timestamps’ to look at timestamps below ~/tmp:

$ _tmpf_timestamps -c 10 ~/tmp
Total non dirs: [114] in [/var/home/rodmant/tmp]  Dirs: [127]  Empty Dirs: [7]

count of non dirs w/[mca] timestamp-age older than 'col 1'-days :

  i:  0 m:   114 c:   114 a:   114
  i:  1 m:    64 c:    46 a:    45
  i:  2 m:    54 c:    36 a:    36
  i:  3 m:    51 c:    33 a:    33
  i:  4 m:    40 c:    22 a:    22
  i:  5 m:    39 c:    21 a:    21
  i:  6 m:    39 c:    21 a:    21
  i:  7 m:    39 c:    21 a:    21
  i:  8 m:    39 c:    21 a:    21
  i:  9 m:    39 c:    21 a:    21
  i: 10 m:    39 c:    21 a:    21

My theory is that tmpwatch does not clean up sockets or named pipes (the 21 items above).

$ ls -lct $(find . ! -type d -ctime +5) |head -2
srwxr-xr-x 1 jdoe crew 0 Oct 21 07:41 ./sock=
prw-rw-rw- 1 jdoe crew 0 Feb 13  2014 ./_untartmp.dl.Irli3917/home/jdoe/s2f|
$ file ./_untartmp.dl.Irli3917/home/jdoe/s2f
./_untartmp.dl.Irli3917/home/jdoe/s2f: fifo (named pipe)

Theory now confirmed. See another man snip:

-a, --all
    Remove all file types, not just regular files, symbolic links and
    directories.

Here is my bash function ‘_tmpf_timestamps’:

/usr/local/etc/team/mke/iBASHrc $ _bashfunccodegrep _tmpf_timestamps < ./functions
_tmpf_timestamps() {
  : --------------------------------------------------------------------
  : Synopsis: Analyze timestamps of either tmpfiles or empty dirs. An
  :   aid in debugging the behavior of tmpwatch script.
  : --------------------------------------------------------------------
  : Usage: $ourname [-d] DIRPATHNAME
  : '  -d  Look only at empty dirs instead of files.'

  local opt_true=1 opt_char badOpt=
  OPTIND=1 # OPTIND=1 for 2nd and subsequent getopt invocations; 1 at shell start
  local OPT_d= OPT_c=
  while getopts dc: opt_char
  do # save info in an "OPT_*" env var.
    [[ $opt_char != \? ]] && eval OPT_${opt_char}="\"\${OPTARG:-$opt_true}\"" || badOpt=1
  done
  shift $(( $OPTIND -1 ))
  # If badOpt: If in function return 1, else exit 1:
  [[ -z $badOpt ]] || { : help; return 1 &>/dev/null || exit 1; }
  #unset opt_true opt_char badOpt
  (
    [[ -n $OPT_d ]] && action="-type d -empty" || action="-type f"
    tdir=${1:-/tmp}
    [[ -d $tdir ]] || { echo $FUNCNAME:\[$tdir] not a dir; return 1; }
    tdir=$(cd "$tdir";pwd -P) # make tdir "find friendly"
    emptydirs=$(find $tdir -type d -empty 2>/dev/null|wc -l)
    echo Total files: \[$(find $tdir -type f 2>/dev/null |wc -l)] in \[$tdir] \
      " "Dirs: \[$(find $tdir -type d 2>/dev/null|wc -l)] \
      " "Empty Dirs: \[$emptydirs]
    if [[ $emptydirs = 0 && $action =~ -type\ d\ -empty ]] ;then
      return 1
    fi
    echo
    echo "count of files w/[mca] timestamp-age less than 'col 1'-days :"
    echo
    for (( i=1; $i <= ${OPT_c:-15} ;i += 1));do
      m=$(find $tdir $action -mtime -$i 2>/dev/null|wc -l)
      c=$(find $tdir $action -ctime -$i 2>/dev/null|wc -l)
      a=$(find $tdir $action -atime -$i 2>/dev/null|wc -l)
      printf "i:%3d m:%6d c:%6d a:%6d\n" $i $m $c $a
    done |sed -e 's~^~  ~'
  )
}

--

construct similar to 'eval'

$ cmd='set -- a s d ;for f in "$@";do echo $f;done'
$ source <( echo "$cmd" )   ## Only works in bash 4.x
a
s
d

Below is a bash function '_diskfull', used to help identify large files to manually delete. The bash function _bashfunccodegrep is used to display '_diskfull' from the file "functions":

/usr/local/etc/team/mke/iBASHrc $ _bashfunccodegrep _diskfull < functions
_diskfull() {
  : _func_ok2unset_ team function
  : Size-sorted output of: cd ARG1 ... du -xSma
  : Safe to run on /, because of -x switch to du, stays in / fs -- this has been tested.
  : -S == do not include size of subdirectories
  : Advantages of -S:
  : .. dirs w/small files only at their top level get low sort rank, top level as in "GNU find's depth 1"
  : .. fewer size sum calculations
  (
    set -eu
    local fs="${1:-$PWD}" fs_bn="$(basename "$(canPath "$fs")")"
    : canPath "$fs", could be replaced with: readlink -f "$fs"
    if [[ $fs_bn == / ]] ;then
      fs_bn=ROOT
    fi
    local tmpdir=${TMPDIR:-~/tmp}
    [[ -d $tmpdir ]] || tmpdir=/tmp
    local out="$( mktemp $tmpdir/$FUNCNAME.$fs_bn.$(hostnameshort).XXXXX)"
    du_stderr=$(mktemp $tmpdir/$FUNCNAME.du_stderr.XXXXX)
    sort_stderr=$(mktemp $tmpdir/$FUNCNAME.sort_stderr.XXXXX)
    cd "$fs"
    echo $FUNCNAME: writing to $out
    (
      set -x
      : CWD: $PWD writing to $out
      nice du -xSma 2>$du_stderr|nice sort -T $tmpdir -k1,1rn 2>$sort_stderr
      : cat $du_stderr
      cat $sort_stderr
    ) > $out 2>&1
  )
  rm -f $du_stderr $sort_stderr
}

I run cron-scheduled backups to rsync.net, and tape backups to either DDS4 or LTO tapes.

GNU tar supports tar backup to a tape drive on a remote host.

From GNU tar texinfo help:

`--rsh-command=CMD'
     Notifies `tar' that it should use CMD to communicate with remote
     devices.

For example:

tar --rsh-command=/usr/bin/ssh …

The code below is available in uqjau.

I put together wrapper functions for tar and mt, in a file to be sourced by bash (uqjau file: "_tape_utils.shinc"):

$ _bashfuncgrep _tar < ./_tape_utils.shinc
_tar() {
  # --------------------------------------------------------------------
  # Synopsis: GNU tar wrapper to support remote tape drive
  # --------------------------------------------------------------------
  (set -x;sleep 5;time tar ${_use_ssh+--rsh-command=$_use_ssh} "$@")
  # _use_ssh if defined is path to ssh, typically /usr/bin/ssh
}

The script I use for backing up a linux host to (remote or local) tape w/tar is called “_backupall”, and is also part of uqjau. The bash function ‘_bashfuncgrep’ is in iBASHrc.

( applies to GNU: ln, mv, and cp )

Ex of the snafu:

~ jdoe $ ls -ldog *
lrwxrwxrwx 1 2 Mar 20 20:21 latest -> d3/
lrwxrwxrwx 1 2 Mar 20 20:20 prev -> d1/
~ jdoe $ ln -sf d2 prev   # WRONG
~ jdoe $ ls -ldog *
lrwxrwxrwx 1 2 Mar 20 20:21 latest -> d3/
lrwxrwxrwx 1 2 Mar 20 20:20 prev -> d1/
~ jdoe $ ls -ld d1/*
lrwxrwxrwx 1 2 Mar 20 20:23 d1/d2 -> d2

solution:

~ jdoe $ ln -Tsf d2 prev   # RIGHT
~ jdoe $ ls -ldog *
--snip
lrwxrwxrwx 1 2 Mar 20 20:23 prev -> d2/

Ex: renaming an existing symbolic link, redefining another existing symbolic link:

mv -Tf saz yap
# -T, --no-target-directory == treat DEST as a normal file
# Without the -T, if yap had been a symbolic link to a dir, then
# the symbolic link 'saz' would have ended up under that dir.

_cg() {
  : Regex grep of: all commands in PATH, and bash: aliases, built-ins, keywords, and functions.
  : Usage: $FUNCNAME [REGEX]
  : --http://stackoverflow.com/questions/948008/linux-command-to-list-all-available-commands-and-aliases
  :   compgen -c will list all the commands you could run.
  :   compgen -a will list all the aliases you could run.
  :   compgen -b will list all the built-ins you could run.
  :   compgen -k will list all the keywords you could run.
  :   compgen -A function will list all the functions you could run.
  :   compgen -A function -abck will list all the above in one go.
  local filter
  if [[ $# == 1 ]];then
    filter="| egrep -i '$1'"
  fi
  (set -x; eval "compgen -A function -abck ${filter:-}")
}

'_cg' is part of iBASHrc.

Output is not sorted. Example listing all commands, snipped by sed:

$ _cg 2>&1 |sed -ne 2115,2120p
pax
eu-readelf
nano
fusermount
gitk
xxd

Example grep for “pk.*er“:

$ _cg 'pk.*er'
+ PATH+=:/usr/local/7Rq/scommands/cur
+ eval 'compgen -A function -abck | egrep -i '\''pk.*er'\'''
++ compgen -A function -abck
++ egrep -i 'pk.*er'
pklogin_finder
pkinit-show-cert-guid

I try to stay in a single vim session, typically open for weeks, so the number of buffers can get out of control. Here are a couple of simple housekeeping custom .vimrc commands that I use all day long:

command Kb :b#|bdel#
command KB :b#|bw!#

where 'b#' switches to the previous buffer, then 'bdel#' deletes the buffer you were in when you ran the 'Kb' command.

Just created. Tips for improving it gratefully accepted. Thx to 'zapper' for the regex.

function Mfind(...)
  let searchStg=""
  let i = 0
  for stg in a:000
    let searchStg .= i == 0 ? ".*" . stg : "\\&.*" . stg
    let i += 1
  endfor
  exe "g;" . searchStg
endfunction
" ex: :call Mfind("red","blue","white")

Run this command as root:

dd < /dev/sda > /dev/null

This reads all blocks on the entire 'sda' device (ie the first hard drive). Only read errors are displayed -- you should have none. Be very careful whenever /dev/sda shows up on the root commandline!

A crude test, but very simple.
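The same read pass can be rehearsed safely on a throwaway image file first, which also shows how to act on dd's exit status before pointing it at a real device; a sketch (filenames are illustrative):

```shell
# Build a small scratch image, then read every block of it the same way
# you would read /dev/sda; a nonzero exit from dd means a read error.
img=$(mktemp /tmp/scratch.XXXXXX)
dd if=/dev/zero of="$img" bs=1K count=64 2>/dev/null
if dd < "$img" > /dev/null 2>/dev/null; then
  echo "clean read"
else
  echo "read errors"
fi
rm -f "$img"
```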

Related:

http://www.techrepublic.com/blog/linux-and-open-source/using-smartctl-to-get-smart-status-information-on-your-hard-drives/

For unmounted drive partitions:

man badblocks
man e2fsck
man dumpe2fs


‘cd_’ is a simple bash function to create, manage, and use a directory of symbolic links that point to your favorite directories. I create a wrapper function with a shorter name to call ‘cd_’. ‘cd_’ is part of iBASHrc.

ex. using my directory shortcut 'zz':

~ $ c zz   # Where 'c' is alias for 'cd_'.
/usr/local/7Rq/package/cur/sys-2012.03.25/shar/lib $

cd_() {
  : team function _func_ok2unset_ manages directory shortcuts
  : --------------------------------------------------------------------
  : Synopsis: cd using favorite single word nicknames, or manage
  :   related symbolic links
  : --------------------------------------------------------------------
  : $FUNCNAME , "(no args) to list all shortcuts"
  : $FUNCNAME -a SHORTCUTBASENAME, add sym link for \$PWD
  : $FUNCNAME -a REALPATH SHORTCUTBASENAME, add sym link for REALPATH
  : $FUNCNAME -d SHORTCUTBASENAME , delete
  : $FUNCNAME -h , show recently created favorites

  local dirs=~/dirs
  mkdir -p ~/dirs
  local hist=$dirs/hist
  local opt_true=1 OPTIND=1
  local OPT_l= OPT_d= OPT_a= OPT_h
  while getopts lad:h opt_char
  do # save info in an "OPT_*" env var.
    test "$opt_char" != \? && eval OPT_${opt_char}="\"\${OPTARG:-$opt_true}\"" || return 1
  done
  shift $(( $OPTIND -1 ))
  unset opt_true opt_char

  if [[ -z $OPT_l && -z $OPT_d && -z $OPT_a && $# = 1 ]];then
    if [[ -L $dirs/$1 ]] ;then
      cd "$dirs/$1"
      return 0
    elif [[ -f $dirs/$1 ]];then
      # $1 is a script that echos the dest dir.
      cd "$(source "$dirs/$1")"
    else
      echo "$FUNCNAME: [$1] not a shortcut" >&2
      return 1
    fi
  elif [[ -n $OPT_a ]];then
    if [[ $# == 2 ]];then
      (set -x;ln -Tsf "$1" "$dirs/$2") 2>&1 |tee -a $hist
      return ${PIPESTATUS[0]}
    elif [[ $# == 1 ]];then
      (set -x;ln -Tsf "$PWD" "$dirs/$1" ) 2>&1 |tee -a $hist
      return ${PIPESTATUS[0]}
    else
      echo "$FUNCNAME:oops:[$*]" >&2
      return 64
    fi
  elif [[ -n $OPT_d ]];then
    (set -x;rm -f "$dirs/$OPT_d")
    return 0
  elif [[ $OPT_l ]];then
    ls -ld $dirs/{*,.[^.]*}
    return 0
  elif [[ -n $OPT_h ]];then
    ( set -x;tail -4 $hist )
    return 0
  elif [[ $# = 0 ]];then
    ( set -x;cd "$dirs";ls -ld * ) 2>&1 |less
    return 0
  else
    echo $FUNCNAME:internal error >&2
    return 1
  fi
}

The environment for cron jobs is minimal.

This is close to the env that cron jobs see:

$ env -i USER=$USER HOME=~ PATH=/usr/bin:/bin /bin/bash -c set
BASH=/bin/bash
BASH_ARGC=()
BASH_ARGV=()
BASH_EXECUTION_STRING=set
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="3" [1]="2" [2]="25" [3]="1" [4]="release" [5]="i386-redhat-linux-gnu")
BASH_VERSION='3.2.25(1)-release'
DIRSTACK=()
EUID=--snip
GROUPS=()
HOME=--snip
HOSTNAME=--snip
HOSTTYPE=i386
IFS=$' \t\n'
MACHTYPE=i386-redhat-linux-gnu
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/usr/bin:/bin
PPID=3237
PS4='+ '
PWD=/var/home/rodmant/tmp
SHELL=/bin/bash
SHELLOPTS=braceexpand:hashall:interactive-comments
SHLVL=1
TERM=dumb
--snipped USER and UID
_=/bin/bash

This one liner is an example of running a script w/args, to see if it will run in a sparse env like a cron job:

$ env -i USER=$USER HOME=~ PATH=/usr/bin:/bin /bin/bash -c "$_C/argsshow a 'b c'"
_01:a$
_02:b c$

Swap your script and its args into the double quotes above.

A bash function I wrote for ~/.bash_profile to de-dup $PATH. It requires a bash associative array, so it works only in bash 4.x or later.

_deDupPATH()
{
  local path=$1
  if [[ ${BASH_VERSION%%.*} < 4 ]];then
    : Requires at least bash 4.x.
    echo "$path"
    return 0
  fi
  local oIFS="$IFS"
  local p nPATH
  declare -A seen
  local started=""
  IFS=:
  for p in $path;do
    if [[ -n $started ]];then
      if [[ -n ${seen["$p"]:-} ]];then
        continue
      else
        nPATH+=:"$p"
      fi
    else
      started=1
      nPATH="$p"
    fi
    seen["$p"]=1
  done
  IFS="$oIFS"
  unset seen
  echo "$nPATH"
}
# ex:
#   $ _deDupPATH a:a:z
#   a:z

New scratch files are created below ~/tmp/_ff/. A symbolic link, ~/tmp/ff.txt, is made pointing to the current scratchfile. Old scratch files are not deleted (let cron do that). I also have vim functions that call 'ff' for reading and writing.

$ ff --help
ff: Convenience cut and paste tool.
    Type, edit, pipe to an auto created, unique scratchfile.

date|ff        date > $scratchfile   # ( new $scratchfile ), pathname of $scratchfile shown on STDERR
seq 5|ff -t    seq 5|tee $scratchfile   # ( new $scratchfile )
ff -c          cat $scratchfile
ff -w          show pathname of current $scratchfile
ff -C COMMENT  prepend COMMENT to $scratchfile basename
ff -l          less $scratchfile
ff -n          edit new $scratchfile
ff -nE         new $scratchfile, echo pathname
ff -P          windows print (cygwin only)
ff ~/mystuff   cp ~/mystuff $scratchfile   # ( new $scratchfile )
ff -e          ed $scratchfile
ff -5          tail -5 $scratchfile
ff +5          head -5 $scratchfile
ff -gc         clipboard to new $scratchfile (cygwin only)
ff -pc         copy $scratchfile to clipboard (cygwin only)
ff -R -- REMOPTS REMARGS    REMOPTS and ARGS are sent to a remote instance of ff
ff -h HOST -- REMOPTS REMARGS
ff -r          use readline; read 1 line from STDIN, write new $scratchfile
ff             cat > $scratchfile   # reads STDIN from terminal ( new $scratchfile )
ff >foo        cat $scratchfile > foo

This bash function is part of uqjau.
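A toy version of the core capture behavior (my own sketch, not the real uqjau ff; the function name and the FF_DIR/FF_LINK variables are illustrative): capture STDIN to a new scratchfile and repoint a fixed symlink at it.

```shell
# Hypothetical ff-style capture; not the real uqjau tool.
_ff_sketch() {
  local dir=${FF_DIR:-$HOME/tmp/_ff}
  mkdir -p "$dir"
  local scratch
  scratch=$(mktemp "$dir/ff.XXXXXX")
  cat > "$scratch"                                   # new scratchfile from STDIN
  ln -sf "$scratch" "${FF_LINK:-$HOME/tmp/ff.txt}"   # repoint 'current' symlink
  echo "$scratch" >&2                                # announce the pathname, as ff does
}
```

With this shape, tools (and vim functions) only ever need to know the one symlink pathname to read the latest capture.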

‘rm -rf foo’ fails below, due to ‘chmod a-x foo/’ :

$ uname -ro; rpm -qf /bin/rm
2.6.18-348.6.1.el5 GNU/Linux
coreutils-5.97-34.el5_8.1
$ id -u;mkdir foo;chmod a-x foo/;ls -logd foo
4187
drw-r--r-- 2 4096 Oct 17 07:46 foo/
$ rm -rf foo; echo $?
rm: cannot chdir from `.' to `foo': Permission denied
1

Pretty sure this is intended behaviour. Last time I was able to check, Solaris did not have this "feature".

Assume you have a corrupt or faulty ~/.bash_profile that prevents you from logging in. This should position you to log in and edit it:

ssh -t localhost bash --norc -i
# or: ssh -t johndoe@foobar.com bash --norc -i
# -t forces a tty; --norc, else ~/.bashrc is sourced; -i for interactive

$ echo hi | ( set -e; <&- read foo ; echo notSeen >&2 ) bash: read: read error: 0: Bad file descriptor

$ :|(TTY=/dev/$(command \ps -o tty= -p $$);exec <$TTY;read -p '> ';echo got: $REPLY)
> hi
got: hi

$ printf "z\000j\000a"|sort -z |od -c
0000000   a  \0   j  \0   z  \0
--snip
$ printf 'hi\000ho\000'|while read -r -d "" foo ;do echo $foo;done
hi
ho

A bash function "_sa" (as in "sane"), using vim, that has been working for me:

_sa ()
{
  : --------------------------------------------------------------------
  : Synopsis: reset terminal, terminal reset, sanity reset.
  : Warning: has hardcoded: 'stty sane erase ^H', and depends on vim
  : --------------------------------------------------------------------
  [[ ${OSTYPE:-} = cygwin ]] || {
    reset; : in one case reset fixed line-drawing characters snafu
  }
  stty sane erase ^H
  vim +:q  # Has side effect of fixing up terminal.
}

$ printf '\000hi\000' > foo
$ wc -c foo
4 foo
$ echo -n "$(<foo)" | od -c
0000000   h   i
0000002

See bash 'help set'. Not sure where 'set -- ARGS' is documented.

# compare:

set -- $ans
# vs
set - $ans
# 1st better when $ans is undefined

example:

$ echo $BASH_VERSION
4.1.10(4)-release
$ set -- -foo
$ echo $1
-foo
$ set -
$ echo $1
-foo
$ set --
$ echo $1/
/

$ (set -e; foo(){ false; echo hi; }; foo )  # Works ok if in simplest form.
$ echo $?
1

# Four "not safe" examples:
$ (set -e; foo(){ false; echo hi; }; if foo; then :;fi; ! foo; foo || : ; foo && : )
hi
hi
hi
hi

Simple statements calling function ‘foo’ are not a problem, but notice that some compound statements like:

if foo ...
! foo
foo || :
foo && :

effectively disable ‘set -e’ (errexit flag) within function ‘foo’.

Consider avoiding a dependency on ‘set -e’ in your functions.
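One alternative worth sketching is explicit status checks inside the function, which behave the same whether or not the caller's context has disabled errexit ('foo' and 'false' here are illustrative stand-ins for a real function and a real failing step):

```shell
# Check each critical step explicitly instead of relying on set -e.
foo() {
  false || { echo "foo: step 1 failed" >&2; return 1; }
  echo hi   # not reached: the explicit check returns, even inside 'if foo'
}
if foo; then echo ok; fi   # prints only the error line, never 'hi' or 'ok'
```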

Related links:

Despite how negative the above threads are I think ‘set -e’ is still useful.

$ type -a _login
_login is a function
_login ()
{
  : --------------------------------------------------------------------;
  : Synopsis: Start new bash login shell using 'env -i ...' which minimizes;
  :   environment vars picked up by new shell. 'SSH_' related vars for;
  :   example will not be inherited. PATH also is fresh.;
  : --------------------------------------------------------------------;
  env -i USER=$USER HOME=$HOME TERM=$TERM $SHELL --login
}

$ (: $* is immune from set -u; set -eu;set --; echo "$# [$*]")
0 []

‘set -u’ does not apply to unexecuted code

$ (set -eu;[[ -z $PATH || -n $bar ]]; echo hi )
-bash: bar: unbound variable
$ (set -eu;[[ -n $PATH || -n $bar ]]; echo hi )  # short circuit op works, no err for nounset :->
--snip
$ ( set -eu; if false;then : $bar;fi;echo hi )
hi
$ ( set -eu; if true;then : $bar;fi;echo hi )
bash: bar: unbound variable
$

Linux ‘ps -p PID…’ supports multiple pids

$ command ps -wwH -o pid,ppid,sess,user,tty,state,bsdstart,args -p 1 4
  PID  PPID  SESS USER     TT       S  START COMMAND
    1     0     1 root     ?        S  Feb 22 init [3]
    4     1     1 root     ?        S  Feb 22 [watchdog/0]

uqjau SCRIPTS_OVERVIEW

synopsis of the best scripts

uqjau.tar.gz: >200 GPL'd bash scripts; perl scripts; bash functions…

home: http://www.nongnu.org/uqjau/README.html#README

README

interactive bash function library; scheme to manage bash login sequence

download: http://trodman.com/pub/iBASHrc.tar.gz

README

A scheme for managing ~/.{bashrc,bash_profile} and other 'rc' files. A suite of over 160 day to day sysadmin/general bash functions, 100+ aliases, and several ~/.applicationrc files; for interactive use in Linux, and Cygwin. Supports an approach for managing functions, aliases, and env vars on multiple hosts (selectively sharing code). Typically, I update the tar archive (content) at least once per week.

The login sequence is broken up into *many* separate files, that are sourced. Host specific modifications are placed in a sub directory named './noshar', so all else can be shared across hosts. Run the '_lsq' (login sequence) bash function, to get an idea of the flow.

It's ugly/messy/a bit fragile, but I use it every day, on several hosts. For now I suggest you just look it over for ideas. I have no design docs, but it is reasonably commented. Although it should be safe to install on your primary (non root) account, it's a major set of changes, so I suggest you create a new account to test it.

These startup routines have some dependencies w/my GPL'd bash shell scripts: http://www.nongnu.org/uqjau/README.html#README

Hope you get some idioms/ideas from the code.

BUGS: Some bash functions (and aliases?) are included that will not work without uqjau tools installed; and some of the tools are very provincial. I will try to move them out as time permits. 'set -e' is enabled for most of the login sequence, so any failing command will abort your login; easy to change this, but be warned!

If you like UNIX cp, 'cp -r', 'mkdir -p', and touch; you have to use Windows; and you want destination files and dirs w/normal Windows permissions...

Take a look at these bash cygwin wrapper scripts:

_wtouch

_wmkdir

_cp

_cpd

I use them frequently, so they're reasonably mature. They're part of http://trodman.com/blog/#uqjau

_cp is available in $_lib/_cp.shinc; it will also be automatically loaded as a shell function if you install http://trodman.com/pub/iBASHrc.tar.gz

The above approach applies to $_lib/_wtouch.shinc and $_lib/_wmkdir.shinc.

_cpd is a script which will be in your PATH.