Joshua's Docs - Bash / Shell - Cheatsheet

Resources

What & Link Type
SS64 Bash Reference Docs
The Bash Hackers Wiki

(previously wiki.bash-hackers.org, now archived)
Docs / Wiki / Quick Ref
Wooledge / GreyCat: Bash Reference Sheet, Full Bash Guide Cheatsheet, Guide
GNU Bash Reference Manual Docs (giant one-pager)
DevHints: Bash Cheatsheet Cheatsheet
LinuxIntro.org: Shell Scripting Tutorial Cheatsheet / Quick Reference
ExplainShell.com (breaks down any given command and explains what it does) Interactive Tool
TLDR Sh Simplified man pages, open-source, which you can read online or access directly in the terminal
CompCiv: Bash Variables and Command Substitution Guide to variables, interpolation with strings, and related features
man7: Linux man Pages Docs
The Art of the Command Line Cheatsheet
Amber - typed language that compiles to bash GitHub Repo

Formatting and Linting

Check out shellcheck for static analysis and mvdan/sh for parsing and formatting.

Breaking Up Long Commands with Backslashes / Multi-Line Commands

For really long commands (multi-line / multiline commands), breaking the text up with linebreaks can make your scripts easier to read. You can use the backslash character (\) to separate lines while still running everything as a single command:

printf "Basket:\n1 Bag Flour\n12 Eggs\n1 Carton Milk" | grep \
	-E \
	-i \
	-n \
	"[0-9] Eggs"

# Indenting works too
openssl req \
	-newkey rsa:4096 -nodes \
		-keyout domain.key \
	-x509 \
		-sha256 -days 365 \
		-subj "/C=US/ST=WA/L=SEATTLE/O=MyCompany/OU=MyDivision/CN=*.domain.test" \
		-addext "subjectAltName = DNS:*.domain.test, DNS:localhost, DNS:127.0.0.1, DNS:mail.domain.test" \
		-out domain.crt

You might be tempted to try and add inline comments between lines, but be warned that this will break your script - this does not work:

# This will NOT work
my_command \
	# comment about arg
	-E

⚠ Be careful about line-endings and whitespace when terminating lines with backslash. If you accidentally include extra whitespace or any other characters at the end of a line, you will encounter issues.

Line Ending Issues - Windows vs Unix

A common issue with file portability is line endings between windows (CRLF, \r\n) and Unix (LF, \n). Files created with one line ending can cause issues when read back on a system that expects another.

To check the line endings of a file, you can use file MY_FILE. If the output includes with CRLF line terminators, the file contains Windows-style line endings (\r\n) and might cause issues if read back / executed on a Unix system.

Your options for fixing a file with CRLF endings are:

  • Using the dos2unix program
  • Using sed:
    • sed -i 's/\r$//' YOUR_FILE (on macOS / BSD sed, use sed -i '' 's/\r$//' YOUR_FILE)
  • Manually patching the file in a text-editor (do a search and replace for \r\n with \n).

You will sometimes see people recommend that you use tr to remove carriage returns; I would not recommend this, as it will remove them everywhere in the file, which can come back to bite you if you use them in log statements or for other purposes.

Configuration

Special Shell / Bash Files:

File Conventional Usage
~/.bash_profile (or ~/.profile) Store environment variables, to be loaded once and persisted, modify $PATH, etc. Also typically contains the code to load .bashrc

Important: Is only read & executed for interactive login shells, meaning forks / child shells will not reload it. Thus, use the file for things you want to load once (like environment variables), but not things to load every time (like aliases and functions).
~/.bashrc Store aliases, functions, and pretty much anything custom OR load those customizations from external files via source. This file is itself executed via source, automatically by bash.

~/.bash_aliases Store aliases, to be loaded into every new shell
~/.bash_prompt For customizing the shell itself (appearance, etc.)

This page from Baeldung explains some of the differences between various bash startup files in greater detail than above.

If you use zsh instead of bash / sh, most of these files are not actually read by default. If you are using Oh My Zsh, you can auto-load any file ending in .zsh by placing it (or symlinking it) within the $ZSH_CUSTOM directory. Otherwise, to have zsh read them, add lines to your ~/.zshrc file that load them. For example, to load .bash_aliases, you could add:

[ -f ~/.bash_aliases ] && source ~/.bash_aliases

Or, for a slightly cleaner approach, store the path as a variable first, so it is not repeated.
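E.g., a minimal sketch (assuming the file lives in your home directory):

# Store the path once, then test and source it
aliases_path="$HOME/.bash_aliases"
[ -f "$aliases_path" ] && source "$aliases_path"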

Dotfiles

See my cheatsheet: Dotfiles

Also, my dotfiles can be found at joshuatz/dotfiles.

Aliases

To create an alias, use the alias command:

alias alias_name="actual_command_that_executes"

For example, if we have some favorite flags to use with ls:

alias list="ls -halt"

If you need an alias that accepts arguments and then passes them to the middle of another command, you are better off writing a function. There are some ways to accomplish this with just aliases, but they are less straightforward.
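For example, a minimal sketch of a function standing in for an alias (the serve_dir name and python3 usage are just for illustration):

# An alias can't easily inject arguments into the middle of a command,
# but a function can:
serve_dir() {
	python3 -m http.server "${2:-8000}" --directory "${1:-.}"
}

# Usage: serve_dir ./public 3000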

💡 Aliases don't have to go in .aliases file, that is just the most common place for them

Functions

# Simplest form
my_function() {
	# body
}

# You can use the function keyword, but don't have to and this is less portable
function my_function() {
	# body
}

Take care when writing functions to not mix up exit and return; exit will exit the entire script, usually closing your terminal window if unhandled, whereas return is what you usually want and can still be used to return exit codes (e.g. return 1 or return 0).

Both when executing a function from within a shell script, as well as from the terminal, you execute the function by calling it by name, followed by arguments, without parentheses (unlike most other languages):

say_hi() {
	NAME=$1
	echo "Hello ${NAME}"
}

say_hi "Fred"

To execute a function from the command line, the only extra step is that you need to read the script in via source first. E.g.:

source custom_functions.sh
my_function

NOOP / No-Operation Use in Shell Scripts

For something that you can call with arbitrary inputs, without throwing an error, : is generally the most portable option. However, it is far from the only option:

  • :
  • true
  • false (if you always want a non-zero exit)
  • For piping and preserving output, you could use:
    • cat
    • tee
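For example, : is handy as a placeholder body, since bash does not allow empty blocks:

if [[ -e ./cache ]]; then
	: # TODO: handle cache hit
else
	echo "No cache found"
fi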

Processing User Input

Confirmation Prompts and Selection Prompts

There are multiple ways to do shell confirmation prompts / selection prompts / list pickers (shellhacks, SO) (e.g. Continue? Yes / No).

The easiest to use is often the select built-in:

# Note: $doc_options is intentionally unquoted below, so that word
# splitting turns each filename into a separate choice
doc_options=$(ls ./docs)
PS3="Select document:"
select doc in $doc_options; do
	if [[ -n $doc ]]; then
		break
	fi
done
echo "You selected $doc"

# You can use arrays too:
doc_options=("README" "Getting Started" "Interfaces")
select doc in "${doc_options[@]}"; do
	if [[ -n $doc ]]; then
		break
	fi
done
echo "You selected $doc"

However, it is not very flexible, so it is worth discussing alternative options:

Here is a somewhat standard approach:

while true; do
	read -p "Continue? [Yy]es, [Nn]o?" yn
	case $yn in
		[Yy]* ) break;;
		[Nn]* ) exit;;
	esac
done
echo "You made it through!"

# If you want to do more things within each case, just use standard operators before. E.g.:
# [Yy]* ) echo $PWD && break;;

If you want to include a line break with read, one approach is to use the ANSI-C quoting style with a literal line-break: read -p $'question?\n' answer

You can trade break and exit for the various actions you want to perform, but keep in mind you will need break at some point to continue on with the script and exit the loop.

The purpose of using the while true loop is that it covers the edge-case where the user types something that matches neither of the two cases - it traps them until they do.

If you need a more simple "Press _ to continue" confirmation prompt, you don't need to involve a while-loop to check output:

read -p "Press enter to continue"

read -n 1 -s -r -p "Press any key to continue"

To combine a check for enter with other options, check for an empty string:

use_feature=false
while true; do
	read -p $'Would you like to use that feature? [Yy]es / ENTER, [Nn]o, [Cc]ancel\n' answer
	case $answer in
		[Yy]* ) use_feature=true && break;;
		"" ) use_feature=true && break;;
		[Nn]* ) use_feature=false && break;;
		[Cc]* ) exit;;
	esac
done
echo "use_feature = ${use_feature}

Choice Selection with FZF

If you are writing a lot of scripts that require the user to make a choice between different options, FZF is a great tool to have in your toolbox (and for other use-cases as well).

You pipe options to it via stdin, and then can get the user's choice via stdout:

recent_commits=$(git log --pretty=format:"%h %s" -n 20)

# Use fzf to interactively select a commit
picked_commit=$(echo "$recent_commits" | fzf --reverse --ansi)

Processing Flags, Options, and Arguments

Whether you are receiving arguments to a shell script itself, or passing to a function, there are a few common tools for parsing arguments and flags within bash. The popular solution is the getopts command. The common pattern for usage looks something like this:

# getopts OPTSPEC VARIABLE [arg]
# sometimes `OPTSPEC` is called `OPTSTRING`

while getopts ":dvf:" opt; do
	case "${opt}" in
		d) DEBUG=true ;;
		v) VERBOSE=true ;;
		f) FILE="${OPTARG}" ;;
		\?)
			echo "Invalid option: -${OPTARG}" >&2
			;;
	esac
done

The leading : in the OPTSTRING of the above example suppresses the built-in error reporting for invalid flags. Leave it out if you don't want to suppress these.

🤔 getopts vs getopt? Contentious topic, but getopts is built-in, while getopt is not. Unfortunately getopts does not support long argument names (easily), but getopt does. This post summarizes some more differences.

If you are passing arguments and/or flags to a function within a shell script, make sure you call the function like myFunction "$@".
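E.g., a minimal sketch of forwarding a script's arguments into a function (parse_args is a hypothetical name):

parse_args() {
	while getopts ":dv" opt; do
		case "${opt}" in
			d) DEBUG=true ;;
			v) VERBOSE=true ;;
		esac
	done
}

# Forward all of the script's arguments, separation preserved
parse_args "$@"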

getopts: Parsing Long Options without using getopt

As previously mentioned, although it is nice that getopts is "built-in", it doesn't support parsing long options (e.g. --file instead of -f). However, there are some workarounds that don't require getopt.

Example of manual parsing code - using while, $#, and shift
VERBOSE=false
while [[ ! $# -eq 0 ]]
do
	case "$1" in
		--verbose|-v)
			VERBOSE=true
			;;
		# You could leave off this case if you want to allow extra options
		*)
			echo "invalid option ${1}"
			;;
	esac
	shift
done

A downside to the above approach is that it uses shift to deplete the arg list until it is empty. This is undesirable if you need to pass along the arg list to another place. If that is the case, you can use a standard for-loop (see below), although that also makes things trickier if you need to extract the n+1 value for a match.

Kudos to Jon Almeida's blog post and this StackExchange answer for pointing in the right direction. This is also similar code to that produced procedurally by Argbash.

Example of manual parsing code - which preserves non-matched values

You can use:

  • A separate array (preserved_values=()), collecting anything that hits the wildcard *) case - double-shift on matches, and single-shift on *) (rather than a single shift outside the case); see the sketch below
    • Doing the double-shift on match and single-shift on wildcard is actually safer than shifting once on both plus once outside of the case, because it avoids the edge-case where it tries to shift an already empty argument list
  • A separate counter / "done" variable
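A minimal sketch of the first approach (the --file flag is just for illustration):

preserved_values=()
FILE=""
while [[ ! $# -eq 0 ]]; do
	case "$1" in
		--file|-f)
			FILE="$2"
			# Double-shift: consume the flag AND its value
			shift 2
			;;
		*)
			# No match - preserve the value, single-shift
			preserved_values+=("$1")
			shift
			;;
	esac
done
echo "FILE = ${FILE}"
echo "Leftover args: ${preserved_values[*]}"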
Example of manual parsing code - using for and do
for var in "$@"
do
	echo "var = ${var}"
done

The below code is very similar, but exploits the fact that for arg gets evaluated like for arg in "$@":

for arg
do printf "arg = ${arg}\n"
done

Checking Args with Pattern Matching

# Problematic - requires space on both sides
if [[ $* == *--log* ]]; then
	:
fi

# Problematic - matches `--log` AND `--log-level`
if [[ $* == *--log ]]; then
	:
fi

# Fairly robust
if (echo "$*" | grep -qE '(^|\s)--log($|\s)'); then
	:
fi

# Using a for-loop
check_args_for_value() {
	local search_value=$1
	shift
	local arg_arr=("$@")
	for arg in "${arg_arr[@]}"; do
		if (echo "$arg" | grep -qx -- "$search_value"); then
			return 0
		fi
	done
	return 1
}
# Or
check_args_for_value() {
	local search_value=$1
	shift
	for arg in "$@"; do
		if (echo "$arg" | grep -qx -- "$search_value"); then
			return 0
		fi
	done
	return 1
}

Current directory:

echo $PWD
# > /users/Joshua/projects

With just the folder name:

basename "$PWD"

Including the "Hash-Bang" / "shebang"

#!/bin/bash
  • ^ Should go at the top of sh files.

Sometimes you will see flags included as part of the shebang. For example, you can use -e (errexit) to have the script exit immediately if a command fails:

#!/bin/bash -e

For portability, this is the preferred format:

#!/usr/bin/env bash
set -e

Commenting

# My comment

Logic / flow

📄 Wooledge Guide

Test

Before using advanced branching logic, you should know that the shell has a built-in test check - it evaluates an expression / condition and yields a success or failure exit status that can be used in logic flows. Simply enclose the expression/condition in brackets:

test check-this-condition

# alternative syntax:

[[ check-this-condition ]]

It is generally recommended to use double brackets (the newer version), even though [ check-this-condition ] is a roughly equivalent syntax.

There are lots of different conditionals you can test against.

The man page for test is pretty useful for a quick reference: man test

For example:

  • Is variable set (has value)?
    • [[ -n $MY_VAR ]]
  • Does file exist?
    • [[ -e file.txt ]]
  • Does directory exist?
    • [[ -d ./dir/ ]]

To invert / negate the test expression, use ! (exclamation mark).

For negation, where you place the ! mark is important. If you place it outside the brackets, like ! [[ condition(s) ]], then it negates the entire clause inside the brackets after it is evaluated. If you place it inside, it negates individual sections of the clause, before evaluating. This is the same as how logic generally works with parentheses in most programming languages.

If you are just trying to check whether a command passed or failed, use if with the command directly, instead of test / [].
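For example (file name is just for illustration):

# `if` branches on the command's exit code directly - no brackets needed
if grep -q "TODO" ./notes.txt; then
	echo "Found a TODO"
fi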

Test - Edge-Cases with bash test checks and confusing behavior

Boolean Checks

One very important thing to note about test is that it does not do any sort of boolean coercion for you, which can be rather confusing as booleans don't really exist in bash to begin with:

Test Statement Evaluates To
[[ false ]] Success / true
test false Success / true
[[ 0 ]] Success / True
[[ 1 ]] Success / True

Numerical Coercion and Zero-Equality Checks

Although test doesn't do boolean coercion, it does do some very funny things with numerical coercion. Most importantly, empty strings in bash have a numerical value of zero, and unset variables resolve to the same thing.

So, in effect, $empty_or_unset -eq 0 becomes 0 -eq 0, which always evaluates to true.

This can lead to some very confusing behavior:

Test Statement Evaluates To
[[ "" -eq 0 ]] Success / true
unset my_var && [[ $my_var -eq 0 ]] Success / true
unset my_var && [[ "$my_var" -eq 0 ]] Success / true
unset my_var && [[ $my_var -eq "" ]] Success / true
unset my_var && [[ "$my_var" -eq "" ]] Success / true

Branching / Conditional Execution

Great guides

Basic example:

if [[ -n $NAME ]]; then
    echo "Hello ${NAME}!"
else
    echo "I don't know your name :("
fi

For an else if block, use elif (i.e. elif COND; then)

If you want to evaluate a command as part of an if statement, but in a more isolated approach, you can wrap the command in parentheses to execute it in a subshell. Like so:

if (which node); then
	echo "Node version = $(node --version)"
fi

To silence the output of the command on success, you can use > /dev/null to redirect the output.

E.g.,

if (which node > /dev/null); then
	echo "Node version = $(node --version)"
fi

Finally, you can also create one-liner / inlined if-then clauses:

[[ $SHELL == "/bin/zsh" ]] && echo "You are using ZSH"

Early Returns

Like many programming languages, bash has a return keyword, which can be used inside functions.

Technically, you can also use return in a script that is run through source (or .), but best practice would probably be to only use it within functions.

If you want to "return early" within a script (outside of a function), you can use the exit keyword.

Case

📄 case guide from Bash-Hackers

📄 case guide from Wooledge

Important: For default usage (e.g. ;; endings, not ;;&), case stops matching patterns as soon as one is successful.
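A minimal example:

fruit="apple"
case "$fruit" in
	apple|pear) echo "pome fruit" ;;
	a*) echo "never reached - matching already stopped" ;;
	*) echo "something else" ;;
esac
# > pome fruit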

Ternary Operator / Conditional Expression

In many languages, there is support for something called the ternary operator, or conditional operator. It usually looks like this (this is not bash, just an example):

// JavaScript:

// Do something based on ternary
userIsAdmin ? print('Welcome Admin!') : print('Welcome!')

// Assign based on ternary
const userType = userIsAdmin ? 'Admin' : 'User';

In bash, this can be accomplished by the syntax of TEST && DO_IF_TRUE || DO_IF_FALSE. Like this:

$user_is_admin && echo "Welcome Admin!" || echo "Welcome!"

🚨 Warning: This only works as long as the thing you want to do if the conditional is true always exits with exit code 0 (success)

Helpful S/O's for understanding the above: 1, 2

For assignment, just wrap the entire execution in a command substitution parenthesis block:

user_type=$($user_is_admin && echo "Admin" || echo "User")

# You can use more advanced conditional checks

LOG_OUT=$([[ -n $LOG_PATH ]] && echo $LOG_PATH || echo "/dev/stdout")
echo "Starting program..." >> $LOG_OUT

Exit Codes

Checking Exit Codes

You can use $? to get the last exit status. Here are some examples.

Reminder: Anything other than 0 is an error ("non-zero exit code").
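For example:

stat does_not_exist.txt 2>/dev/null
echo $?
# > 1 (non-zero, the file does not exist)

stat . > /dev/null
echo $?
# > 0 (success)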

Storing Exit Codes

You can't use command substitution ($()) to store exit codes, because it will instead capture stdout. Instead, you can run your command in a subshell and then either directly check $? or store it into a variable.

E.g.:

(command_to_capture_exit_code)
captured_exit_code=$?

# Or, something like
(command_to_capture_exit_code)
has_error=$([[ $? -ne 0 ]] && echo "yes" || echo "no")

Ignoring Errors and Non-Zero Exit Codes

If you want to ignore a non-zero exit code and have it not halt your program, there are a few different options:

# Use sequential execution, regardless of success
(command_that_might_fail; exit 0)

# Use standard boolean logic
(command_that_might_fail || true)

This works even with scripts, or sourced scripts, that use set -e. For example:

# script.sh
set -e
fail() {
	exit 1
}

# Main shell
source ./script.sh
fail || true

Try / Catch and Error Handling

For skipping errors entirely, see this section.

If you just want to perform a certain action if a command fails, you can wrap it in an if block:

if ! [[ $(stat myfile.txt) ]]; then
	echo "File does not exist"
fi

# You can also skip the `[[ ]]` test syntax and use regular parenthesis,
# but note that this won't silence stdout from the command, in the same way
# that command substitution does

if ! (stat myfile.txt); then
	echo "File does not exist"
fi

Note that this will prevent an error from forcing an early return / exit, but will not suppress any stderr / error messages from appearing. To do that, you can use 2>/dev/null to redirect the stderr output and silence it.

Example:

if ! [[ $(stat myfile.txt 2>/dev/null) ]]; then
	echo "File does not exist"
fi

Finally, another approach you can use is executing the command in a subshell, and having it exit on error. E.g.:

echo "Trying to list dir"
(
	cd "$dir_that_might_not_exist" || exit
)
echo "Done"

However, using a subshell has a major disadvantage of not being able to touch the parent environment - you can't set variable values of the parent shell from the subshell:

my_num=1
(
	my_num=5
)
echo "my_num = $my_num"
# > "my_num = 1"

Short Circuit Logic

Short Circuit Assignment (also for defaults)

In certain languages, you can use short circuit assignment to assign a value to a variable, with a fallback (aka default) if it is undefined. Something like this:

const name = userInput || "New User"

In bash, there are two main ways to accomplish this kind of task. The first is with shell parameter expansion:

name=${user_input:-"New User"}

# Or, if we want to re-use the same variable
: ${user_input:="New User"}

Shell parameter expansion can also be used with local variables:

list_dir() {
	local dir_to_list="${1:-.}"
	ls "$dir_to_list"
}

The second way is to use a conditional expression, although this is not as concise:

name=$([[ -n $user_input ]] && echo $user_input || echo "New User")

Short Circuit If-Else

This works the same as the trick for emulating a ternary operator - use the logical OR operator between commands:

TRY_A || TRY_B

🚨 Warning: Same as the ternary trick, this only works if the first command always exits with exit code 0 for success

You will often see curly braces used with this shortcut, like so:

{ TRY_A; } || { TRY_B; }

# Or
{
  TRY_A;
} || {
  TRY_B;
}

Keep in mind that the curly braces are only used in this context for organizing the code into blocks (as a form of command grouping); it does not create a sub-shell, and variables are still global.

Double Pipe vs Double Ampersand, and Other Flow Operators

Quick reference:

  • && (double ampersand) = Only execute right side if left side succeeds
    • Examples:
      • false && echo "this text will NOT show"
      • true && echo "this text will show"
  • || (double pipe) = Only execute right side if left side FAILS (non-zero exit code)
    • Essentially the inverse of &&
    • Examples:
      • false || echo "this text will show"
      • bad_command || echo "this text will show"
      • true || echo "this text will NOT show"
  • & (ampersand) = asynchronously runs both commands on either side, in parallel, regardless of success of either, in detached (forked) processes
    • Warning: This can be a hackish way to do things
    • Definitely do not use this if the second command is dependent on the output of the first
    • Examples:
      • slow_command_to_execute & echo "this will appear before the left side is done!"
      • true & echo "this text will show"
      • false & echo "this text will also show"
    • If you need to kill all spawned processes on exit (e.g. SIGINT / CTRL + C), one common approach is a trap, e.g. trap 'kill 0' EXIT
  • ; (semicolon) = Execute both sides, sequentially, regardless of success of either
    • Examples:
      • true; echo "this text will show"
      • bad_command; echo "this text still shows"
    • Since this doesn't work in many Windows environments, an easy workaround to get the same behavior is to replace CMD_ONE; CMD_TWO with (CMD_ONE || true) && CMD_TWO.
      • This exhibits the same behavior, since CMD_TWO will also synchronously execute after CMD_ONE, regardless of its success
      • Great for NPM scripts
      • Nice writeup
  • : (colon) = acts like an alias to true, and is often used as noop statement (does nothing, successfully)
  • | (pipe) = Not for logic flow, used for redirection

Arrays, Lists, and Loops

🚨 Warning: Different shells (e.g. bash vs zsh) tend to disagree on conventions and syntax for arrays

https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays

Example for loop:

possible_paths=(
	'./my_file_a.txt'
	'./my_file_b.txt'
	'./my_file_c.txt'
)
for file_path in "${possible_paths[@]}"; do
	if [[ -e "$file_path" ]]; then
		echo "Found file ${file_path}"
		cat "$file_path"
		break
	fi
done

# If the list is short and only needs to be referenced once,
# there is a shorter syntax you can use
for dog in fido lassie
do
	echo "${dog} is a good doggie!!!"
done

Constructing Arrays in Bash

If you already have a loop going somewhere, the my_arr+=(new_element) syntax is cross-shell compatible and easy to work with.


Using standard loops

This works in BOTH bash and zsh!

my_arr=()
for i in {1..10}; do
	my_arr+=("$i")
done

However, often in shell scripting you already have some form of a string variable or standard out that you want to transform into an array. For that, here are some additional approaches you can take:

Using word splitting

🚨 Relying on word splitting is shell-specific.

Both bash and zsh support word-splitting, but zsh has it disabled by default

sentence="hello world"

# Bash
words=($sentence)

# ZSH
words=(${=sentence})

# Or, to emulate the behavior of bash
setopt SH_WORD_SPLIT
words=($sentence)
unsetopt SH_WORD_SPLIT

Using Parameter Expansion

🚨 This is ZSH only.

ZSH supports an advanced parameter expansion feature that lets you specify a pattern for splitting (field splitting) directly during expansion:

# Format is ${(s + DELIMITER + SPLIT_PATTERN + DELIMITER)YOUR_VARIABLE}
# Split on space
words=(${(s/ /)sentence})
# Split on slash
sentence="a/b/c"
words=(${(s:/:)sentence})

Using read

This works in BOTH bash and zsh!

# You don't have to declare variable `my_arr` first
# (but could)
while read -r line; do my_arr+=("$line"); done < <(STD_OUT_PRODUCER)

# Example, using `2>/dev/null` to suppress errors
active_sessions=()
while read -r line; do active_sessions+=("$line"); done < <(tmux list-sessions -F "#{session_name}" 2>/dev/null)

Check if element is in array

Using a regular for-loop

I would recommend this as the go-to method for element checking, since it is less error-prone and more performant than other methods:

for item in "${my_array[@]}"; do
	if [[ "$item" == "$needle" ]]; then
		echo "Found $needle!"
		break
	fi
done

Checking against the entire expanded array

🚨 Warning: This approach has two serious "gotchas"

This approach relies on expanding the entire array as a single string and then checking for a substring match. On the surface, it appears like a nice solution because it is succinct:

if [[ " ${my_array[@]} " =~ " needle " ]]; then
	echo "Found needle in array!"
fi

Gotcha 1: Since this works by basically concatenating every element together, it doesn't work correctly if any array element itself contains spaces, as then you are unintentionally doing sub-element matching. Consider the following example:

my_array=(alpha bravo "charlie delta")
echo "Array length = ${#my_array[@]}"
if [[ " ${my_array[@]} " =~ " charlie " ]]; then
	echo "charlie is in array"
fi

If run, this will tell you that "charlie is in array", which is technically true, but only as a substring within an element - most often not what one is interested in when doing array element checks.

Gotcha 2: For really large arrays there is probably a performance hit here, in comparison with a for-loop, since you are expanding and concatenating the entire array regardless of how early a match might be found.

Check if Array is Empty or Not

To check if an array is empty or not, you can use ${#my_array[@]} to refer to the array length.

if [ ${#my_array[@]} -eq 0 ]; then
    echo "Array is empty"
else
    echo "Array is not empty"
fi

Printing Arrays

To print ALL elements of an array, you can use:

# With line breaks between elements
printf '%s\n' "${my_array[@]}"

For an even more verbose approach (e.g., to help debug issues), you can use declare and its print option:

declare -p my_array

Joining Arrays into a String

The most succinct way to concatenate every element of a bash array into a single string is to use shell parameter expansion:

concatenated_str="${my_array[*]}"
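The join character comes from the first character of IFS (a space by default). So, for a custom delimiter, you can temporarily override IFS - a minimal sketch, using a subshell so IFS is not changed globally:

my_array=(alpha bravo charlie)

# Default IFS - joined with spaces
echo "${my_array[*]}"
# > alpha bravo charlie

# Custom delimiter, via IFS in a subshell
(IFS=,; echo "${my_array[*]}")
# > alpha,bravo,charlie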

Waiting Until Something Is Ready

A common DevOps task is delaying a step in a workflow until a process / file / endpoint is ready. For instance, if you are developing server code, you might launch a live instance of it in a CI/CD workflow and need to delay your test command until the server is ready to receive requests.

Depending on what you are waiting for, there might be more efficient ways (for example, using inotify for files), but the one-size-fits-all approach is to use a loop that does not exit / continue until your condition is reached, usually combined with a small delay (like sleep 1).

In many cases, you might want to add in a timeout / maximum iteration condition. Especially if this is for a CI/CD environment that can rack up large bills if left running continuously 😉. The below examples include this, doing so with a while loop approach:

MAX_LOOPS=10
LOOPS=0
while ! [[ -e my_file.txt ]] && (($LOOPS < $MAX_LOOPS)); do
	LOOPS=$(( LOOPS + 1 ))
	>&2 echo "File does not exist. Waiting... loop #${LOOPS}"
	sleep 1
done

# At this point, either test was successful, or MAX_LOOPS was reached
# If this is important for next step, re-test for correct exit code
[[ -e my_file.txt ]]

And here is the same thing with an until loop (with inverted test logic):

MAX_LOOPS=10
LOOPS=0
until [[ -e my_file.txt ]] || (($LOOPS >= $MAX_LOOPS)); do
	LOOPS=$(( LOOPS + 1 ))
	>&2 echo "File does not exist. Waiting... loop #${LOOPS}"
	sleep 1
done

# At this point, either test was successful, or MAX_LOOPS was reached
# If this is important for next step, re-test for correct exit code
[[ -e my_file.txt ]]

Text Matching and Regular Expressions

Grep

🚨 Warning: If your search pattern / phrase contains a hyphen (-), use -- to separate your grep flags from the search pattern, or else grep will try to parse the search pattern as flags!

E.g. echo "foo-bar" | grep -- "-bar"

🚨 Warning: A very big catch with grep is that it operates on a per-line basis; this (generally) makes it non-ideal for things like checking line ending patterns

  • In general, if you are a RegEx power user, you will probably find sed much preferable. Or awk.

    • grep can actually be a bit of a pain when trying to do things like use capture groups (1, 2)
  • Cheatsheets:

  • (Common) Flags:

    Flag Description
    -E Extended regex
    -q Quiet / silent mode; don't print anything, and just return code based on success / failure
    -o Only output the matching part of the line
    -P Treat as Perl-style regular expression (PCRE)
    -i Ignore case
    -v, --invert-match Invert the matching (filter to those that don't match)
    -e Pass explicit patterns, multiple allowed
    -n Show line numbers for matches
    -A {num} | -B {num} Show {num} lines after / before the match
    -F Assume input is fixed strings, meaning don't treat as regex pattern (useful if you are looking for an exact match, and your search string contains RegEx chars like .)
    -x, --line-regexp Require the entire input line to match the pattern (can be useful for exact file-path matching, although just using $ to match EOL can be useful instead as well)

Grep - Print Only Matching

The -o flag will do this.

On some systems, it also adds line breaks, even with a single result. For removing the line break for single result outputs:

grep -o '{pattern}' | tr -d '\n'

# Example
echo hello | grep -o '.ll.' | tr -d '\n'
# prints 'ello'

Grep - How to Grep Against a Variable

Just echo and pipe it to grep:

echo $MY_VAR | grep {GREP_OPTS}

Grep - How to Grep Against a File

grep {GREP_OPTS} {FILE_PATH}
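For example (file name is just for illustration):

# Find lines containing "TODO" or "FIXME", with line numbers
grep -n -E 'TODO|FIXME' ./my_script.sh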

sed

If you only need to replace a single character, tr might be a better fit. For example, tr '\r' '\n' to replace CR with LF.

  • Cheatsheets

  • Common flags

    Flag Description
    -n Silent, suppress printing of pattern space
    -r
    (or -E on some systems, like macOS)
    Use extended regexp - I always prefer this
    -i Edit in place. Can be used to edit a file directly, e.g. sed -i {PATTERN} {FILE}.
    Highly recommended to use --follow-symlinks with this.
  • Syntax

    • print output
      • echo $'hello world\nLine Two\nGoodbye' | sed -rn '/Line.+/p'
        • Prints "Line Two"
    • substitute
      • echo $'hello world\nLine Two\nGoodbye' | sed -r 's/Line.+/my substitution/'
        • Prints:
          • hello world
            my substitution
            Goodbye
      • Example: Replace space with newline
        • echo 'item_a item_b' | sed -r 's/ /\n/g'
        • Prints:
          • item_a
            item_b
      • Example: Inject a tab at the start of every line
        • printf "line_a\nline_b" | sed -r "s/^/\t/g"
      • Example: Replace null character with new line
        • sed -r 's/\x0/\n/g'
    • Substitute in file, directly
      • sed -i --follow-symlinks "s/FIND/REPLACE/g" {FILE}
    • Print only a specific capture group
      • This is actually a little complicated. Basically, you have to substitute the entire input with the back-reference of the capture.
        • sed -rn 's/.*(My_Search).*/\1/p'
      • In action:
        • echo $'hello world\nLine Two\nGoodbye' | sed -rn 's/.*^Line (.+)$.*/\1/p'
          • Prints:
            • "Two"

🚨 One major problem with sed is that it is line-based, which means it has some limitations and caveats. The biggest being that multiline matching is almost impossible, without using overly complicated commands (at least, without using GNU sed).

Workaround: When trying to match line endings, use $ instead of \n.

Warning: sed on your system might have limitations - for example, if Perl-style patterns are not available, you will need to use [0-9] instead of \d for digits.

Other Scripting Languages

Honestly, I find that both sed and grep have a lot of shortcomings that make them difficult to use in complex scenarios. Another approach you can reach for is using a different scripting language and runtime to execute your regex matching in.

Here is an example where we are trying to capture a column of tabular data - the goal should be print out 5 in the terminal.

HAYSTACK=$(cat << "EOF"
	Days On | Days Off
	---
	5 | 2
	Total = 7
EOF
)

# One-liner
echo "$HAYSTACK" | xargs nodejs -p "/---\n[\s\t]*(\d+)/.exec(process.argv.slice(1).join('\n'))[1]"

# More readable, using another heredoc
# Note: Technically, we could use interpolation to put the string in the heredoc, but it
# gets *really* messy with escaping
HAYSTACK=$HAYSTACK node << "EOF"
const haystack = process.env.HAYSTACK;
const results = /---\n[\s\t]*(\d+)/.exec(haystack);
console.log(`You have ${results[1]} days off!`);
EOF

You can also use perl as a (somewhat) drop in replacement.

JQ

For main docs, refer to the official jq page, or use jq --help or man jq.

The main way to call jq is to pipe data to it:

COMMAND_A | jq JQ_PATTERN

Or, to pass it a file as the final argument:

jq JQ_PATTERN INPUT_FILE

For example, here are two different ways I could get the package name of a NPM package I own:

# Pipe to jq
curl https://raw.githubusercontent.com/joshuatz/j-prism-toolbar/main/package.json | jq '.name'

# use local file
jq '.name' package.json

🚨 jq does not handle JSON with comments (jsonc)

Capturing Things

Capturing and Executing Output

If you simply want to "capture" the value output by a script and store it as a variable, you can use substitution (aka command substitution). See "Storing into a variable".

If you want to execute the output of a command as a new command / script, you can use the (dangerous) eval command, plus substitution: eval $( MY_COMMAND ).

Here is a full example:

(echo echo \"in test.txt\") > test.txt
eval $( cat test.txt )
# "in test.txt"

Capturing Last Command

You can use the fc command to read back past commands that were executed. So, to get the very last command that was executed, you can do:

last_command=$(fc -ln -1)

Capturing Output of Past Commands

By default (ⁱ), bash (or zsh) does not capture the stdout of past commands. However, if you don't mind re-running the command, you can use a shortcut to save some typing - since !! is a shortcut for executing the last command, you can do things like this:

echo "Re-run of command results = $(!!)"

If you are using tmux inside of your shell, you can get past output by reading from the scrollback buffer (e.g., with tmux capture-pane -p).

ⁱ There are other approaches, like this one, for adding helper functions to tee the output to a file

Capturing Input Arguments in Shell Scripts and Passing Around

Alt-heading: Storing Arguments and Passing Them to a Command

Arguments (aka positional parameters) are automatically captured and stored within shell scripts / functions (these are a form of Special Parameters):

Variable Contents
$# Argument Count (number)
$1, $2, ... Specific arguments, by order passed in
$@ All of the arguments (separation preserved, even if double-quoted)
$* All of the arguments (concatenated)

Make sure to double-quote when using $@ - e.g.:

# say_hi.sh
YOUR_NAME="$1"
echo "Hello $YOUR_NAME, your name has "$( echo -n $YOUR_NAME | wc -m ) "characters in it"

# Run
./say_hi.sh Joshua
# > Hello Joshua, your name has 6 characters in it

Positional Parameters - Referencing All Arguments - $@ vs $*

Guide: Bash Hackers Wiki - Handling Positional Parameters

Standard shells have built-in variables to reference all arguments at once - $@ and $*. As a rule of thumb, you almost always want $@, but to expand on why:

  • $@: Preserves the separation of arguments, even when double quoted
  • $* Concatenates all the arguments, as a single string

So, when would you actually want to use $*? Well, one use-case would be when you want to concatenate all the arguments as a single string, e.g.:

$SHELL -c "$@"
# ^ Does not work - anything after `$1` is dropped
$SHELL -c $@
# ^ Does not work - anything after `$1` is dropped

$SHELL -c "$*"
# ^ Works!
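Conversely, here is a minimal sketch of where "$@" is what you want - forwarding arguments intact (the run_logged wrapper is hypothetical):

# Log the command as one string ($*), then run it with
# the arguments kept separate ("$@")
run_logged() {
	echo "Running: $*" >&2
	"$@"
}

run_logged ls -l "file with spaces.txt"
# ^ ls receives exactly two arguments; the filename stays intact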

Capturing Arguments as an Array / Iterating through Arguments

If you just want to iterate over arguments, you can use a simple for loop over the $@, which represents all arguments:

for arg_val in "$@"
do
	echo "$arg_val"
done

If you want to do something fancier, like mapping arguments to variables, you might find it easier to use the while + case pattern. Something like this:

FILENAMES_PASSED_VIA_ARG=()
DEBUG=0
QUESTION=""
while [[ ! $# -eq 0 ]]; do
	case "$1" in
		-q|--question)
			QUESTION=$2
			shift
			;;
		-d|--debug)
			DEBUG=1
			;;
		*)
			if [[ -f "$1" ]]; then
				FILENAMES_PASSED_VIA_ARG+=("$1")
			else
				echo "Invalid option ${1}"
			fi
			;;
	esac
	shift
done

Piping and redirection

  • Piping VS Redirection
    • Simple answer:
      • Piping: Pass output to another command, program, etc.
      • Redirect: Pass output to file or stream
  • Pipe
    • |
    • echo 'hello world' | grep -o 'hello'
      • Prints hello
  • Redirection
    • >
    • echo "hello" > output.txt
    • For appends, use double - >>

Handy cheatsheet: "Ways of Piping and Redirection" (GH Gist)

Watching Output While Redirecting to File

If you want the stdout of a program to still show up in the terminal, but also want to send it to a file or pipe it elsewhere, tee is the tool you want to use.

Examples:

echo "foo" | tee ./output.log

# The above will omit stderr, so you have to use the stderr redirection trick if you want to capture both
stat file_not_exist 2>&1 | tee ./output.log

# You can use it with other commands, like `pbcopy` to copy to clipboard
echo "foo" | tee >(pbcopy)

Problems with piping

Piping, in general, is taking the stdout of one process to the stdin of another. If the process you are trying to pipe to is expecting arguments and doesn't care about stdin, or ignores it, piping won't work as you want it to.

The best solution for this is usually to use xargs, which reads stdin and converts the input into arguments which are passed to the command of your choice.

Or, you can use substitution to capture the result of the first part of the pipe and reuse it in the second.

See this S/O answer for details.

If the input you are passing contains special characters or spaces (such as spaces in a filename), take extra care to handle it. For example, see if the thing generating the input can escape it and null terminate the fields (e.g. git-diff --name-only -z), and then you can use the -0 or --null option with xargs to tell it to expect null terminated fields.

# Git
git diff --name-only -z | xargs -0 git-date-extractor

# Piping multiple files to a single command
find . -name '*.gif' -print0 | xargs -0 python process_bulk.py
ls | tr \\n \\0 | xargs -0 process_file.sh

# Same as above, but running the command over each file, using `-n1` to specify max
# of one argument per command line
find . -name '*.gif' -print0 | xargs -0 -n1 python process_single.py

# For find, you can also just run -exec with find

Printing / Echoing Output

🚨 I would recommend getting familiar with special characters in Bash when working with outputting to shell; otherwise it can be easy to accidentally eval when you meant to just print something

Also see Escaping Special Characters.

Copying to Clipboard

There are a bunch of different options, and it largely depends on what you have available on your OS.

This S/O response serves as a good list.

On macOS, it is usually pbcopy. On Linux, usually xclip -selection c.

??? - 2>&1

You see 2>&1 all over the place in bash scripts, because it is very useful. Essentially, it forces errors (stderr) to be piped to whatever value stdout is set to.✳

✳ = I'm greatly simplifying here. It's more complicated than that.

I'm not sure if this pattern has an official name, although it is very popular.

This has a few really handy and common uses:

  1. See both the output and the errors in the console at the same time
    • Often errors are routed to stderr and not shown in the console.
  2. Suppress errors
    • Since this forces errors to stdout, this has the side effect of suppressing them from their normal destination
      • However, they are still going to show up in stdout obviously. If you really want to suppress them entirely, use 2> /dev/null, which essentially sends them to oblivion
  3. Send both output and errors to file
    • If you redirect to a file before using 2>&1, then both outputs get sent to the file.
      • ls file-does-not-exist.txt > output.txt 2>&1
        • output.txt will now contain "ls: cannot access 'file-does-not-exist.txt': No such file or directory"
  4. Send both output and errors through a pipe
    • cat this_file_doesnt_exist 2>&1 | grep "No such file" -c

On a more technical level, Unix has descriptors that are kind of like IDs. 2 is the descriptor/id for stderr, and 1 is the id for stdout. In the context of redirection, using & + ID (&{descriptorId}) means copy the descriptor given by the ID. This is important for several reasons - one of which is that 2>1 could be interpreted as "send output of 2 to a file named 1", whereas 2>&1 ensures that it is interpreted as "send output of 2 to descriptor with ID=1".

So... kinda...

  • 2>&1
    • can be broken down into:
  • stderr>&stdout
    • ->
  • stderr>value_of_stdout
    • ->
  • stdout = stderr + stdout

💡 You can also use anonymous pipes for these kinds of purposes

Suppress error messages / stderr

Make sure to see above section about how descriptors work with redirection, but a handy thing to remember is:

# Pretend command 'foobar' is likely to throw errors that we want to *completely* suppress
foobar 2>/dev/null

This sends error output to /dev/null, which basically discards all input.

If we want to see errors, but in the stdout stream (e.g., to not let them trigger exception handling), use 2>&1 instead.

Stderr Redirection - Additional reading

Using variables

Clearing / Deleting / Unsetting Variables

To remove an environment variable entirely, use unset {VARIABLE_NAME}.

Note that this only takes effect in the current shell or new subprocesses, similar to variable assignment.

Setting Variables

For setting variables, it depends on the variable type.

Local variables (same process):

VARIABLE_NAME=VARIABLE_VALUE

To set one variable equal to another, just use the same syntax you would normally use for variable access. For example:

NEW_VARIABLE_B=$VARIABLE_A

The above only works for setting values in the current process - the values won't be passed through to forked processes:

MY_VAR="Hello World"

echo $MY_VAR # "Hello World"
sh -c 'echo $MY_VAR' # EMPTY / UNSET, because not the same process

If you want to set an environment variable, which is persisted through session, even into forked processes, use export:

export VARIABLE_NAME=VARIABLE_VALUE

Escape spaces by enclosing VARIABLE_VALUE in double quotes

Also, see Environment Variables subsection

Storing / Setting / Using Numerical Values

The most important thing to note about using numerical values / numbers in bash is that arithmetic must be done inside double parentheses - either (()) or $(()) - or with let "{arithmetic_work}".

Some example syntax:

# Even when declaring new variables, move them into the parenthesis
# my_var=(()) does not work

counter=0
((counter++)) # counter = 1
add=3
((counter+=add))
echo $counter # 4

((product=counter*20))
echo $product # 80

You can also use the double parenthesis to forcibly remove whitespace around numerical output. E.g., since wc sometimes includes spacing around output:

WC_OUTPUT=$(wc -l < ./myfile.txt)
LINE_COUNT=$(($WC_OUTPUT))

# Or, condensed
LINE_COUNT=$(($(wc -l < ./myfile.txt)))

Storing Commands and Evaluating

There are a couple different ways to store and evaluate commands.

For storing a command, you can approach it like storing any other string - just be extra careful about escaping.

For evaluating a stored command, you can pass the string to a shell via -c, instead of from standard input:

sh -c "$command"
bash -c "$command"

$SHELL -c "$command"

Finally, one can use the eval command:

eval "$STORED_COMMAND"

https://unix.stackexchange.com/questions/296838/whats-the-difference-between-eval-and-exec

Reading Variables

Prefix with $.

Example:

MYPATH="/home/joshua"
cd $MYPATH
echo "I'm in Joshua's folder!"

If you want to expand a variable inside a string, you can use ${} (curly braces) around the variable to expand it. Or $VAR_NAME also works.

For expansion inside strings, use double-quotes, as single-quotes prevent expansion. The exception is when you want to build up a command string to expand later, e.g. via sh -c MY_COMMAND_STRING.

Default / Global Variables

In addition to using printenv to see all defined variables, you can also find lists of built-in variables that usually come default with either the system shell, bash, or globally.

Storing into a variable

How to store the result of a command into a variable:

There are three methods:

  • Command/process substitution (easy to understand)
    VARIABLE_NAME=$(command)
    • However, this doesn't always work with complex multi-step operations
  • read command (complicated) - works with redirection / piping, but beware the subshell gotcha (see note below)
    echo "hello" | read VARIABLE_NAME
  • Backticks, surrounding the command to execute
    VARIABLE_NAME=`command`
    • The $() syntax should be preferred over this, due to backticks being not as well supported across different shells
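⚠ A caveat on the read approach: in bash, each side of a pipeline runs in a subshell, so a variable set via echo ... | read is discarded. A here-string keeps read in the current shell:

# This sets VARIABLE_NAME in a subshell, which is then thrown away (in bash):
echo "hello" | read VARIABLE_NAME
echo "$VARIABLE_NAME"
# > (empty)

# A here-string avoids the subshell:
read -r VARIABLE_NAME <<< "hello"
echo "$VARIABLE_NAME"
# > hello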

If you want to combine command substitution with inline variable exports / exposing variables to the command inside of the substitution, just make sure the variable assignments go inside the command substitution parentheses. E.g.:

# This works
greeting=$(name="joshua" node ./say_hi.js)

# NO!, this will not work
name="Joshua" greeting=$(node ./say_hi.js)

Environment Variables

List all env values

printenv

Set an environment variable - current process and sub-processes

To set an environment variable, use the export keyword:

export VARIABLE_NAME=VARIABLE_VALUE

Set an environment variable - permanently

In order for an environment variable to be persisted across sessions and processes, it needs to get saved and exported from a config file.

This is often done by manually editing /etc/environment:

    1. Launch editor: sudo -H gedit /etc/environment
    2. Append key-value pair: VAR_NAME="VAR_VAL"
    3. Save

The difference between setting a variable with export vs without, is similar to the difference in MS Windows, for using setx vs just set -> export persists the value.

Using Environment Variable Files (like .env)

There is already the /etc/environment file for auto-loading values, but often it is useful to also use per-directory / per-project environment variable files, like a private .env file.

As long as your .env file follows proper shell escaping / quoting, you can use this to load all the values:

set -o allexport # Export all variables. Alias is `set -a`
source .env
set +o allexport # Disable exporting all. Alias is `set +a`

What if you just want to capture a single variable, but not export it / expose it to the rest of the system and/or subshells?

Well, sourcing without exporting should do the trick (same caveat about shell quoting applies with this too though). A quick example:

#!/usr/bin/env bash
set -e

unset ALPHA
unset BRAVO

cat > .test.env << "EOF"
ALPHA="A"
BRAVO="B"
EOF

echo "=== Using allexport ==="
set -o allexport
source .test.env
set +o allexport
$SHELL -s << "EOF"
	echo "Subshell - ALPHA: $ALPHA"
	echo "Subshell - BRAVO: $BRAVO"
EOF
echo "Outside - ALPHA: $ALPHA"
echo "Outside - BRAVO: $BRAVO"

unset ALPHA
unset BRAVO
echo "=== Not using allexport ==="


source .test.env
$SHELL -s << "EOF"
	echo "Subshell - ALPHA: $ALPHA"
	echo "Subshell - BRAVO: $BRAVO"
EOF
echo "Outside - ALPHA: $ALPHA"
echo "Outside - BRAVO: $BRAVO"

# Cleanup
rm .test.env
unset ALPHA
unset BRAVO

Output from above script:

=== Using allexport ===
Subshell - ALPHA: A
Subshell - BRAVO: B
Outside - ALPHA: A
Outside - BRAVO: B
=== Not using allexport ===
Subshell - ALPHA:
Subshell - BRAVO:
Outside - ALPHA: A
Outside - BRAVO: B

Global path

Inspecting the path:

echo $PATH

# Line separated
echo $PATH | tr ':' '\n'

# For permanent paths
cat /etc/paths

Modifying the path:

  • You can modify it directly, with something like export PATH="$PATH:/my/new/path"
  • You can edit /etc/paths or add files to the path directory
  • In general, modifying the path can depend on OS and shell; here is a guide

Figuring Out What Mutated the PATH Variable

If you are in an environment you don't have complete control over (e.g. inside someone else's container / image / AMI), it can be difficult to track down what is setting certain values within the PATH.

AFAIK, there isn't really a debugger to figure out what is mutating the PATH; all you can really do is go down a checklist of the usual culprits:

  • /etc/environment
  • ~/.bash_profile
  • ~/.bashrc
  • ~/.profile
  • /etc/profile
  • /etc/profile.d/*

Triggering / running a SH file

  • Make sure it is "runnable" - that it has the execute permission
    • chmod +x /scriptfolder/scriptfile.sh
  • Then call it:
    • /scriptfolder/scriptfile.sh

If you are having issues running from Windows...

  • MAKE SURE LINE ENDINGS ARE \n and not \r\n

Also, make sure you specify directory, as in ./myscript.sh, not myscript.sh, even if you are currently in the same directory as script.

Nested Sourcing

Note that if a bash script includes source {PATH}, then that shell script should itself be executed with source instead of running it directly.

For example, if you create a wrapper script, ./activate_python.sh, to activate a python environment that looks like:

source ~/projects/my-flask-app/venv/bin/activate

Then, to enter the virtual environment, you would need to run source ./activate_python.sh, not ./activate_python.sh directly.

Keeping file open after execution

Add this to very end of a file, to "trap" the shell:

exec $SHELL

Note: this will interfere with scripts handing back control to other scripts; ie resuming flow between multiple scripts.

Inlining and Executing Other Languages

If you want to mix shell and other scripting languages in the same executable file, one way to do so is to use heredoc strings to inline non-shell code and pass it to the right interpreter. For example, you could inline a NodeJS snippet like so:

echo "This is a line in a shell script"

# NodeJS
node << "EOF"
const { userInfo } = require('os');
console.log('User Info:', userInfo());
EOF

You don't have to quote the leading delimiter ("EOF" above), but if you don't, you will run into issues if your string contains $ (bash will try to parse as variables).

However, this isn't the cleanest approach, as it doesn't work well with syntax-highlighting, linting, or type-checking tools; still, it is a nice tool to have for adding small code snippets without having to clutter your project or repository with tons of extra files.

If you are interested in cross-language script runners and/or task runners, you might want to look at things like just or maid. Also feel free to check out my section on task runners and script automation tools.

Strings

Checking for Special Characters, Line Breaks, Etc.

Some options to see special characters, inspect, etc.:

  • Pipe to cat -v or cat -e
    • $ = LF / \n
    • ^M = CR / \r
    • ^M$ = CRLF / \r\n
  • Pipe to xxd
  • Pipe to od

Escaping Special Characters

  • You can use single quotes for literal interpretation (prevent parsing of special characters within)
    • This also works for preventing expansion of variables. E.g., these are very different:
      • sh -c "echo $MY_VAR" <- value of MY_VAR will be expanded immediately
      • sh -c 'echo $MY_VAR' <- value of MY_VAR will be expanded at runtime of sub-shell
  • You can use a heredoc with a double quoted delimiter for literal interpretation (similar to single quotes)
    • E.g., start heredoc with << "EOF"
  • You can use a backslash (aka normal escape, \) for escaping within double quotes, etc.
    echo "To start, run \`npm run start\`"

Keep in mind that not all text-based commands handle special characters the same. For example, cat generally works better than echo for printing multi-line strings, etc.

Purposefully Printing Special characters (newline, etc)

By default, things like echo do not preserve line breaks stored as a literal \n

echo 'hello\ngoodbye'

  • Prints:
    • "hello\ngoodbye"

One way around this is with the $'' syntax (sometimes referred to as ANSI-C quoting). This must use single-quotes and not double:

echo $'hello\ngoodbye'

  • Prints:
    • "hello
      goodbye"

However, this isn't always supported, so a more portable option is printf:

printf 'hello\ngoodbye'

Finally, if you have a lot of line breaks, indents, etc., and a shell that supports them, it will probably be easier to use a heredoc instead of composing the string manually with escapes:

cat << EOF
hello
goodbye
EOF

For more details, see the heredoc section below.

Heredocs

Heredocs (Here Documents) are a useful way to escape a large block of text as well as build up complex interpolated strings in bash.

Let's start with a simple example:

cat << EOF
## To-Do List
- [ ] Laundry
- [ ] Order more coffee
- [ ] Empty the food waste
EOF

# Storing into a variable
todo_list=$(cat << EOF
## To-Do List
- [ ] Laundry
- [ ] Order more coffee
- [ ] Empty the food waste
EOF
)

# If you want to use leading space on each line,
# and have it ignored:
cat <<- EOF
	Hello
	World
EOF

Getting more advanced, you can combine heredocs with redirection, variables, substitution / expansion in interesting ways:

# Redirect heredoc output to file
cat > system-info.txt << EOF
ENV:
$(printenv | sed -E "s/^/\t/g")

Directory: $PWD
OS: $(uname -a)
EOF

# This pattern also works
cat << EOF > file.txt
line 1
line 2
EOF

For embedding ad-hoc command output into the heredoc, you can see that $() was used in the above example. Technically the eval backticks would also work (``), but that is less advisable.

If your heredoc string contains special characters, like $, and you want to prevent special interpretation for the entire string, use double quotes around the leading delimiter, like: echo << "EOF". Otherwise use normal escaping methods (such as backslash, \).

It is common to use heredocs with command substitution. If doing so, please note that exposing variables to the command / sub-process (acting like a local export) needs to happen inside the parentheses. Like:

node_output=$(name="Josh" node <<- "EOF"
  console.log(`hello ${process.env.name}`);
EOF
)

Strings as Virtual Files

If you have a string (heredoc or plain string) that you want to pass to a command that is expecting a file rather than piped stdin, the first option you have is to use a temporary file and then delete it afterwards:

TEMP_PATH=$(mktemp)
cat > $TEMP_PATH << EOF
A
B
C
EOF
# Get line count
wc -l $TEMP_PATH
rm $TEMP_PATH

Another option (sometimes) is process substitution, together with redirection:

wc -l <(cat << EOF
A
B
C
EOF
)

Finally, you can refer to stdin as its file descriptor and pipe to it:

cat << EOF | wc -l /dev/fd/0
A
B
C
EOF

Joining Strings

You can put strings together (concatenate) in variable assignment, like this:

FOO="Test"
BAR=$FOO"ing"
echo $BAR

echoes:

Testing

You can also use variables directly in quoted strings:

FOO="Hello"
BAR="World"
echo "$FOO $BAR"

# If the variable is immediately adjacent to text, you need to use braces
FOO="Test"
BAR="ing"
echo "${FOO}${BAR}"

You can also use the += operator to join strings during assignment, but not +:
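
GREETING="Hello"
GREETING+=", World"
echo "$GREETING"
# > Hello, World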

Joining Strings with xargs

By default, xargs passes the piped input through as a separate argument, so a space ends up between it and any preceding text. For example:

echo "Script" | xargs echo "Java"
# "Java Script"

If we want to disable that behavior, we can use the -I argument, which is really for substitution, but can be applied to this use-case:

echo "Script" | xargs -I {} echo "Java{}"
# Or...
echo "Script" | xargs -I % echo "Java%"
# Etc...

# Output: "JavaScript" - Success!

Converting to and from Base64 Encoding

Just use the base64 utility, which can be piped to, or will take a file input.

If you don't care about presentation, make sure to use --wrap=0 to disable column limit / wrapping
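
For example (--wrap is a GNU coreutils flag; macOS's BSD base64 does not wrap by default):

# Encode
echo -n 'hello world' | base64 --wrap=0
# > aGVsbG8gd29ybGQ=

# Decode
echo 'aGVsbG8gd29ybGQ=' | base64 --decode
# > hello world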

Skip Lines in Shell Output String

If you want to skip lines in output (for example, to omit a summary row), you can use:

tail -n +{NUM_LINES_TO_SKIP + 1}
# Or, another way to think of it:
# tail -n +{LINE_TO_START_AT}

# Example: Skip very first line
printf 'First Line\nSecond Line\nThird Line' | tail -n +2
# Output is :
#     Second Line
#     Third Line

To skip the last line (GNU head only; on macOS / BSD, you can use sed '$d' instead):

head -n -1
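
For example:

printf 'First Line\nSecond Line\nThird Line\n' | head -n -1
# Output is :
#     First Line
#     Second Line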

Trim Trailing Line Breaks

First, if the trailing line break is being introduced through the use of echo, and you control that code, you can modify the echo call to use -n to suppress the trailing line break directly:

echo "hello" | wc -l
# Outputs: 1
echo -n "hello" | wc -l
# Outputs: 0

For removing trailing line breaks in general, there are a bunch of ways to do this, but the first answer provided is probably the best - using command substitution, since it automatically removes trailing newlines:

echo -n "$(printf 'First Line\nSecond Line\nThird Line, plus three trailing newlines!\n\n\n')"

🚨 Warning: Command substitution works for removing trailing \n, but NOT \r

You could also use the head -n -1 trick to remove the very last line.

If you want to remove all line breaks, you can use tr for an easier-to-remember solution:

printf 'I have three trailing newlines!\n\n\n' | tr -d '\n'


Beware Carriage Return Line Endings

Carriage returns - as literal \r, and sometimes appearing as ^M - kind of act like an old school typewriter: they move the output cursor back to the beginning of the line, which means that subsequent output overwrites the previous line. This behavior is dictated by the terminal, not bash.

Example to illustrate:

printf "alpha\rbravo\rcharlie"
# > charlie

You can use tr to swap these out for \n (tr '\r' '\n'). Or other tools like sed.

You might also see this behavior with programs that utilize a TTY. For example, docker exec -t will produce output that uses \r for line breaks.

You can pipe strings through cat -v if you want to check whether or not it contains carriage returns as ^M

Testing Strings

Aside from grep and sed, if you just want to check if a string is contained within another / contains another string, you can put asterisks around the search phrase and use an equality check against the string you think could contain it:

if [[ $HAYSTACK == *"my needle"* ]]; then
	:
fi

For checking whether a string is empty, you can use -z, but beware that -z considers whitespace to NOT be empty, e.g.:

[[ -z " " ]] && echo "Empty" || echo "not empty"
# > "not empty"

For checking for a completely empty string, or one that trims to completely empty, you can use [[ -z "${param// }" ]] to remove the whitespace while checking for an empty value.
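
For example:

param="   "
[[ -z "${param// }" ]] && echo "Empty after trimming" || echo "Not empty"
# > "Empty after trimming"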

Generate a Random String

There is an excellent StackExchange thread on the topic, and most answers boil down to either using /dev/urandom as a source, or openssl, both of which have wide portability and ease of use.

  • /dev/urandom
    • From StackExchange
      # OWASP list - https://owasp.org/www-community/password-special-characters
      head /dev/urandom | tr -dc 'A-Za-z0-9!"#$%&'\''()*+,-./:;<=>?@[\]^_`{|}~' | head -c {length}
    • I've had some issues with the above command on Windows (with ported utils)...
  • OpenSSL
    • For close to length:
      • openssl rand -base64 {length}
    • For exactly length:
      • openssl rand -base64 {length} | head -c {length}

Security, Encryption, Hashing

Quick Hash Generation

If you need to quickly generate a hash, checksum, etc. - there are multiple utilities you can use.

  • sha256sum
    • Example: echo -n test | sha256sum
    • Example: cat msg.txt | sha256sum
  • openssl dgst (-sha256, -sha512, etc.)
    • Example: echo -n test | openssl dgst -r -sha256
    • Example: openssl dgst -r -sha256 msg.txt

I'm using -r with openssl dgst to get its output to match the common standard that things like sha256sum, Windows' CertUtil, and other generators use.

🚨 WARNING: Be really wary of how easy it is to accidentally add or include newlines and/or extra spacing in the content you are trying to generate a hash from. If you accidentally add one in the shell that is not present in the content, the hashes won't match.
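
For example, note how the trailing newline added by plain echo completely changes the hash:

echo -n test | sha256sum
# 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08  -
echo test | sha256sum
# f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2  -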

There are even more solutions offered here.

How to generate keys and certs

  • SSH Public / Private Pairs: Using ssh-keygen (available on most Unix based OS's, including Android)
    • You can run it with no arguments and it will prompt for file location to save to
      • ssh-keygen
    • Or, pass arguments, like -t for algorithm type, and -f for filename, -C for comment
      • ssh-keygen -t rsa -C "your_email@example.com"
    • You can specify the passphrase via CLI, with -N (e.g. -N "" for an empty passphrase)
      • Necessary if you want to generate non-interactively
    • Technically, the private/public keys generated by this can also be used with OpenSSL signing utils
    • Example: ssh-keygen -t rsa -C "Josh's SSH Key" -f ./josh_rsa_key will generate:
      • ./josh_rsa_key (the private key)
      • ./josh_rsa_key.pub (the public key)
  • The standard convention for filenames is:
    • Public key: {name}_{alg}.pub
      • Example: id_rsa.pub
    • Private key:
      • No extension: {name}_{alg}
        • Example: id_rsa
      • Other extensions
        • .key
        • .pem
        • .private
        • Doesn't really matter; just don't use ppk, since that is very specific to PuTTY
  • Standard Public / Private Pairs: OpenSSL

💡 The final bit of text in a public key, which is plain text that looks like username@users-pc actually does nothing, and is just a comment; you can set it on creation with -C "my comment" if you like, or edit afterwards. But again, no impact.

Public and Private Key Pair Comments

When you include a comment in a key-pair, e.g. with -C "my comment", it actually gets embedded in both the private and public key.

In the public key, it should be directly readable as plain text:

cat ./my_key.pub
# ssh-rsa {base64} {comment}
# ssh-rsa AAAAB3Nz..k= my comment

For a private key, you can use a utility to parse it, such as ssh-keygen, and you should be able to see it in the stdout of the CLI utility:

ssh-keygen -lf ./my_key
# {crypto alg} {hash alg}:{hash} {comment}
# 3072 SHA256:hz... my comment

How to use Public and Private Key Signing

Generally, the most widely used tool for asymmetric keys with Bash (or even cross-OS, with Windows support) is the OpenSSL CLI utilities.

Here are some resources on how to use OpenSSL for public/private key signing:

Create new files, or update existing file timestamps

  • touch without any flags will create a file if it does not exist, and if it does already exist, update its "last accessed" and "last modified" timestamps to now
    • touch "myfolder/myfile.txt"
  • If you want touch to only touch existing and never create new files, use -c
    • touch -c "myfile.txt"
  • Specifically update last accessed stamp of file
    • touch -a "myfolder/myfile.txt"
  • specifically update "Last MODIFIED" stamp of file
    • touch -m "myfolder/myfile.txt"
  • You can also use wildcard matching
    • touch -m *.txt
  • and combine flags
    • touch -m -a *.txt

To create a file, as well as any parent folders it might need that don't yet exist, the usual approach is to use mkdir and then touch. But, as a shortcut, you can use this handy command (credit)

install -D /dev/null ${YOUR_FILEPATH}

# On MacOS, install GNU coreutils (e.g. via brew) and use:
ginstall -D /dev/null ${YOUR_FILEPATH}
# Or,
install -d /dev/null ${YOUR_FILEPATH}

Verify Files

You can verify that a file exists with test -e {filepath}. Handy guide here.
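
For example (./config.json is a hypothetical path):

if test -e ./config.json; then
	echo "Found config"
fi

# Or, the equivalent bracket syntax
[ -e ./config.json ] && echo "Found config"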

If you want to check the line endings of a file, for example to detect the accidental usage of CRLF instead of LF, you can use file MY_FILE. Or cat -e MY_FILE ($ = LF / \n and ^M$ = CRLF / \r\n).

Getting Meta Information About a File

# General file info
stat my-file.txt
ls -lh my-file.txt

# For identifying image files. Part of imagemagick
# @see https://linux.die.net/man/1/identify
identify my-file.jpg
identify my-file.pdf

# Can (try) to detect and display file type
# @see https://linux.die.net/man/1/file
file my-file.txt
# Get mime type (capital -I on macOS; lowercase -i with GNU file)
file -I my-file.txt

Hex View

To view the hex of a file, you can use xxd {FILE_PATH}

Deleting

  • Delete everything in a directory you are CURRENTLY in:
    • Best:
      • find . -mindepth 1 -delete
    • UNSAFE!!!
      • rm -rf *
    • Better, since it prompts first:
      • rm -ri *
  • Delete everything in a different directory (slightly safer than above)
    • rm -rf path/to/folder
  • Delete based on pattern
    • find . -name '*.js' -delete

If you want to run rm and ignore errors, you can use the standard bash trick of appending || true.

E.g. rm -rf path/does/not/exist || true

File Management

LS and File Listing

📄 LS - SS64

LS Cheatsheet

How to... Cmd
Show all files ls -a
Show filesizes (human readable) ls -lh
Show filesizes (MB) ls -l --block-size=MB
Show details (long) ls -l (or, more commonly, ls -al)
Sort by last modified ls -t

⚡ -> Nice combo: ls -alht --color (or, easier to remember ls -halt --color). All files, with details, human readable filesizes, color-coded, and sorted by last modified.

ls - show all files, including hidden files and directories (like .git)

ls -a

List directory sizes

du -sh *

Both Windows and *nix support the tree command.

If, for some reason, you can't use that command, some users on StackOverflow have posted solutions that emulate tree using find + sed.

List all files recursively with full paths

To list all files, recursively, with one full path per line, use find . instead of ls.

Listing Files by Glob Patterns - Like Gitignore

If you have a single glob pattern you want to use to list / find files by, usually you can use the built-in globbing support of your shell to expand, or the globbing support of the individual CLI tool.

However, things get trickier when you have a lot of patterns you want to use and/or have some patterns override others - in the style of .gitignore.

If your directory is a git repository, and you only need to take a .gitignore file into consideration, you can use git's built-in ls-files command, with flags that restrict its output to files that are not ignored (this lists untracked, non-ignored files; add --cached to include tracked files too) (credit):

git ls-files --others --exclude-standard

If you need to glob pattern files other than .gitignore into consideration and/or your directory is not a git repository, the easiest solution AFAIK is to use a tool like ripgrep. With ripgrep, there is an important, slightly unintuitive, trick to remember though: use --no-ignore to first disable the default ignore handling, then add your ignore glob files. If you don't do this, a local .gitignore file can override all other lists (inversion won't work, for example).

rg --files --no-ignore --ignore-file .gitignore --ignore-file .my-custom-ignore-list

Combining ripgrep with archive creators (like zip or tar) is a nice way to create distribution files:

rm -f dist.zip
rg --files --no-ignore \
	--ignore-file .gitignore \
	--ignore-file dist.ignorelist | zip -@ dist.zip
zipinfo dist.zip
echo "✅ Dist ZIP packed"

Using File Archives, Compression

How do I...

  • Extract / unpack a tarball
  • Extract / unpack a .zip archive

Tar - Using the tar command

⚠️ The tar command accepts options with and without a leading hyphen. E.g., you will often see people use tar xvf instead of tar -xvf

These are equivalent commands and the hyphen-less option is offered for historical reasons / backwards compatibility.

Tar - Using tar for packing files

Useful tar flags

Function Flag Example
Change directory before packing --directory $DIR (or -C $DIR) tar --directory ./to_pack -cf "packed.tar" .
Gzip -z tar -czvf "packed.tar.gz" .

Tar - Piping from stdin / tar-ing stdout

If you want to tar a stdout stream, you can pipe to tar as stdin.

The important thing is to end your command with a trailing hyphen -, instead of giving a directory to pack, to tell tar to read from stdin.

thing_producing_output | tar ${tar_args} -

You can also use normal tar arguments while doing so.

thing_producing_output | tar -czvf output.tar.gz -

You can also continue to pipe the output to other commands, by using - for the filename:

thing_producing_output | tar -czvf - - | another_command

Pigz

# Decompressing a single file
unpigz -c my_archive.tar.gz | tar --directory "$target_dir" -xvf -

Finding Files

Keywords: Finding files, searching for files, find files

Desire Program Command
Find a file, by name (or name pattern), anywhere on disk find find / -name "my_filename"
find / -name "*my_pattern*"
find / -path "**/*my_pattern*"
Find a file, by name, anywhere on disk (faster than find) locate locate {filename-or-path}
Find a file, by name / name pattern, anywhere, using rg rg rg --files --no-ignore --glob {glob_pattern} {search_path_root}

- You can use / for search_path_root to search across entire disk
Find a file, by contents, using grep or rg (ripgrep) grep, rg grep -r {pattern} {dir}
rg {pattern} {opt_dir}
rg -e {pattern_a} -e {pattern_b} {opt_dir}
Find a directory, based on a glob / pattern find find {search_path} -type d -path {pattern}

Example:
find ./.venv -type d -path '*/site-packages'

💡 You can use the 2>/dev/null redirection trick to suppress errors with the above commands (which might show up when trying to search across files with permission issues)
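
For example:

# Search the whole disk, hiding "Permission denied" noise
find / -name "*.conf" 2>/dev/null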

Executing Commands Across Files

Using find with -exec:

find . -name "*.txt" -exec stat {} \;

# If your command is complicated, or involves pipes,
# you can use `sh` as another layer to execute the command
find . -name "*.jpg" -exec sh -c "stat {} | tail -n 1" \;

Using built-in shell glob expansion

Make sure to not quote the glob pattern (it won't expand if you do that)

echo "Markdown content, in this directory"
cat ./*.md

# or with a loop
for file in ./md/snippets/*.md; do
	echo "$file"
done

If you want to be 100% certain that you are only executing your command against X number of files, you can use something like this:

file_count=$(ls PATTERN | wc -l)
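
As a minimal sketch (the glob, expected count, and follow-up command are all hypothetical):

file_count=$(ls ./*.md | wc -l)
if [[ $file_count -ne 3 ]]; then
	echo "Expected exactly 3 files, found $file_count - aborting" >&2
	exit 1
fi
cat ./*.md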

Executing Commands Across Directories

Using find


# Find + exec

find ./images/* -type d -exec sh -c 'echo "Folder size = $(du -sh $1)"' sh {} \;

# The -exec method is preferred over this for robustness
for dir in $(find ./images/* -type d); do
	echo "Folder size = $(du -sh "$dir")"
done

Using built-in shell glob expansion

Globbing for sub-directories is really only useful / easy for top-level directories.

# Only top-level directories
for dir in ./images/*/; do
	echo "Folder size = $(du -sh "$dir")"
done

Count Matching Files

For a faster file count operation, you can use find's printf option to replace all filenames with dots, and then use wc character count to count them. Like this:

find {PATH} {FILTER} -type f -printf '.' | wc -c

Here is an example, to count all the .md Markdown files in a /docs directory:

find ./docs -iname "*.md" -type f -printf '.' | wc -c

Credit

Syncing Files

Rsync

rsync

  • Example: rsync -az -P . joshua@domain.com:/home/joshua/my_dir
    • -a = archive mode (recursive, copy symlinks, times, etc - keep as close to original as possible)
    • -z = compress (faster transfer)
    • -P = --partial + --progress (show progress, keep partially transferred files for faster future syncs)
  • Use --filter=':- .globfile' to use a globlist as exclusions
    • Use --filter=':- .gitignore' to reuse gitignore file for exclusions
    • You can use --filter multiple times and they will be combined
  • Use --exclude to exclude single files, directories, or globs, also allowed multiple times
  • Use --include to override filter
  • Use --dry-run to preview

To sync a single file, here is a sample command:

rsync -vz --progress ./test.txt joshua@domain.com:/home/joshua/my_dir

If you need to customize the SSH options used with rsync, to pass in a specific key file for example, you can use -e to specify the exact command to use:

rsync {other_options} -e "ssh -i $MY_KEY_FILE" ./test.txt joshua@domain.com:/home/joshua/my_dir

To include only certain files from a directory / override an ignore glob, you can do something like adding the directory first, then including the subfiles, then excluding the directory contents via glob. Like this:

rsync {other_options} \
  --include="subdir/" \
  --include="subdir/file_a.txt" \
  --include="subdir/file_b.txt" \
  --exclude="subdir/*"

Show progress bar / auto-update / keep console updated:

Great SO Q&A

Find executable paths

If you are looking for the bash equivalent of Windows' "where" command, to find how an executable is exposed, try using which. E.g. which node.

Symlinks

You can use the ln command (ss64) to create symbolic links.

# Works for both files and directories
ln -s {realTargetPath} {symbolicFilePath}

# If you need to update an existing symlink, you need to use "force"
ln -sf {realTargetPath} {symbolicFilePath}

In general, it is best / easiest to always use absolute paths for the targets.

If you want to delete the symlink, but not the original, just make sure you operate on the symlink path, e.g. rm {symbolicFilePath}.

You can use ls -la to list all files, including symlinks.

If you just want to see resolved symlinks, you can use grep - ls -la | grep "\->"

If you want to inspect a specific symlink, use readlink -f {SYMLINK}

On macOS, install coreutils, and use greadlink -f instead

If you are running a shell inside of a symlinked directory, $PWD will reflect the virtual symlink path, not the true resolved path. However, even from within a symlinked directory, you can continue to use readlink -f against sub-paths, to get the true path.

readlink -f $PWD

Networking

💡 An excellent package to get for working with network stuff is net-tools. It is also what contains netstat, which is great for watching active connections / ports.

cURL

  • Good cheatsheets
  • Show headers only
    • curl -I http://example.com
  • Search for something
    • You can't just pipe directly to grep or sed, because curl sends progress info to stderr, so use the --silent flag:
      • curl --silent https://joshuatz.com | sed -E -n 's/.*<title>(.+)<\/title>.*/\1/p'
        • Prints: Joshua Tzucker&#039;s Site
  • Download a file
    • Specify filename: curl -o {New_Filename_Or_Path} {URL}
    • Reuse online filename: curl -O {URL_with_filename}
  • Follow redirects: -L
    • Useful for downloading DropBox links (or else you get an empty file):
      • curl -L -o myfile.txt https://www.dropbox.com/s/....?dl=1
  • Don't show progress / quiet: --silent
    • Note: If you need to suppress all stdout (including the response), use --silent -o /dev/null
  • With POST data: -X POST YOUR_URL -d '{}'
    • Binary data: -X POST YOUR_URL --data-binary "@FILE_PATH"
  • With auth: -u USERNAME:PASSWORD

cURL - Exit Codes and Checking Status Codes

A common issue with cURL is that you are mixing multiple things that can be used to represent success - the actual output of the request (via stdout), the HTTP status code, and the exit code of running the command.

For checking the status code, you will have to suppress the printing of the response and tell cURL just to print the status code:

curl_status_code=$(curl --silent --write-out "%{http_code}" -o /dev/null "YOUR_URL")

Now, you could manually check this status code, but it should also be noted that you can use --fail with curl if you just want to know if a request failed or not:

# Executed in sub-shell to let script continue
(curl --silent --fail -o /dev/null "YOUR_URL")
if [[ $? != 0 ]]; then
	echo "Request failed!"
fi

Networking - Checking DNS Records and Domain Info

Overview post of a few different methods

  • dig
    • Default (A records + NS): dig {DOMAIN}
    • All: dig {DOMAIN} ANY
    • Specific type: dig {DOMAIN} {RECORD_TYPE}
      • dig joshuatz.com cname
  • host
    • Default (describes records): host {DOMAIN}
    • All: host -a {DOMAIN}
    • Specific type: host -t {RECORD_TYPE} {DOMAIN}
  • nslookup
    • (might not be available on all distros, but useful since this works on Windows too. However, nslookup also seems less reliable...)
    • Default (A record): nslookup {DOMAIN}
    • All: nslookup -d {DOMAIN}
      • Equivalent to nslookup -t ANY {DOMAIN}
    • Specific type: nslookup -querytype {RECORD_TYPE} {DOMAIN}
      • OR: nslookup -t {RECORD_TYPE} {DOMAIN}

Networking - How do I...

  • Resolve DNS hostname to IP
    • getent hosts HOST_NAME | awk '{ print $1 }'
    • Credit goes to this S/O
  • Download a file and save it locally with bash?
    • You can use wget or cURL (S/O):
      • wget -O {New_Filename_Or_Path} {URL}
      • curl -o {New_Filename_Or_Path} {URL}
    • If you want to just use the name of the file as-is, you can drop -O with wget
    • If you want to get the contents of the file, and pipe it somewhere, you can use standard piping / redirection. E.g., curl ifconfig.me > my_ip_address.txt
  • Transfer files across devices using bash?
    • You can transfer over SSH, using the scp command
      • Example: scp my-file.txt joshua@1.1.1.1:/home/joshua
      • Example: scp -i ssh_pkey my-file.txt joshua@1.1.1.1:/home/joshua
      • Example: scp -rp ./my-dir joshua@1.1.1.1:/home/joshua/my-dir
    • Another good option is rsync, especially for frequent syncs of data where some has stayed the same (it optimizes for syncing only what has changed).
    • Alternatively, you could use cURL to upload your file, to a service like transfer.sh, and then cURL again on your other device to download the same file via the generated link
  • Find the process that is using a port and kill it?
    • Find PID:
      • Linux: netstat -ltnp | grep -w ':80'
      • macOS: sudo lsof -i -P | grep LISTEN | grep :$PORT (credit) (you often don't need sudo with this)
        • Remove the grep LISTEN filter if you just want to check for general traffic
    • Kill by PID: kill ${PID}
      • With force: kill -SIGKILL ${PID}
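
Putting the find + kill steps together, a minimal sketch (port 3000 is a hypothetical example; lsof -t prints just the PID):

pid=$(lsof -ti :3000)
[[ -n "$pid" ]] && kill $pid
# If the process refuses to exit, force it:
# kill -SIGKILL $pid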

Handy Commands for Exploring a New OS

Command What?
printenv Prints out variables in the current environment

For a more readable list, use:
printenv | sed -E "s/^/\t/g"
echo $PATH | tr ':' '\n' Print the system path, with entries separated by line.
compgen -c | sort List available commands, sorted
ps aux List running processes
uname -a Display system OS info (kernel version, etc.)
lsb_release -a Display distribution info (release version, etc.)
cat /etc/os-release Display distribution info (release version, etc.). Useful on custom kernel distros like linuxkit, where uname -a doesn't tell you much.
apt list --installed List installed packages
crontab -l or less /etc/crontab View crontab entries
lshw View summary of installed hardware
df Show how much disk space is free / left.
dpkg --print-architecture or uname -p Show CPU architecture type (amd64 vs arm64 vs i386, etc.)
dpkg -l | less List installed packages
service --status-all List services

x86_64 == amd64

Built-in Text Editors

If you are inside of a new environment, and are not sure which text editors are installed / available, here is a list you can try:

  • nano
  • vim
  • vi
  • emacs

Get Public IP Address

Easy mode: curl http://icanhazip.com

Lots of different options out there.

Echoing out Dates

The main command to be familiar with is the date utility.

You can use date +FMT_STRING to specify the format to apply to the output.

Common Formats:

Command What Sample
date Prints current date/time in %c format Sat Nov 28 03:56:03 PST 2020
date -u +"%Y-%m-%dT%H:%M:%SZ" Prints current date, a full ISO-8601 string 2020-11-28T12:11:27Z
date +%s Seconds since epoch 1606565661

Get Date as MS Since Epoch

If you don't actually need the full precision of milliseconds, but need the format / length, you can use: date +%s000

If you really need as-close-to-real MS timestamps, you can use any of these (some might not work on all systems):

  • date +%s%3N
  • date +%s%N | cut -b1-13
  • echo $(($(date +%s%N)/1000000))

Above solutions were gathered from this S/O question, which has a bunch of great responses.

You could also always use node -p "Date.now()" if you have NodeJS installed.


User Management

Adding or Modifying Users

Use adduser {username} to add a new (non-root) user.

If you want to create a new user, but also grant them sudo / admin privileges, you can either:

  • Add to sudo group while creating
    • useradd --groups sudo {username}
      • OR:
    • adduser {username} --ingroup sudo
  • Create user first, then add to sudo group
    1. Create user:
      • adduser {username}
        • OR:
      • useradd {username}
    2. usermod -a -G sudo {username}

💡 Note: The above commands could also be used for adding to groups other than sudo - just swap out sudo with the group you want to use

🚨 Warning: Creating a new user will not automatically grant them SSH access. See SSH Notes for details.

The adduser {USER} {GROUP} syntax only works if the user already exists.

Add User to Group

usermod -a -G groupname username

(also see above section(s))

Listing User Groups

You can use groups to list all groups you are a part of, or use groups {USER} for a specific user.

For listing all groups on a system, you might be able to use less /etc/group or getent group (see more here).

Deleting a User

Use userdel {USERNAME} to delete a user. Optionally, pass the -r flag to also delete their home directory.


Process / Task Management

  • Find process details by PID
    • ps -p {PID}
  • Find process by command (not process name)?
    • Get all: ps aux | grep "my_search_string"
      • Note: aux is not preceded by - because these are BSD style options
    • Slightly nicer, if you are just looking for PID and uptime: ps -eo pid,etime,command | grep "my_search_string"
    • For both of the above methods, you probably want to append | grep --invert "grep" to the very end to filter out the process generated by the search itself

Subshells and Forking

If you want to run a command in a subshell, the easiest way is to wrap it in parentheses. For example:

echo $PWD # /joshua

# Execute in subshell
(cd subdir && echo $PWD) # /joshua/subdir

# Even though previous command moved to a subdirectory, we are still in parent
# because it was executed in subshell
echo $PWD # /joshua

You can also use things like sh -c, or $SHELL -c to be more explicit about spawning a shell as a different process. For example:

MY_VAR=1
echo "MY_VAR = $MY_VAR"
# > MY_VAR = 1
$SHELL -c 'echo "MY_VAR = $MY_VAR"'
# > MY_VAR = (not set)

For a longer string, it can be ergonomic to use a heredoc, and use -s to read from stdin:

$SHELL -s << EOF
echo "line 1"
echo "line 2"
EOF

Killing a Process After a Delay

On most *nix systems, you can use the timeout command to run a process for a given maximum amount of time:

timeout TIMEOUT_SECONDS LONG_TASK

# If the process might fail, but you want to always continue
(timeout TIMEOUT_SECONDS LONG_TASK; exit 0)

If timeout is not available, here is an example of an alternative approach:

# Spawn long running task as child process, and store PID of process as variable
(LONG_TASK) & pid=$!
# in the background, sleep for preset duration before killing task via stored PID
(sleep TIMEOUT_SECONDS && kill -9 $pid) &

📄 Relevant StackOverflow: How to kill a child process after a given timeout in Bash?


Watching and Live Output

The most efficient form of "watching" involves tracking the actual inputs to a command and then only re-running when those inputs have changed. For file-based commands, this can be done with a utility that hooks into the OS-level file tracking system - e.g. the inotifywait command for wrapping inotify usage.

If you don't need peak efficiency though, a quick solution is to use the watch command to re-run a given command on a preset interval. E.g.:

# Print the date and time every second
watch -n 1 "date"

Note that the watch command doesn't process colored output by default, so use -c to tell it to process ANSI color sequences. In addition, you might have to modify the actual command you are running to help it work with a non-interactive environment:

# Show colored diff output every second
watch -n 1 -c "git -c color.ui=always diff --staged"

# Show just diff stats
watch -n 2 -c "git diff --staged --stat | tail -n 1"

Session, Window, and Screen Management

As an alternative to Screen, or tmux solutions, you might want to check out a task execution queuing and management system, like pueue

Screen

If you need to manage multiple sessions, which you can leave and resume at any time, screen is the usual go-to program.

Screen Docs: linux.die.net, SS64

Command What it Does
screen -S {REF} Create a named session
screen -ls List active sessions
screen -d -r {REF} Detach, and then attach to existing session
screen -r {REF} Attach to existing session
screen -XS {REF} quit Kill a different session
echo $STY View current session name
CTRL + a, :, sessionname {NEW_NAME} Rename current session (omit {NEW_NAME} to view it)
CTRL + a, d Detach screen from terminal (i.e., leave without stopping what is running)
CTRL + a, k Kill the current screen / session (with confirmation)

tmux

Moved to separate cheatsheet


Misc

  • How to keep a shell open after a command / script?
    • Use exec $SHELL.
    • E.g.: bash -c 'cd /tmp/my-dir; exec $SHELL'
  • What do you call the section that shows up before you enter text?
    • The prompt. On both bash and zsh it can be referenced with $PS1

Troubleshooting

  • Input has stopped appearing as you type it
    • This can happen for a number of reasons. The quick fix is usually to use reset or stty sane.
  • Echo keeps evaluating a variable, when I meant to just print it with variable substitution
    • Check for backticks, or other special unescaped characters that could introduce an eval situation
  • You keep getting the No such file or directory error, but only when assigning to a variable
    • Make sure you don't accidentally have a leading $, like $MY_VAR=MY_PATH
  • Stale autocomplete (aliases, functions, etc.)
    • Try sourcing your main shell config (e.g. source ~/.zshrc)
    • Try these suggestions
      • On ZSH, I have found unfunction _myFunc && compinit to work the best
  • Using source exits your shell or makes the prompt disappear (PS1)
    • Check if set -e was used (or -e in the shebang). With source (or .) this will cause the parent shell to exit if the script causes an error. To get around this, you can always execute the script (or a function from it) with || true appended.
    • For the most robust approach, wrap in a subshell: (source ./my-script.sh || true)
  • Running a command exits your shell or makes the prompt disappear
    • See above for if this happens when using source or .
      • If the goal is to run a shell script without exiting, just call it directly instead of sourcing; that will let you use set -e without exiting the terminal
        • E.g. ./my_script.sh instead of . ./my_script.sh or source ./my_script.sh
    • For functions, make sure you haven't accidentally used exit when you meant to use return
    • Make sure you didn't accidentally use PROMPT as a variable name! (since $PROMPT is what the shell uses for prompt display)
  • Get command not found, but executable is definitely in PATH
    • Make sure you have added the directory the executable resides in to your PATH, not the path of the executable itself

Debugging Shell Scripts

  • The easiest way (in terms of upfront cost / setup) to start troubleshooting shell is to utilize print debugging, in the form of using set -x. The downside is that this produces a lot of output.
    • You can always wrap just the code of interest with this, as opposed to the entire script file - see the sketch after this list
  • For debugging specific values, you can use declare -p to print the name and type of a variable.
    • For example, declare -p my_array should print something like declare -a my_array=() (bash; zsh shows typeset -a my_array=( ))
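
For example, to trace just one region of a script:

set -x # start printing each command (with expansions) before it runs
my_complicated_command --with-args # hypothetical command
set +x # stop tracing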

Profiling Shell Scripts and Commands

Timing Operations in Shells / Using Timers

The easiest way to profile something in your shell is to use the built-in time command: time {SOME_COMMAND}.

E.g.:

time sleep 4
# > sleep 4  0.00s user 0.00s system 0% cpu 4.010 total

Another way to time how long something takes in a shell is to use the built-in SECONDS variable. This is automatically set to 0 on every new shell and auto-incremented every second, but you can also manually reset to 0 at any point, for an easier timing calculation. E.g.:

sleep 2
SECONDS=0
sleep 4
# Note: on macOS, use `gdate` instead of `date`
printf "Last sleep operation took %s" "$(date +%T -d "1/1 + $SECONDS sec")"
