Joshua's Docs - Bash / Shell - Cheatsheet

Resources

  • SS64 Bash Reference - Docs
  • The Bash Hackers Wiki - Docs / Wiki / Quick Ref
  • Wooledge / GreyCat: Bash Reference Sheet, Full Bash Guide - Cheatsheet
  • DevHints: Bash Cheatsheet - Cheatsheet
  • LinuxIntro.org: Shell Scripting Tutorial - Cheatsheet / Quick Reference
  • ExplainShell.com (breaks down any given command and explains what it does) - Interactive Tool
  • TLDR Sh - Simplified man pages, open-source, which you can read online or access directly in the terminal
  • CompCiv: Bash Variables and Command Substitution - Guide to variables, interpolation with strings, and related features
  • man7: Linux man Pages - Docs

Formatting and Linting

Checkout shellcheck for static analysis and mvdan/sh for parsing and formatting.

Configuration

Special Shell / Bash Files:

File - Conventional Usage:

  • ~/.bash_profile (or ~/.profile) - Store environment variables, to be loaded once and persisted, modify $PATH, etc. Also typically contains the code to load .bashrc.
    • Important: It is only read & executed for interactive login shells, meaning forks / child shells will not reload it. Thus, use this file for things you want to load once (like environment variables), but not for things that should load every time (like aliases and functions).
  • ~/.bashrc - Store aliases, functions, and pretty much anything custom, OR load those customizations from external files via source. This file is itself executed via source, automatically by bash.
  • ~/.bash_aliases - Store aliases, to be loaded into every new shell
  • ~/.bash_prompt - For customizing the shell itself (appearance, etc.)
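The conventional hand-off from .bash_profile to .bashrc can be sketched roughly like this (a minimal, illustrative snippet, not pulled from any particular distro's defaults):

```shell
# ~/.bash_profile - minimal sketch: put once-per-login things here,
# then hand off to ~/.bashrc for per-shell customizations
export PATH="$HOME/bin:$PATH"

# Load .bashrc if it exists, so login shells get aliases/functions too
if [ -f "$HOME/.bashrc" ]; then
	source "$HOME/.bashrc"
fi
```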

This page from Baeldung explains some of the differences between various bash startup files in greater detail than above.

If you use zsh instead of bash / sh, most of these files are not actually read by default. If you are using Oh My Zsh, you can auto-load any file ending in .zsh by placing it (or symlinking it) within the $ZSH_CUSTOM directory. If you are not using Oh My Zsh, or just want something more custom, you can have zsh read these files by adding lines to your ~/.zshrc that load them. For example, to load .bash_aliases, you could add:

[ -f ~/.bash_aliases ] && source ~/.bash_aliases

Or, for a slightly cleaner approach, store the path as a variable first, so it is not repeated.

Dotfiles

See my cheatsheet: Dotfiles

Aliases

To create an alias, use the alias command:

alias alias_name="actual_command_that_executes"

For example, if we have some favorite flags to use with ls:

alias list="ls -halt"

If you need an alias that accepts arguments and then passes them to the middle of another command, you are better off writing a function. There are some ways to accomplish this with just aliases, but they are less straightforward.
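As a sketch of that function-over-alias advice, here is a hypothetical mkcd helper; the argument needs to land in the middle of a compound command, which an alias cannot easily do:

```shell
# Hypothetical example: an alias can only tack arguments onto the end of a
# command, but this function places its argument ("$1") into the middle of
# a compound command (mkdir, then cd).
mkcd() {
	mkdir -p "$1" && cd "$1"
}
```

Usage: `mkcd some/new/dir` creates the directory (including parents) and changes into it.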

Functions

# Simplest form
my_function() {
	# body
}

# You can use the function keyword, but don't have to and this is less portable
function my_function() {
	# body
}

To execute a function, call it by name (no parentheses) after the file defining it has been read with source. E.g.:

source custom_functions.sh
my_function

Processing User Input

Confirmation Prompts

There are multiple ways to do shell confirmation prompts (e.g. Continue? Yes / No), but here is a somewhat standard approach:

while true; do
	read -rp "Continue? [Yy]es, [Nn]o? " yn
	case $yn in
		[Yy]* ) break;;
		[Nn]* ) exit;;
	esac
done
echo "You made it through!"

You can trade break and exit for the various actions you want to perform, but keep in mind you will need break at some point to continue on with the script and exit the loop.

The purpose of using the while true loop is that it covers the edge-case where the user types something that matches neither of the two cases - it traps them until they do.

Processing Flags, Options, and Arguments

Whether you are receiving arguments to a shell script itself, or passing to a function, there are a few common tools for parsing arguments and flags within bash. The popular solution is the getopts command. The common pattern for usage looks something like this:

# getopts OPTSPEC VARIABLE [arg]
# sometimes `OPTSPEC` is called `OPTSTRING`

# d and v are boolean flags; "f:" means -f expects a value (captured via OPTARG)
while getopts ":dvf:" opt; do
	case "${opt}" in
		d) DEBUG=true ;;
		v) VERBOSE=true ;;
		f) FILE="${OPTARG}" ;;
		\?)
			echo "Invalid option: -${OPTARG}" >&2
			;;
	esac
done

The leading : in the OPTSTRING of the above example suppresses the built-in error reporting for invalid flags. Leave it out if you don't want to suppress these.

🤔 getopts vs getopt? Contentious topic, but getopts is built-in, while getopt is not. Unfortunately getopts does not support long argument names (easily), but getopt does. This post summarizes some more differences.

If you are passing arguments and/or flags to a function within a shell script, make sure you call the function like myFunction "$@".
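A minimal sketch of that pattern (the function name and echo format are illustrative):

```shell
# Forward all script arguments into a function; the quotes around "$@"
# preserve arguments that contain spaces.
print_args() {
	echo "Received $# argument(s)"
	for arg in "$@"; do
		echo "- ${arg}"
	done
}

print_args "$@"
```

Calling the script as `./my_script.sh "one two" three` will report 2 arguments, since the quoting is preserved end-to-end.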

getopts: Parsing Long Options without using getopt

As previously mentioned, although it is nice that getopts is "built-in", it doesn't support parsing long options (e.g. --file instead of -f). However, there are some workarounds that don't require getopt.

Example of manual parsing code - using while, $#, and shift
VERBOSE=false
while [[ ! $# -eq 0 ]]
do
	case "$1" in
		--verbose|-v)
			VERBOSE=true
			;;
		# You could leave off this case if you want to allow extra options
		*)
			echo "invalid option ${1}"
			;;
	esac
	shift
done

Kudos to Jon Almeida's blog post and this StackExchange answer for pointing in the right direction. This is also similar code to that produced procedurally by Argbash.

Example of manual parsing code - using for and do
for var in "$@"
do
	echo "var = ${var}"
done

The below code is very similar, but relies on the fact that a bare `for arg` (with no `in` list) is evaluated like `for arg in "$@"`:

for arg
do printf 'arg = %s\n' "${arg}"
done

Current directory:

echo $PWD

Including the "Hash-Bang" / "shebang"

#!/bin/bash
  • ^ Should go at the top of sh files.

Sometimes you will see flags included as part of the shebang. For example, you can use -e (errexit) to have the script exit immediately if a command fails:

#!/bin/bash -e

For portability, this is the preferred format:

#!/usr/bin/env bash
set -e

Commenting

# My comment

Logic / flow

📄 Wooledge Guide

Test

Before using advanced branching logic, you should know that the shell has a built-in test command - it evaluates a conditional expression and returns an exit status (0 for true / success, non-zero for false) that can be used in logic flows. Simply enclose the expression / condition in brackets:

[ check-this-condition ]

Or use double brackets, the newer bash-specific version (generally recommended):

[[ check-this-condition ]]

There are lots of different conditionals you can test against.

For example:

  • Is variable set (has value)?
    • [[ -n $MY_VAR ]]
  • Does file exist?
    • [[ -e file.txt ]]
  • Does directory exist?
    • [[ -d ./dir/ ]]

To invert / negate the test expression, use ! (exclamation mark).

For negation, where you place the ! matters. Placed outside the brackets, as in ! [[ condition(s) ]], it negates the entire bracketed clause after it is evaluated. Placed inside, it negates individual parts of the clause before the whole is evaluated. This mirrors how negation interacts with parentheses in most programming languages.
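A quick sketch of the two placements (the file name is just an illustrative value, chosen so it should not exist):

```shell
FILE="definitely_missing_file.txt"

# Negation outside the brackets: inverts the whole test's exit status
if ! [[ -e $FILE ]]; then
	echo "negated outside the brackets"
fi

# Negation inside the brackets: inverts that individual condition
if [[ ! -e $FILE ]]; then
	echo "negated inside the brackets"
fi
```

For a single condition the two are equivalent; the difference only shows up with compound tests like `[[ ! A && B ]]` vs `! [[ A && B ]]`.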

If / Else

Great guides

Case

📄 case guide from Bash-Hackers

📄 case guide from Wooledge

Important: For default usage (i.e. ;; endings, not ;;&), case stops matching patterns as soon as one succeeds
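A minimal case sketch showing first-match-wins with ;; (the values are illustrative):

```shell
fruit="apple"
case "$fruit" in
	a*)     echo "starts with a" ;;
	apple)  echo "exact match - never reached, the first pattern already won" ;;
	*)      echo "default" ;;
esac
# Prints: starts with a
```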

Ternary Operator / Conditional Expression

In many languages, there is support for something called the ternary operator, or conditional operator. It usually looks like this (this is not bash, just an example):

// JavaScript:

// Do something based on ternary
userIsAdmin ? print('Welcome Admin!') : print('Welcome!')

// Assign based on ternary
const userType = userIsAdmin ? 'Admin' : 'User';

In bash, this can be accomplished by the syntax of TEST && echo IF_TRUE || IF_FALSE. Like this:

$user_is_admin && echo "Welcome Admin!" || echo "Welcome!"

Note: This only works as long as the thing you want to do if the conditional is true always exits with exit code 0 (success)

Helpful S/O's for understanding the above: 1, 2

For assignment, just wrap the entire expression in a command substitution block:

user_type=$($user_is_admin && echo "Admin" || echo "User")

# You can use more advanced conditional checks

LOG_OUT=$([[ -n $LOG_PATH ]] && echo $LOG_PATH || echo "/dev/stdout")
echo "Starting program..." >> $LOG_OUT

Short Circuit Assignment (also for defaults)

In certain languages, you can use short circuit assignments to assign the value of a variable, and fallback (aka default) if it is undefined. Something like this:

const name = userInput || "New User"

In bash, there are two main ways to accomplish this kind of task. The first is with shell parameter expansion:

name=${user_input:-"New User"}

# Or, to assign the default back into the same variable in-place
: ${user_input:="New User"}

The second way is to use a conditional expression, although this is not as concise:

name=$([[ -n $user_input ]] && echo $user_input || echo "New User")

Double Pipe vs Double Ampersand, and Other Flow Operators

Quick reference:

  • && = Only execute right side if left side succeeds
    • Examples:
      • false && echo "this text will NOT show"
      • true && echo "this text will show"
  • || = Only execute right side if left side FAILS (non-zero exit code)
    • Essentially the inverse of &&
    • Examples:
      • false || echo "this text will show"
      • bad_command || echo "this text will show"
      • true || echo "this text will NOT show"
  • & = runs the left side in the background (a detached / forked process) and immediately continues with the right side, regardless of the success of either
    • Warning: This can be a hackish way to do things
    • Definitely do not use this if the second command depends on the output of the first
    • Examples:
      • slow_command_to_execute & echo "this will appear before the left side is done!"
      • true & echo "this text will show"
      • false & echo "this text will also show"
    • If you need to kill all spawned processes on exit (e.g. SIGINT / CTRL + C), you can use a trap, such as: trap 'kill 0' SIGINT
  • ; = Execute both side, sequentially, regardless of success of either
    • Examples:
      • true; echo "this text will show"
      • bad_command; echo "this text still shows"
    • Since this doesn't work in many Windows environments, an easy workaround to get the same behavior is to replace CMD_ONE; CMD_TWO with (CMD_ONE || true) && CMD_TWO.
      • This exhibits the same behavior, since CMD_TWO will also synchronously execute after CMD_ONE, regardless of its success
      • Great for NPM scripts
      • Nice writeup
  • | = Not for logic flow - it pipes the stdout of the left side into the stdin of the right side (see Piping and redirection)

Arrays and Loops

https://opensource.com/article/18/5/you-dont-know-bash-intro-bash-arrays
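Since the section above is link-only, here is a quick sketch of the basics it covers (the values are illustrative):

```shell
# Declare an indexed array
fruits=("apple" "banana" "cherry")

echo "${fruits[0]}"      # first element: apple
echo "${#fruits[@]}"     # number of elements: 3

fruits+=("date")         # append an element

# Loop over elements - the quotes preserve elements containing spaces
for fruit in "${fruits[@]}"; do
	echo "fruit: ${fruit}"
done
```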

Waiting Until Something Is Ready

A common DevOps task is delaying a step in a workflow until a process / file / endpoint is ready. For instance, if you are developing server code, you might launch a live instance of it in a CI/CD workflow and need to delay your test command until the server is ready to receive requests.

Depending on what you are waiting for, there might be more efficient ways (for example, using inotify for files), but the one-size-fits-all approach is to use a loop that does not exit / continue until your condition is reached, usually combined with a small delay (like sleep 1).

In many cases, you might want to add in a timeout / maximum iteration condition. Especially if this is for a CI/CD environment that can rack up large bills if left running continuously 😉. The below examples include this, doing so with a while loop approach:

MAX_LOOPS=10
LOOPS=0
while ! [[ -e my_file.txt ]] && (($LOOPS < $MAX_LOOPS)); do
	LOOPS=$(( LOOPS + 1 ))
	>&2 echo "File does not exist. Waiting... loop #${LOOPS}"
	sleep 1
done

# At this point, either test was successful, or MAX_LOOPS was reached
# If this is important for next step, re-test for correct exit code
[[ -e my_file.txt ]]

And the same with an until loop (note the inverted test logic):

MAX_LOOPS=10
LOOPS=0
until [[ -e my_file.txt ]] || (($LOOPS >= $MAX_LOOPS)); do
	LOOPS=$(( LOOPS + 1 ))
	>&2 echo "File does not exist. Waiting... loop #${LOOPS}"
	sleep 1
done

# At this point, either test was successful, or MAX_LOOPS was reached
# If this is important for next step, re-test for correct exit code
[[ -e my_file.txt ]]

Grep

  • In general, if you are a RegEx power user, you will probably find sed much preferable. Or awk.

    • grep can actually be a bit of a pain when trying to do things like use capture groups (1, 2)
  • Cheatsheets:

  • (Common) Flags:

    Flag Description
    -E Extended regex
    -o Only output the matching part of the line
    -P Treat pattern as a Perl-style regular expression
    -i Ignore case
    -e Pass explicit patterns, multiple allowed
    -n Show line numbers for matches
    -A {num} Show {num} lines after each match
    -B {num} Show {num} lines before each match
    -F Treat input as fixed strings, i.e. don't treat it as a regex pattern (useful if you are looking for an exact match and your search string contains regex chars like .)

Grep - Print Only Matching

The -o flag will do this.

On some systems, it also adds line breaks, even with a single result. For removing the line break for single result outputs:

grep -o '{pattern}' | tr -d '\n'

# Example
echo hello | grep -o '.ll.' | tr -d '\n'
# prints 'ello'

sed

  • Cheatsheets

  • Common flags

    Flag Description
    -n Silent; suppress printing of the pattern space
    -r (or -E on some systems) Use extended regex - I always prefer this over the default basic regex
  • Syntax

    • print output
      • echo $'hello world\nLine Two\nGoodbye' | sed -E -n '/Line.+/p'
        • Prints "Line Two"
    • substitute
      • echo $'hello world\nLine Two\nGoodbye' | sed -E 's/Line.+/my substitution/'
        • Prints:
          • hello world
            my substitution
            Goodbye
      • Example: Replace space with newline
        • echo 'item_a item_b' | sed -E 's/ /\n/g'
        • Prints:
          • item_a
            item_b
      • Example: Replace null character with new line
        • sed -E 's/\x0/\n/g'
    • Print only a specific capture group
      • This is actually a little complicated. Basically, you have to substitute the entire input with the back-reference of the capture.
        • sed -E -n 's/.*(My_Search).*/\1/p'
      • In action:
        • echo $'hello world\nLine Two\nGoodbye' | sed -E -n 's/^Line (.+)$/\1/p'
          • Prints:
            • "Two"

Warning: sed on your system might have limitations - for example, sed does not support Perl-style shorthand classes (the way grep -P does), so you will need to use [0-9] instead of \d for digits.

Capturing and Executing Output

If you simply want to "capture" the value output by a script and store it as a variable, you can use substitution. See "Storing into a variable".

If you want to execute the output of a command as a new command / script, you can use the (dangerous) eval command, plus substitution: eval $( MY_COMMAND ).

Here is a full example:

(echo echo \"in test.txt\") > test.txt
eval $( cat test.txt )
# "in test.txt"

Capturing Input Arguments in Bash Scripts

To capture arguments (aka positional parameters) within a script, you can use $@, and $# for the number of arguments (these are a form of Special Parameters). Make sure to double-quote when using - e.g.:

# say_hi.sh
YOUR_NAME="$1"
echo "Hello $YOUR_NAME, your name has $(echo -n "$YOUR_NAME" | wc -m) characters in it"

# Run
./say_hi.sh Joshua
# > Hello Joshua, your name has 6 characters in it

You can use this to pass input arguments to a completely different program / process, which makes it handy for intermediate scripting.

Guide: Bash Hackers Wiki - Handling Positional Parameters

Piping and redirection

  • Piping VS Redirection
    • Simple answer:
      • Piping: Pass output to another command, program, etc.
      • Redirect: Pass output to file or stream
  • Pipe
    • |
    • echo 'hello world' | grep -o 'hello'
      • Prints hello
  • Redirection
    • >
    • echo "hello" > output.txt
    • For appends, use double - >>

Watching Output While Redirecting to File

If you want the stdout of a program to still show up in the terminal, but also want to send it to a file or pipe it elsewhere, tee is the tool you want to use.

Examples:

echo "foo" | tee ./output.log

# The above will omit stderr, so you have to use the stderr redirection trick if you want to capture both
stat file_not_exist 2>&1 | tee ./output.log

# You can use it with other commands, like `pbcopy` to copy to clipboard
echo "foo" | tee >(pbcopy)

Problems with piping

Piping, in general, is taking the stdout of one process to the stdin of another. If the process you are trying to pipe to is expecting arguments and doesn't care about stdin, or ignores it, piping won't work as you want it to.

The best solution for this is usually to use xargs, which reads stdin and converts the input into arguments which are passed to the command of your choice.

Or, you can use substitution to capture the result of the first part of the pipe and reuse it in the second.

See this S/O answer for details.

If the input you are passing contains special characters or spaces (such as spaces in a filename), take extra care to handle it. For example, see if the thing generating the input can escape it and null terminate the fields (e.g. git-diff --name-only -z), and then you can use the -0 or --null option with xargs to tell it to expect null terminated fields.


# Git
git diff --name-only -z | xargs -0 git-date-extractor

# Piping multiple files to a single command
find . -name '*.gif' -print0 | xargs -0 python process_bulk.py
ls | tr \\n \\0 | xargs -0 process_file.sh

# Same as above, but running the command over each file, using `-n1` to specify max
# of one argument per command line
find . -name '*.gif' -print0 | xargs -0 -n1 python process_single.py

# For find specifically, you can also skip xargs and use its built-in -exec option

Printing / Echoing Output

🚨 I would recommend getting familiar with special characters in Bash when working with outputting to shell; otherwise it can be easy to accidentally eval when you meant to just print something

Also see Escaping Special Characters.

Copying to Clipboard

There are a bunch of different options, and it largely depends on what you have available on your OS.

This S/O response serves as a good list.

On macOS, it is usually pbcopy. On Linux, usually xclip -selection c.

Understanding 2>&1

You see 2>&1 all over the place in bash scripts, because it is very useful. Essentially, it forces errors (stderr) to be piped to whatever value stdout is set to.✳

✳ = I'm greatly simplifying here. It's more complicated than that.

I'm not sure if this pattern has an official name, although it is very popular.

This has a few really handy and common uses:

  1. See both the output and the errors in the console at the same time
    • Often errors are routed to stderr and not shown in the console.
  2. Suppress errors
    • Since this forces errors to stdout, this has the side effect of suppressing them from their normal destination
      • However, they are still going to show up in stdout obviously. If you really want to suppress them entirely, use 2> /dev/null, which essentially sends them to oblivion
  3. Send both output and errors to file
    • If you redirect to a file before using 2>&1, then both outputs get sent to the file.
      • ls file-does-not-exist.txt > output.txt 2>&1
        • output.txt will now contain "ls: cannot access 'file-does-not-exist.txt': No such file or directory"
  4. Send both output and errors through a pipe
    • cat this_file_doesnt_exist 2>&1 | grep "No such file" -c

On a more technical level, Unix has descriptors that are kind of like IDs. 2 is the descriptor/id for stderr, and 1 is the id for stdout. In the context of redirection, using & + ID (&{descriptorId}) means copy the descriptor given by the ID. This is important for several reasons - one of which is that 2>1 could be interpreted as "send output of 2 to a file named 1", whereas 2>&1 ensures that it is interpreted as "send output of 2 to descriptor with ID=1".

So... kinda...

  • 2>&1
    • can be broken down into:
  • stderr>&stdout
    • ->
  • stderr>value_of_stdout
    • ->
  • stdout = stderr + stdout
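One consequence of the "copy the descriptor" behavior is that redirections are processed left to right, so ordering matters. A sketch (stat of a missing file is just a convenient way to produce stderr output):

```shell
# stdout goes to the file first, THEN stderr is pointed at stdout's current
# target (the file) - so both streams land in combined.log:
stat missing_file > combined.log 2>&1 || true

# stderr copies stdout's target (the terminal) BEFORE stdout is moved to the
# file - so the error still prints to the terminal, and only stdout hits the file:
stat missing_file 2>&1 > stdout_only.log || true
```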

💡 You can also use anonymous pipes for these kinds of purposes

Suppress errors

Make sure to see above section about how descriptors work with redirection, but a handy thing to remember is:

# Pretend command 'foobar' is likely to throw errors that we want to completely suppress
foobar 2>/dev/null

This sends error output to /dev/null, which basically discards all input.

Stderr Redirection - Additional reading

Checking for Errors

You can use $? to get the last exit status. Here are some examples.

Reminder: Anything other than 0 is an error ("non-zero exit code").

Using variables

Setting Variables

For setting variables, it depends on the variable type.

Normal variables:

VARIABLE_NAME=VARIABLE_VALUE

For this syntax, keep in mind that the key-pair can act like an environment variable if a command is immediately executed within the same process.

For example, MY_VAR=ABC printenv will show that MY_VAR has value ABC, but MY_VAR=ABC && printenv will not - it will show that MY_VAR is unset as an environment variable.
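A sketch that makes the difference visible by asking a child process what it sees (the variable names are illustrative):

```shell
# Prefix form: MY_VAR is exported only into the environment of that one command
MY_VAR=ABC bash -c 'echo "prefix form sees: $MY_VAR"'
# prints: prefix form sees: ABC

# Plain assignment creates an (unexported) shell variable, so a child
# process launched afterwards does not see it
MY_VAR2=ABC && bash -c 'echo "child sees: $MY_VAR2"'
# prints: child sees:
```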

Environment variables (persisted through session):

export VARIABLE_NAME=VARIABLE_VALUE

Escape spaces by enclosing VARIABLE_VALUE in double quotes

Also, see Environment Variables subsection

Reading Variables

Prefix with $.

Example:

MYPATH="/home/joshua"
cd $MYPATH
echo "I'm in Joshua's folder!"

If you want to expand a variable inside a string, you can also use {} (curly braces) around the variable to expand it.

Default / Global Variables

In addition to using printenv to see all defined variables, you can also find lists of variables that usually come default with either the system shell, bash, or globally:

Storing into a variable

How to store the result of a command into a variable:

  • There are two methods:
    • Command/process substitution (easy to understand)
      VARIABLE_NAME=$(command)
      • However, this doesn't always work with complex multi-step operations
    • read command (complicated) - works with redirection / piping, but beware: in bash, each side of a pipe runs in a subshell, so the variable is set only in that subshell and will not survive into your current shell
      echo "hello" | read VARIABLE_NAME
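That subshell caveat is worth sketching: in bash, the read side of a pipe runs in a subshell, so the piped form silently loses the value; a here-string avoids that:

```shell
# Pitfall: the assignment happens in the pipe's subshell and is then lost
echo "hello" | read LOST_VAR
echo "LOST_VAR is '${LOST_VAR:-}'"    # prints: LOST_VAR is ''

# A here-string keeps read in the current shell, so the value sticks
read KEPT_VAR <<< "hello"
echo "KEPT_VAR is '${KEPT_VAR}'"      # prints: KEPT_VAR is 'hello'
```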

Environment Variables

List all env values

printenv

Set an environment variable - current process and sub-processes

export VARIABLE_NAME=VARIABLE_VALUE

Set an environment variable - permanently

In order for an environment variable to be persisted across sessions and processes, it needs to get saved and exported from a config file.

This is often done by manually editing /etc/environment:

    1. Launch editor: sudo -H gedit /etc/environment
    2. Append key-value pair: VAR_NAME="VAR_VAL"
    3. Save

The difference between setting a variable with export vs without is similar to the difference in MS Windows between using setx vs just set -> export persists the value for child processes.

Global path

Inspecting the path:

echo $PATH

# Line separated
echo $PATH | tr ':' '\n'

# For permanent paths
cat /etc/paths

Modifying the path:

  • You can modify directly, with something like export PATH=$PATH:my/new/path
  • You can edit /etc/paths or add files to the path directory
  • In general, modifying the path can depend on OS and shell; here is a guide

Triggering / running a SH file

  • Make sure it is "runnable" - that it has the execute permission
    • chmod +x /scriptfolder/scriptfile.sh
  • Then call it:
    • /scriptfolder/scriptfile.sh

If you are having issues running from Windows...

  • MAKE SURE LINE ENDINGS ARE \n and not \r\n

Also, make sure you specify directory, as in ./myscript.sh, not myscript.sh, even if you are currently in the same directory as script.

Keeping file open after execution

Add this to very end of file:

exec $SHELL

Note: this will interfere with scripts handing back control to other scripts; ie resuming flow between multiple scripts.

Inlining and Executing Other Languages

If you want to mix shell and other scripting languages in the same executable file, one way to do so is to use heredoc strings to inline non-shell code and pass it to the right interpreter. For example, you could inline a NodeJS snippet like so:

echo "This is a line in a shell script"

# NodeJS
node << "EOF"
const { userInfo } = require('os');
console.log('User Info:', userInfo());
EOF

You don't have to quote the leading delimiter ("EOF" above), but if you don't, you will run into issues if your string contains $ (bash will try to parse as variables).

However, this isn't the cleanest approach as it doesn't work well with syntax-highlighting, linting, or type-checking tools, but is a nice tool to have for adding small code snippets without having to clutter your project or repository with tons of extra files.

If you are interested in cross-language script runners and/or task runners, you might want to look at things like just or maid. Also feel free to check out my section on task runners and script automation tools.

Strings

Escaping Special Characters

  • You can use single quotes for literal interpretation (prevent parsing of special characters within)
  • You can use a heredoc with a double quoted delimiter for literal interpretation (similar to single quotes)
    • E.g., start heredoc with << "EOF"
  • You can use a backslash (aka normal escape, \) for escaping within double quotes, etc.

Keep in mind that not all text-based commands handle special characters the same. For example, cat generally works better than echo for printing multi-line strings, etc.

Purposefully Printing Special characters (newline, etc)

You need to prefix with $ before string to use special characters.

Example:

  • echo 'hello\ngoodbye'
    • Prints:
      • "hello\ngoodbye"
  • echo $'hello\ngoodbye'
    • Prints:
      • "hello
        goodbye"

You can also use printf for linebreaks: printf '\n\n'

If you have a lot of line breaks, indents, etc., it will probably be easier to use a heredoc instead of composing the string manually with escapes:

cat << EOF
## To-Do List
- [ ] Laundry
- [ ] Order more coffee
- [ ] Empty the food waste
EOF

If your heredoc string contains special characters, like $, and you want to prevent special interpretation for the entire string, use double quotes around the leading delimiter, like: cat << "EOF". Otherwise use normal escaping methods (such as backslash, \).

Joining Strings

You can simply put strings together in variable assignment, like this:

FOO="Test"
BAR=$FOO"ing"
echo $BAR

echoes:

Testing

You can also use variables directly in quoted strings:

FOO="Hello"
BAR="World"
echo "$FOO $BAR"

# If the variable is immediately adjacent to text, you need to use braces
FOO="Test"
BAR="ing"
echo "${FOO}${BAR}"

Joining Strings with xargs

By default, xargs passes the piped input through as a separate argument, so echo joins it to the other arguments with a space. For example:

echo "Script" | xargs echo "Java"
# "Java Script"

If we want to disable that behavior, we can use the -I argument, which is really for substitution, but can be applied to this use-case:

echo "Script" | xargs -I {} echo "Java{}"
# Or...
echo "Script" | xargs -I % echo "Java%"
# Etc...

# Output: "JavaScript" - Success!

Converting to and from Base64 Encoding

Just use the base64 utility, which can be piped to, or will take a file input.

If you don't care about presentation, make sure to use --wrap=0 to disable column limit / wrapping
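A quick round-trip sketch (--wrap=0 and --decode are the GNU coreutils spellings; BSD/macOS flags differ):

```shell
# Encode - --wrap=0 keeps the output on a single line
encoded=$(echo -n "hello" | base64 --wrap=0)
echo "$encoded"
# prints: aGVsbG8=

# Decode it back
echo "$encoded" | base64 --decode
# prints: hello
```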

Skip Lines in Shell Output String

If you need to skip lines in output (for example, to omit a summary row), you can use:

tail -n +{NUM_LINES_TO_SKIP + 1}
# Or, another way to think of it:
# tail -n +{LINE_TO_START_AT}

# Example: Skip very first line
printf 'First Line\nSecond Line\nThird Line' | tail -n +2
# Output is :
#     Second Line
#     Third Line

To skip the last line:

head -n -1

Trim Trailing Line Breaks

There are a bunch of ways to do this, but the first answer provided is probably the best - using command substitution, since it automatically removes trailing newlines:

echo -n "$(printf 'First Line\nSecond Line\nThird Line, plus three trailing newlines!\n\n\n')"

You could also use the head -n -1 trick to remove the very last line.

If you want to remove all line breaks, you can use tr for an easier to remember solution:

printf 'I have three trailing newlines!\n\n\n' | tr -d '\n'

If you are getting trailing line breaks with the echo command, you can also just use the -n flag to disable the default trailing line break. E.g. echo -n "hello"

Generate a Random String

There is an excellent StackExchange thread on the topic, and most answers boil down to either using /dev/urandom as a source, or openssl, both of which have wide portability and ease of use.

  • /dev/urandom
    • From StackExchange
      # OWASP list - https://owasp.org/www-community/password-special-characters
      head /dev/urandom | tr -dc 'A-Za-z0-9!"#$%&'\''()*+,-./:;<=>?@[\]^_`{|}~' | head -c {length}
    • I've had some issues with the above command on Windows (with ported utils)...
  • OpenSSL
    • For close to length:
      • openssl rand -base64 {length}
    • For exactly length:
      • openssl rand -base64 {length} | head -c {length}

Security, Encryption, Hashing

Quick Hash Generation

If you need to quickly generate a hash, checksum, etc. - there are multiple utilities you can use.

  • sha256sum
    • Example: echo -n test | sha256sum
    • Example: cat msg.txt | sha256sum
  • openssl dgst (-sha256, -sha512, etc.)
    • Example: echo -n test | openssl dgst -r -sha256
    • Example: openssl dgst -r -sha256 msg.txt

I'm using -r with openssl dgst to get its output to match the common standard that things like sha256sum, Windows' CertUtil, and other generators use.

🚨 WARNING: Be really wary of how easy it is to accidentally add or include newlines and/or extra spacing in the content you are trying to generate a hash from. If you accidentally add one in the shell that is not present in the content, the hashes won't match.

There are even more solutions offered here.

How to generate keys and certs

  • SSH Public / Private Pairs: Using ssh-keygen (available on most Unix-based OS's, including Android)
    • You can run it with no arguments and it will prompt for file location to save to
      • ssh-keygen
    • Or, pass arguments, like -t for algorithm type, -f for filename, and -C for comment
      • ssh-keygen -t rsa -C "your_email@example.com"
    • Technically, the private/public keys generated by this can also be used with OpenSSL signing utils
  • The standard convention for filenames is:
    • Public key: {name}_{alg}.pub
      • Example: id_rsa.pub
    • Private key:
      • No extension: {name}_{alg}
        • Example: id_rsa
      • Other extensions
        • .key
        • .pem
        • .private
        • Doesn't really matter; just don't use ppk, since that is very specific to PuTTY
  • Standard Public / Private Pairs: OpenSSL

💡 The final text in a public key, which looks like username@users-pc, actually does nothing and is just a comment; you can set it on creation with -C mycomment if you like, or edit it afterwards. But again, no impact.

How to use Public and Private Key Signing

Generally, the most widely used tool for asymmetric keys with Bash (or even cross-OS, with Windows support) is the OpenSSL CLI utilities.

Here are some resources on how to use OpenSSL for public/private key signing:

Create new files, or update existing file timestamps

  • touch without any flags will create a file if it does not exist; if it does already exist, it updates the file's "last accessed" and "last modified" timestamps to now
    • touch "myfolder/myfile.txt"
  • If you want touch to only touch existing and never create new files, use -c
    • touch -c "myfile.txt"
  • Specifically update last accessed stamp of file
    • touch -a "myfolder/myfile.txt"
  • Specifically update "last modified" stamp of file
    • touch -m "myfolder/myfile.txt"
  • You can also use wildcard matching
    • touch -m *.txt
  • and combine flags
    • touch -m -a *.txt

Verify Files

You can verify that a file exists with test -f {filepath}. Handy guide here.

If you want to check the line endings of a file, for example to detect the accidental usage of CRLF instead of LF, you can use file MY_FILE. Or cat -e MY_FILE ($ = LF / \n and ^M$ = CRLF / \r\n).
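A minimal sketch of using test -f in a conditional (the path here is hypothetical):

```shell
FILE="myfolder/myfile.txt"   # hypothetical path
if test -f "$FILE"; then
  echo "exists"
else
  echo "missing"
fi
# The bracket form is equivalent: [ -f "$FILE" ]
```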

Getting Meta Information About a File

# General file info
stat my-file.txt
ls -lh my-file.txt

# For identifying image files. Part of imagemagick
# @see https://linux.die.net/man/1/identify
identify my-file.jpg
identify my-file.pdf

# Can (try) to detect and display file type
# @see https://linux.die.net/man/1/file
file my-file.txt
# Get mime type (-I on macOS / BSD; use lowercase -i on GNU/Linux)
file -I my-file.txt

Hex View

To view the hex of a file, you can use xxd {FILE_PATH}

Deleting

  • Delete everything in a directory you are CURRENTLY in:
    • Best:
      • find . -mindepth 1 -delete
    • UNSAFE!!!
      • rm -rf *
    • Better, since it prompts first:
      • rm -ri *
  • Delete everything in a different directory (slightly safer than above)
    • rm -rf path/to/folder
  • Delete based on pattern
    • find . -name '*.js' -delete
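A cautious pattern with the find approach is to preview matches before deleting:

```shell
# Preview what would be deleted first...
find . -name '*.log' -print
# ...then rerun the exact same expression with -delete
find . -name '*.log' -delete
```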

File Management

LS and File Listing

📄 LS - SS64

LS Cheatsheet

How to... Cmd
Show all files ls -a
Show filesizes (human readable) ls -lh
Show filesizes (MB) ls -l --block-size=MB (GNU ls only)
Show details (long) ls -l (or, more commonly, ls -al)
Sort by last modified ls -t

⚡ -> Nice combo: ls -alht --color (or, easier to remember ls -halt --color). All files, with details, human readable filesizes, color-coded, and sorted by last modified.

ls - show all files, including hidden files and directories (like .git)

ls -a

List directory sizes

du -sh *

Both Windows and most *nix systems offer a tree command (on Linux it may need to be installed separately, e.g. via the tree package).

If, for some reason, you can't use that command, some users on StackOverflow have posted solutions that emulate tree using find + sed.

Executing Commands Across Files

Using find with -exec:

find . -name "*.txt" -exec stat {} \;

# If your command is complicated, or involves pipes, you can use `sh` as another layer to execute the command
find . -name "*.jpg" -exec sh -c "stat {} | tail -n 1" \;

Count Matching Files

For a faster file count operation, you can use find's printf option to replace all filenames with dots, and then use wc character count to count them. Like this:

find {PATH} {FILTER} -type f -printf '.' | wc -c

Here is an example, to count all the .md Markdown files in a /docs directory:

find ./docs -iname "*.md" -type f -printf '.' | wc -c

Credit

Syncing Files

Rsync

rsync

  • Example: rsync -az -P . joshua@domain.com:/home/joshua/my_dir
    • -a = archive mode (recursive, copy symlinks, times, etc - keep as close to original as possible)
    • -z = compress (faster transfer)
    • -P = --partial + --progress (show progress, keep partially transferred files for faster future syncs)
  • Use --filter=':- .gitignore' to reuse gitignore file for exclusions
  • You can use --filter multiple times and they will be combined
  • Use --exclude to exclude single files, directories, or globs, also allowed multiple times
  • Use --include to override filter
  • Use --dry-run to preview

To sync a single file, here is a sample command:

rsync -vz --progress ./test.txt joshua@domain.com:/home/joshua/my_dir

Show progress bar / auto-update / keep console updated:

Great SO Q&A

Find executable paths

If you are looking for the bash equivalent of Windows' "where" command, to find how a binary is exposed, try using which. E.g. which node.
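For example (command -v and type are close relatives of which that are worth knowing):

```shell
which ls        # prints the resolved path, e.g. /bin/ls
command -v ls   # POSIX-portable alternative; also resolves builtins and aliases
type ls         # bash builtin; reports whether a name is an alias, function, builtin, or file
```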

Symlinks

You can use the ln command (ss64) to create symbolic links.

# Works for both files and directories
ln -s {realTargetPath} {symbolicFilePath}

# If you need to update an existing symlink, you need to use "force"
ln -sf {realTargetPath} {symbolicFilePath}

In general, it is best / easiest to always use absolute paths for the targets.

If you want to delete the symlink, but not the original, just make sure you operate on the symlink path, e.g. rm {symbolicFilePath}.

You can use ls -la to list all files, including symlinks.

If you just want to see which entries are symlinks, you can filter with grep - ls -la | grep "\->"

If you want to inspect a specific symlink, use readlink -f {SYMLINK}

On macOS, install coreutils, and use greadlink -f instead


Networking

💡 An excellent package to get for working with network stuff is net-tools. It is also what contains netstat, which is great for watching active connections / ports.

cURL

  • Good cheatsheets
  • Show headers only
    • curl -I http://example.com
  • Search for something
    • Piping directly to grep or sed is messy, because curl writes a progress meter to stderr that clutters the terminal; use the --silent flag to suppress it:
      • curl --silent https://joshuatz.com | sed -E -n 's/.*<title>(.+)<\/title>.*/\1/p'
        • Prints: Joshua Tzucker&#039;s Site
  • Download a file
    • Specify filename: curl -o {New_Filename_Or_Path} {URL}
    • Reuse online filename: curl -O {URL_with_filename}
  • Follow redirects: -L
    • Useful for downloading DropBox links (or else you get an empty file):
      • curl -L -o myfile.txt https://www.dropbox.com/s/....?dl=1

Networking - Checking DNS Records and Domain Info

Overview post of a few different methods

  • dig
    • Default (A records + NS): dig {DOMAIN}
    • All: dig {DOMAIN} ANY (note: many DNS servers now refuse or minimize ANY queries)
    • Specific type: dig {DOMAIN} {RECORD_TYPE}
      • dig joshuatz.com cname
  • host
    • Default (describes records): host {DOMAIN}
    • All: host -a {DOMAIN}
    • Specific type: host -t {RECORD_TYPE} {DOMAIN}
  • nslookup
    • (might not be available on all distros, but useful since this works on Windows too. However, nslookup also seems less reliable...)
    • Default (A record): nslookup {DOMAIN}
    • All: nslookup -d {DOMAIN}
      • Equivalent to nslookup -t ANY {DOMAIN}
    • Specific type: nslookup -querytype={RECORD_TYPE} {DOMAIN}
      • OR: nslookup -type={RECORD_TYPE} {DOMAIN}

Networking - How do I...

  • Resolve DNS hostname to IP
    • getent hosts HOST_NAME | awk '{ print $1 }'
    • Credit goes to this S/O
  • Download a file and save it locally with bash?
    • You can use wget or cURL (S/O):
      • wget -O {New_Filename_Or_Path} {URL}
      • curl -o {New_Filename_Or_Path} {URL}
    • If you want to just use the name of the file as-is, you can drop -O with wget
    • If you want to get the contents of the file, and pipe it somewhere, you can use standard piping / redirection. E.g., curl ifconfig.me > my_ip_address.txt
  • Transfer files across devices using bash?
    • You can transfer over SSH, using the scp command
      • Example: scp my-file.txt joshua@1.1.1.1:/home/joshua
      • Example: scp -i ssh_pkey my-file.txt joshua@1.1.1.1:/home/joshua
      • Example: scp -rp ./my-dir joshua@1.1.1.1:/home/joshua/my-dir
    • Another good option is rsync, especially for frequent syncs of data where some has stayed the same (it optimizes by syncing only what has changed).
    • Alternatively, you could use cURL to upload your file, to a service like transfer.sh, and then cURL again on your other device to download the same file via the generated link
  • Find the process that is using a port and kill it?
    • Find PID:
      • Linux: netstat -ltnp | grep -w ':80'
      • macOS: sudo lsof -i -P | grep LISTEN | grep :$PORT (credit) (you often don't need sudo with this)
    • Kill by PID: kill ${PID}
      • With force: kill -SIGKILL ${PID}

Archives

How do I...


Handy Commands for Exploring a New OS

Command What?
uname -a Display system OS info (kernel version, etc.)
lsb_release -a Display distribution info (release version, etc.)
apt list --installed List installed packages
crontab -l or less /etc/crontab View crontab entries
lshw View summary of installed hardware
dpkg --print-architecture or uname -p Show CPU architecture type (amd64 vs arm64 vs i386, etc.)

x86_64 == amd64

Get Public IP Address

Easy mode: curl http://icanhazip.com

Lots of different options out there.

Echoing out Dates

The main command to be familiar with is the date utility.

You can use date +FMT_STRING to specify the format to apply to the output.

Common Formats:

Command What Sample
date Prints current date/time in %c format Sat Nov 28 03:56:03 PST 2020
date -u +"%Y-%m-%dT%H:%M:%SZ" Prints current date, a full ISO-8601 string 2020-11-28T12:11:27Z
date +%s Seconds since epoch 1606565661
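A few more format-string invocations following the same pattern (outputs will of course vary with the current time):

```shell
date +"%Y-%m-%d"               # e.g. 2020-11-28
date -u +"%Y-%m-%dT%H:%M:%SZ"  # full ISO-8601 timestamp, in UTC
date +%s                       # seconds since the Unix epoch
```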

Get Date as MS Since Epoch

If you don't actually need the full precision of milliseconds, but need the format / length, you can use: date +%s000

If you really need as-close-to-real MS timestamps, you can use any of these (some might not work on all systems):

  • date +%s%3N
  • date +%s%N | cut -b1-13
  • echo $(($(date +%s%N)/1000000))

Above solutions were gathered from this S/O question, which has a bunch of great responses.

You could also always use node -p "Date.now()" if you have NodeJS installed.


User Management

Adding or Modifying Users

Use adduser {username} to add a new (non-root) user.

If you want to create a new user, but also grant them sudo / admin privileges, you can either:

  • Add to sudo group while creating
    • useradd --groups sudo {username}
      • OR:
    • adduser {username} --ingroup sudo
  • Create user first, then add to sudo group
    1. Create user:
      • adduser {username}
        • OR:
      • useradd {username}
    2. usermod -a -G sudo {username}

💡 Note: The above commands could also be used for adding to groups other than sudo - just swap out sudo with the group you want to use

🚨 Warning: Creating a new user will not automatically grant them SSH access. See SSH Notes for details.

The adduser {USER} {GROUP} syntax only works if the user already exists.

Add User to Group

usermod -a -G groupname username

(also see above section(s))

Listing User Groups

You can use groups to list all groups you are a part of, or use groups {USER} for a specific user.

For listing all groups on a system, you might be able to use less /etc/group or getent group (see more here).
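For example, id offers the same membership information as groups, in a script-friendly form:

```shell
groups          # groups the current user belongs to
id -Gn          # the same group names, via id
id -u           # the current user's numeric id
```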

Deleting a User

Use userdel {USERNAME} to delete a user. Optionally, pass the -r flag to also delete their home directory.


Process / Task Management

  • Find process details by PID
    • ps -p {PID}
  • Find process by command (not process name)?
    • Get all: ps aux | grep "my_search_string"
      • Note: aux is not preceded by - because these are BSD style options
    • Slightly nicer, if you are just looking for PID and uptime: ps -eo pid,etime,command | grep "my_search_string"
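As a quick self-contained demo, you can point ps -p at the current shell's own PID:

```shell
# $$ expands to the current shell's PID
ps -p $$ -o pid,etime,command
```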

Subshells and Forking

If you want to run a command in a subshell, the easiest way is to wrap it in parentheses. For example:

echo $PWD # /joshua

# Execute in subshell
(cd subdir && echo $PWD) # /joshua/subdir

# Even though previous command moved to a subdirectory, we are still in parent
# because it was executed in subshell
echo $PWD # /joshua
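Relatedly, command substitution with $( ... ) also runs in a subshell, so a cd inside it does not leak out:

```shell
start_dir=$PWD
tmp_listing=$(cd /tmp && ls)   # the cd happens in a subshell
[ "$PWD" = "$start_dir" ] && echo "still in the original directory"
```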

Session, Window, and Screen Management

As an alternative to Screen, or tmux solutions, you might want to check out a task execution queuing and management system, like pueue

Screen

If you need to manage multiple sessions, which you can leave and resume at any time, screen is the usual go-to program.

Screen Docs: linux.die.net, SS64

Command What it Does
screen -S {REF} Create a named session
screen -ls List active sessions
screen -d -r {REF} Detach, and then attach to existing session
screen -r {REF} Attach to existing session
screen -XS {REF} quit Kill a different session
echo $STY View current session name
CTRL + a, :, then sessionname View (or set) the current session name
CTRL + a, d Detach screen from terminal (i.e., leave without stopping what is running)
CTRL + a, k Kill the current screen / session (with confirmation)

tmux

The default --help command with tmux is not super helpful. I would recommend man tmux or this cheatsheet as alternatives

Here are some of my most-used commands

Command Description
tmux new -s {SESSION_NAME} Create a named session
tmux attach -t {SESSION_NAME} Attach to a named session
tmux ls List sessions
exit If run within a session, will exit (and kill!) the current session
tmux kill-session Kills the current session you are in, or use -t to kill specific ones.
tmux info Show all info
CTRL + b The main hotkey combo to enter the tmux command mode - i.e., what you need to press first, before a secondary hotkey.
CTRL + b, d Detach from the current session
CTRL + b, [ Enter copy mode. Use ESC to exit
CTRL + b, s Interactive session switcher, from inside an active session. Faster than detaching, listing, and then re-attaching, plus you can see a preview before switching.
tmux kill-server Kill the entire tmux server
tmux start or tmux start-server Start the tmux server

If you run into issues starting tmux, with a server exited unexpectedly error, try deleting tmux-* folders from within /tmp first

tmux Configuration

tmux Config File - .tmux.conf

You can often configure tmux settings via the tmux command prompt (entered via CTRL + b, :), but for portability and easier management, it can be preferable to store configuration settings in a dedicated file. tmux supports this by default via a file at ~/.tmux.conf (but you can also explicitly pick a different file location and name if you want).

Here are some quick notes on the usage of this configuration file:

  • By default, tmux only reads & loads the config file once, on service startup. If you make changes and want to see them reflected in tmux, you need to do one of the following
    • Use the tmux source-file command
      • E.g., tmux source-file ~/.tmux.conf
      • You can run this from outside tmux, or from inside an existing session (e.g., via the command prompt: :source-file ~/.tmux.conf)
      • Note that some changes may only apply to new windows or sessions; you may need to detach and re-attach, or restart the tmux server, for everything to take effect
    • Completely restart the tmux service (different from restarting a session)
  • Comments are allowed, and use the standard shell # prefix
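As a sketch, a minimal ~/.tmux.conf might look like the following (note that the file uses tmux's own command syntax, not bash):

```shell
# ~/.tmux.conf -- sample entries
set -g mouse on              # enable mouse support
set -g history-limit 10000   # keep more scrollback lines
# Reload from inside tmux via the command prompt: :source-file ~/.tmux.conf
```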

Enabling Scroll in tmux

You can use CTRL + B, then [ to enter copy mode, then scroll or key around (and copy text if you wish), using ESC to exit the mode.

You can also do CTRL + B, :, set -g mouse on to turn on mouse mode (or do so through your tmux config file). However, this tends to interfere with copy-and-pasting and generally is not a super smooth experience.


Troubleshooting

  • Input has stopped appearing as you type it
    • This can happen for a number of reasons. The quick fix is usually to use reset or stty sane.
  • Echo keeps evaluating a variable, when I meant to just print it with variable substitution
    • Check for backticks, or other special unescaped characters that could introduce an eval situation
  • You keep getting the No such file or directory error, but only when assigning to a variable
    • Make sure you don't accidentally have a leading $, like $MY_VAR=MY_PATH
  • Stale autocomplete (aliases, functions, etc.)

Markdown Source Last Updated:
Sat Sep 10 2022 21:41:52 GMT+0000 (Coordinated Universal Time)
Markdown Source Created:
Mon Aug 19 2019 17:06:24 GMT+0000 (Coordinated Universal Time)
© 2022 Joshua Tzucker, Built with Gatsby