Resources
What & Link | Type |
---|---|
SS64 Bash Reference | Docs |
The Bash Hackers Wiki | Docs / Wiki / Quick Ref |
Wooledge / GreyCat: Bash Reference Sheet | Cheatsheet |
DevHints: Bash Cheatsheet | Cheatsheet |
LinuxIntro.org: Shell Scripting Tutorial | Cheatsheet / Quick Reference |
ExplainShell.com (breaks down any given command and explains what it does) | Interactive Tool |
TLDR Sh | Simplified man pages, open-source, which you can read online or access directly in the terminal |
CompCiv: Bash Variables and Command Substitution | Guide to variables, interpolation with strings, and related features |
man7: Linux man Pages | Docs |
Formatting and Linting
Check out shellcheck for static analysis, and mvdan/sh for parsing and formatting.
Configuration
Special Shell / Bash Files:
File | Conventional Usage |
---|---|
~/.bash_profile (or ~/.profile) | Store environment variables, to be loaded once and persisted, modify $PATH, etc. Also typically contains the code to load .bashrc. Important: it is only read & executed for interactive login shells, meaning forks / child shells will not reload it. Thus, use this file for things you want to load once (like environment variables), but not for things to load every time (like aliases and functions). |
~/.bashrc | Store aliases, functions, and pretty much anything custom, OR load those customizations from external files via source. This file is itself executed via source, automatically by bash. |
~/.bash_aliases | Store aliases, to be loaded into every new shell |
~/.bash_prompt | For customizing the shell itself (appearance, etc.) |
This page from Baeldung explains some of the differences between various bash startup files in greater detail than above.
If you use zsh instead of bash / sh, most of these files are not actually read by default. If you are using Oh My Zsh, you can auto-load any file ending in .zsh by placing it (or symlinking it) within the $ZSH_CUSTOM directory. If you are not using it, or just want something more custom, to have zsh read them by default, add lines to your ~/.zshrc file that load them. For example, to load .bash_aliases, you could add:

[ -f ~/.bash_aliases ] && source ~/.bash_aliases

Or, for a slightly cleaner approach, store the path in a variable first, so it is not repeated.
Dotfiles
See my cheatsheet: Dotfiles
Aliases
To create an alias, use the alias command:
alias alias_name="actual_command_that_executes"
For example, if we have some favorite flags to use with ls:
alias list="ls -halt"
If you need an alias that accepts arguments and then passes them to the middle of another command, you are better off writing a function. There are some ways to accomplish this with just aliases, but they are less straightforward.
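As a sketch of that point, the function below injects its first argument into the middle of a `cp` command, something a plain alias cannot easily do. The `backup` name and the /tmp/backups path are hypothetical, just for illustration:

```shell
# Hypothetical example: inject an argument into the middle of a command.
# An alias can only prepend its arguments; a function can place them anywhere.
backup() {
	mkdir -p /tmp/backups
	cp "$1" "/tmp/backups/$(basename "$1").bak"
}
```

Usage: `backup notes.txt` would copy notes.txt to /tmp/backups/notes.txt.bak.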
Functions
# Simplest form
myFunction() {
	# body
}

# You can use the function keyword, but you don't have to, and it is less portable
function myFunction() {
	# body
}
Processing Flags, Options, and Arguments
Whether you are receiving arguments to a shell script itself, or passing them to a function, there are a few common tools for parsing arguments and flags within bash. The most popular solution is the getopts command. The common pattern for usage looks something like this:
# getopts OPTSPEC VARIABLE [arg]
# sometimes `OPTSPEC` is called `OPTSTRING`
# `f:` means -f takes an argument; `d` and `v` are boolean flags
while getopts ":dvf:" opt; do
	case "${opt}" in
		d) DEBUG=true ;;
		v) VERBOSE=true ;;
		f) FILE="${OPTARG}" ;;
		\?)
			echo "Invalid option: -${OPTARG}" >&2
			;;
	esac
done
The leading : in the OPTSPEC of the above example suppresses the built-in error reporting for invalid flags. Leave it out if you don't want to suppress these.
🤔 getopts vs getopt? Contentious topic, but getopts is built-in, while getopt is not. Unfortunately, getopts does not support long argument names (easily), but getopt does. This post summarizes some more differences.
If you are passing arguments and/or flags to a function within a shell script, make sure you call the function like myFunction "$@".
getopts: Parsing Long Options without using getopt
As previously mentioned, although it is nice that getopts is "built-in", it doesn't support parsing long options (e.g. --file instead of -f). However, there are some workarounds that don't require getopt.
- One way is to first transform any long options to short versions before passing them to getopts
- Another way is to just roll your own parsing code
- There is a program called argbash which can generate parsing code for you!
- Using the getopts_long bash script
- Using the getoptions parser library
- Finally, although not really recommended, there is kind of a hack that lets you parse long options directly in getopts by using a dash in OPTSPEC (StackOverflow, BashFAQ)
Example of manual parsing code - using while, $#, and shift
VERBOSE=false
while [[ ! $# -eq 0 ]]
do
	case "$1" in
		--verbose|-v)
			VERBOSE=true
			;;
		# You could leave off this case if you want to allow extra options
		*)
			echo "invalid option ${1}"
			;;
	esac
	shift
done
Kudos to Jon Almeida's blog post and this StackExchange answer for pointing in the right direction. This is also similar code to that produced procedurally by Argbash.
Example of manual parsing code - using for and do
for var in "$@"
do
echo "var = ${var}"
done
The below code is very similar, but exploits the fact that a bare for arg gets evaluated like for arg in "$@":

for arg
do printf 'arg = %s\n' "${arg}"
done
Current directory:
echo $PWD
Including the "Hash-Bang" / "She-Bang"
#!/bin/bash
- ^ Should go at the top of sh files
- this is why
- Tells the system to run the file with Bash
Sometimes you will see flags included as part of the shebang. For example, you can use -e (errexit) to have the script exit immediately if a command fails:
#!/bin/bash -e
For portability, this is the preferred format:
#!/usr/bin/env bash
set -e
Commenting
# My comment
Logic / flow
Test
Before using advanced branching logic, you should know that the shell has a built-in test command - it basically coerces an expression into a boolean (exit status) that can be used in logic flows. Simply enclose the expression/condition in brackets:

[ check-this-condition ]

or double brackets, for the newer version (always recommended):

[[ check-this-condition ]]

There are lots of different conditionals you can test against.
For example:

- Is variable set (has value)? [[ -n $MY_VAR ]]
- Does file exist? [[ -e file.txt ]]
- Does directory exist? [[ -d ./dir/ ]]
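The checks above can be combined inside an if/else; a minimal sketch (the filename is made up):

```shell
MY_VAR="hello"
# -n: variable is non-empty; ! -e: file does not exist
if [[ -n $MY_VAR && ! -e ./this-file-should-not-exist.txt ]]; then
	echo "MY_VAR is set and the file is absent"
else
	echo "check failed"
fi
# Prints: MY_VAR is set and the file is absent
```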
If / Else
Great guides
- Dev.to - Tasevski - Shellscripting: Conditional Execution
- CompCiv - Conditional Branching
- TLDP - Beginner's Bash Guide - Chapter 7 - Conditional Statements
Ternary Operator / Conditional Expression
In many languages, there is support for something called the ternary operator, or conditional operator. It usually looks like this (this is not bash, just an example):
// JavaScript:
// Do something based on ternary
userIsAdmin ? print('Welcome Admin!') : print('Welcome!')
// Assign based on ternary
const userType = userIsAdmin ? 'Admin' : 'User';
In bash, this can be accomplished with the syntax TEST && IF_TRUE || IF_FALSE. Like this:
$user_is_admin && echo "Welcome Admin!" || echo "Welcome!"
Note: This only works as long as the thing you want to do if the conditional is true always exits with exit code 0 (success)
For assignment, just wrap the entire execution in a command substitution parenthesis block:
user_type=$($user_is_admin && echo "Admin" || echo "User")
# You can use more advanced conditional checks
LOG_OUT=$([[ -n $LOG_PATH ]] && echo $LOG_PATH || echo "/dev/stdout")
echo "Starting program..." >> $LOG_OUT
Short Circuit Assignment (also for defaults)
In certain languages, you can use short circuit assignments to assign the value of a variable, and fallback (aka default) if it is undefined. Something like this:
const name = userInput || "New User"
In bash, there are two main ways to accomplish this kind of task. The first is with shell parameter expansion:
# Generic form - assign a default in-place if unset/empty
: ${name:="default"}

name=${user_input:-"New User"}
# Or, if we want to re-use the same variable
: ${user_input:="New User"}
The second way is to use a conditional expression, although this is not as concise:
name=$([[ -n $user_input ]] && echo $user_input || echo "New User")
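To see the two behaviors of the :- expansion side by side (variable names are arbitrary):

```shell
# Unset (or empty) variable -> the fallback is used
unset user_input
name=${user_input:-"New User"}
echo "$name"
# Prints: New User

# Set variable -> its value wins
user_input="Joshua"
name=${user_input:-"New User"}
echo "$name"
# Prints: Joshua
```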
Double Pipe vs Double Ampersand, and Other Flow Operators
Quick reference:
- && = Only execute right side if left side succeeds
    - Examples:
        - false && echo "this text will NOT show"
        - true && echo "this text will show"
- || = Only execute right side if left side FAILS (non-zero exit code)
    - Essentially the inverse of &&
    - Examples:
        - false || echo "this text will show"
        - bad_command || echo "this text will show"
        - true || echo "this text will NOT show"
- & = runs the left side asynchronously, in a detached (forked) background process, while the right side runs immediately, regardless of the success of either
    - Warning: This can be a hackish way to do things
    - Definitely do not use this if the second command is dependent on the output of the first
    - Examples:
        - slow_command_to_execute & echo "this will appear before the left side is done!"
        - true & echo "this text will show"
        - false & echo "this text will also show"
    - If you need to kill all processes on exit (e.g. SIGINT / CTRL + C), you can use: command_a & command_b && kill $! (credit)
        - Trap: (trap 'kill 0' SIGINT; prog1 & prog2 & prog3) (credit)
        - Gnu Parallel
- ; = Execute both sides, sequentially, regardless of success of either
    - Examples:
        - true; echo "this text will show"
        - bad_command; echo "this text still shows"
    - Since this doesn't work in many Windows environments, an easy workaround to get the same behavior is to replace CMD_ONE; CMD_TWO with (CMD_ONE || true) && CMD_TWO
        - This exhibits the same behavior, since CMD_TWO will also synchronously execute after CMD_ONE, regardless of its success
        - Great for NPM scripts
        - Nice writeup
- | = Not for logic flow; used to pipe output from one command to another
Grep
- In general, if you are a RegEx power user, you will probably find sed much preferable. Or awk.
- Cheatsheets:
- (Common) Flags:

Flag | Description |
---|---|
-E | Extended regex |
-o | Only output the matching part of the line |
-P | Treat as a Perl-style regular expression |
-i | Ignore case |
-e | Pass explicit patterns, multiple allowed |
-n | Show line numbers for matches |
-A {num} / -B {num} | Show {num} lines after / before the match |
-F | Assume input is fixed strings, meaning don't treat as a regex pattern (useful if you are looking for an exact match, and your search string contains RegEx chars like .) |
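A couple of the flags above in action, with sample input piped in:

```shell
# -n prefixes each match with its line number
printf 'foo\nbar\nfoobar\n' | grep -n 'foo'
# Prints:
# 1:foo
# 3:foobar

# -F treats the pattern as a fixed string, so '.' is a literal dot
printf 'a.b\naxb\n' | grep -F 'a.b'
# Prints: a.b
```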
Grep - Print Only Matching
The -o flag will do this.
On some systems, it also adds line breaks, even with a single result. For removing the line break for single result outputs:
grep -o '{pattern}' | tr -d '\n'
# Example
echo hello | grep -o '.ll.' | tr -d '\n'
# prints 'ello'
sed
- Cheatsheets
- Common flags

Flag | Description |
---|---|
-n | Silent; suppress printing of pattern space |
-r (or -E on some systems) | Use extended regexp - I always prefer this |
- Syntax
    - Print output
        - echo $'hello world\nLine Two\nGoodbye' | sed -E -n '/Line.+/p'
            - Prints "Line Two"
    - Substitute
        - echo $'hello world\nLine Two\nGoodbye' | sed -E 's/Line.+/my substitution/'
            - Prints:
                - hello world
                - my substitution
                - Goodbye
    - Example: Replace space with newline
        - echo 'item_a item_b' | sed -E 's/ /\n/g'
            - Prints:
                - item_a
                - item_b
    - Example: Replace null character with newline
        - sed -E 's/\x0/\n/g'
    - Print only a specific capture group
        - This is actually a little complicated. Basically, you have to substitute the entire input with the back-reference of the capture.
        - sed -E -n 's/.*(My_Search).*/\1/p'
        - In action: echo $'hello world\nLine Two\nGoodbye' | sed -E -n 's/^Line (.+)$/\1/p'
            - Prints: "Two"
Warning: sed on your system might have limitations - for example, if you can't use Perl-style regular expressions, you will need to use [0-9] instead of \d for digits.
Capturing and Executing Output
If you simply want to "capture" the value output by a script and store it as a variable, you can use substitution. See "Storing into a variable".
If you want to execute the output of a command as a new command / script, you can use the (dangerous) eval command, plus substitution: eval $( MY_COMMAND ).
Here is a full example:
(echo echo \"in test.txt\") > test.txt
eval $( cat test.txt )
# "in test.txt"
Capturing Input Arguments in Bash Scripts
To capture arguments (aka positional parameters) within a script, you can use $@, and $# for the number of arguments (these are a form of Special Parameters). Make sure to double-quote when using them - e.g.:
# say_hi.sh
YOUR_NAME="$1"
echo "Hello $YOUR_NAME, your name has $(echo -n "$YOUR_NAME" | wc -m) characters in it"

# Run
./say_hi.sh Joshua
# > Hello Joshua, your name has 6 characters in it
You can use this to pass input arguments to a completely different program / process, which makes it handy for intermediate scripting.
Guide: Bash Hackers Wiki - Handling Positional Parameters
Piping and redirection
- Piping VS Redirection
    - Simple answer:
        - Piping: Pass output to another command, program, etc.
        - Redirect: Pass output to a file or stream
- Pipe |
    - echo 'hello world' | grep -o 'hello'
        - Prints hello
- Redirection >
    - echo "hello" > output.txt
Problems with piping
Piping, in general, is taking the stdout of one process to the stdin of another. If the process you are trying to pipe to is expecting arguments and doesn't care about stdin, or ignores it, piping won't work as you want it to.
The best solution for this is usually to use xargs, which reads stdin and converts the input into arguments which are passed to the command of your choice.

Or, you can use substitution to capture the result of the first part of the pipe and reuse it in the second.

See this S/O answer for details.
If the input you are passing contains special characters or spaces (such as spaces in a filename), take extra care to handle it. For example, see if the thing generating the input can escape it and null-terminate the fields (e.g. git diff --name-only -z), and then you can use the -0 or --null option with xargs to tell it to expect null-terminated fields.

# Git
git diff --name-only -z | xargs -0 git-date-extractor

# Piping multiple files to a single command
find . -name '*.gif' -print0 | xargs -0 python process_bulk.py
ls | tr '\n' '\0' | xargs -0 process_file.sh

# Same as above, but running the command over each file, using `-n1` to specify max
# of one argument per command line
find . -name '*.gif' -print0 | xargs -0 -n1 python process_single.py

# For find, you can also just run -exec with find
# For find, you can also just run -exec with find
Printing / Echoing Output
🚨 I would recommend getting familiar with special characters in Bash when working with outputting to shell; otherwise it can be easy to accidentally eval when you meant to just print something
Copying to Clipboard
There are a bunch of different options, and it largely depends on what you have available on your OS.
This S/O response serves as a good list.
On macOS, it is usually pbcopy. On Linux, usually xclip -selection c.
??? - 2>&1
You see 2>&1 all over the place in bash scripts, because it is very useful. Essentially, it redirects errors (stderr) to whatever stdout is currently set to.
This has a few really handy and common uses:
- See both the output and the errors in the console at the same time
    - Often errors are routed to stderr and not shown in the console.
- Suppress errors
    - Since this forces errors to stdout, it has the side effect of suppressing them from their normal destination
        - However, they are still going to show up in stdout, obviously. If you really want to suppress them entirely, use 2> /dev/null, which essentially sends them to oblivion
- Send both output and errors to a file
    - If you redirect to a file before using 2>&1, then both outputs get sent to the file.
        - ls file-does-not-exist.txt > output.txt 2>&1
        - output.txt will now contain "ls: cannot access 'file-does-not-exist.txt': No such file or directory"
- Send both output and errors through a pipe
    - cat this_file_doesnt_exist 2>&1 | grep "No such file" -c
On a more technical level, Unix has descriptors that are kind of like IDs. 2 is the descriptor/id for stderr, and 1 is the id for stdout. In the context of redirection, using & + ID (&{descriptorId}) means "copy the descriptor given by the ID". This is important for several reasons - one of which is that 2>1 could be interpreted as "send output of 2 to a file named 1", whereas 2>&1 ensures that it is interpreted as "send output of 2 to the descriptor with ID=1".
So... kinda...

- 2>&1
- can be broken down into: stderr>&stdout
- -> stderr>value_of_stdout
- -> stdout = stderr + stdout
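A quick sketch of why the order of redirections matters (out.txt is an arbitrary filename):

```shell
# Redirections are processed left to right.

# stdout is pointed at the file first, THEN stderr is duplicated onto it,
# so both streams end up in out.txt:
ls file-does-not-exist.txt > out.txt 2>&1

# Here stderr is duplicated onto stdout's CURRENT target (the terminal)
# BEFORE stdout is redirected, so the error still prints to the terminal:
ls file-does-not-exist.txt 2>&1 > out.txt
cat out.txt   # empty - the error went to the terminal instead
```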
💡 You can also use anonymous pipes for these kinds of purposes
Suppress errors
Make sure to see above section about how descriptors work with redirection, but a handy thing to remember is:
# Pretend command 'foobar' is likely to throw errors that we want to completely suppress
foobar 2>/dev/null
This sends error output to /dev/null
, which basically discards all input.
Stderr Redirection - Additional reading
- great breakdown
- Good SO answers:
- advanced IO redirection
- good forum thread
Checking for Errors
You can use $? to get the last exit status. Here are some examples.
Reminder: Anything other than 0 is an error ("non-zero exit code").
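A minimal sketch of checking the exit status:

```shell
# $? holds the exit status of the most recent command
ls /tmp > /dev/null
echo "exit status: $?"
# Prints: exit status: 0

# A failing command sets it to a non-zero value
if ! ls /this/path/should/not/exist 2> /dev/null; then
	echo "previous command failed"
fi
# Prints: previous command failed
```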
Using variables
Setting Variables
For setting variables, it depends on the variable type.
Normal variables:
VARIABLE_NAME=VARIABLE_VALUE
For this syntax, keep in mind that the key-value pair can act like an environment variable if a command is immediately executed within the same process. For example, MY_VAR=ABC printenv will show that MY_VAR has value ABC, but MY_VAR=ABC && printenv will not - it will show that MY_VAR is unset as an environment variable.
Environment variables (persisted through session):
export VARIABLE_NAME=VARIABLE_VALUE
Escape spaces by enclosing VARIABLE_VALUE in double quotes
Reading Variables
Prefix with $
.
Example:
MYPATH="/home/joshua"
cd $MYPATH
echo "I'm in Joshua's folder!"
If you want to expand a variable inside a string, you can also use {} (curly braces) around the variable name to delimit it.
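For example (variable names are arbitrary):

```shell
FRUIT="apple"
echo "I have 3 ${FRUIT}s"
# Prints: I have 3 apples

# Without the braces, bash would look for a variable named "FRUITs" instead
```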
Default / Global Variables
In addition to using printenv
to see all defined variables, you can also find lists of variables that usually come default with either the system shell, bash, or globally:
- Bash Internal Variables at a Glance
- TLDP - Internal Variables
- SS64 - Shell and environment variables
Storing into a variable
How to store the result of a command into a variable:
- There are two methods:
    - Command/process substitution (easy to understand)
        - VARIABLE_NAME=$(command)
        - However, this doesn't always work with complex multi-step operations
    - The read command (complicated) - works with redirection / piping
        - echo "hello" | read VARIABLE_NAME
        - Warning: in bash, each part of a pipeline runs in a subshell, so a variable set by a piped read is lost when the subshell exits
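A sketch of that subshell pitfall with read, plus a here-string workaround that keeps the variable in the current shell:

```shell
# Does NOT work in bash - read runs in a pipeline subshell, so MY_VAR is lost:
echo "hello" | read MY_VAR
echo "MY_VAR is: '$MY_VAR'"
# Prints: MY_VAR is: ''

# Works - a here-string feeds read without a pipeline subshell:
read MY_VAR <<< "hello"
echo "MY_VAR is: '$MY_VAR'"
# Prints: MY_VAR is: 'hello'
```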
Environment Variables
List all env values
printenv
Set an environment variable - current process and sub-processes
export VARIABLE_NAME=VARIABLE_VALUE
Set an environment variable - permanently
In order for an environment variable to be persisted across sessions and processes, it needs to get saved and exported from a config file.
This is often done by manually editing /etc/environment:

- Launch editor: sudo -H gedit /etc/environment
- Append key-value pair: VAR_NAME="VAR_VAL"
- Save
The difference between setting a variable with export vs without is similar to the difference in MS Windows between using setx vs just set -> export persists the value.
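A quick demonstration of that difference, spawning a child bash process (variable names are arbitrary):

```shell
# Without export: the child shell does not see the variable
MY_LOCAL="abc"
bash -c 'echo "child sees: [$MY_LOCAL]"'
# Prints: child sees: []

# With export: the child shell inherits it
export MY_EXPORTED="abc"
bash -c 'echo "child sees: [$MY_EXPORTED]"'
# Prints: child sees: [abc]
```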
Global path
Inspecting the path:
echo $PATH
# Line separated
echo $PATH | tr ':' '\n'
# For permanent paths
cat /etc/paths
Modifying the path:
- You can modify it directly, with something like export PATH=$PATH:my/new/path
- You can edit /etc/paths or add files to the path directory
- In general, modifying the path can depend on OS and shell; here is a guide
Triggering / running a SH file
- Make sure it is "runnable" - that it has the execute permission
chmod +x /scriptfolder/scriptfile.sh
- Then call it:
/scriptfolder/scriptfile.sh
If you are having issues running from Windows...
- MAKE SURE LINE ENDINGS ARE \n and not \r\n
Also, make sure you specify the directory, as in ./myscript.sh, not myscript.sh, even if you are currently in the same directory as the script.
Keeping file open after execution
Add this to very end of file:
exec $SHELL
Note: this will interfere with scripts handing back control to other scripts; i.e., resuming flow between multiple scripts.
Strings
Special characters (newline, etc)
You need to prefix the string with $ to use special characters within it.
Example:
- echo 'hello\ngoodbye'
    - Prints: "hello\ngoodbye"
- echo $'hello\ngoodbye'
    - Prints:
        - "hello
        - goodbye"
You can also use printf for linebreaks: printf '\n\n'
Joining Strings
You can simply put strings together in variable assignment, like this:
FOO="Test"
BAR=$FOO"ing"
echo $BAR
echoes:
Testing
You can also use variables directly in quoted strings:
FOO="Hello"
BAR="World"
echo "$FOO $BAR"
# If the variable is immediately adjacent to text, you need to use braces
FOO="Test"
BAR="ing"
echo "${FOO}${BAR}"
Joining Strings with xargs
By default, xargs appends a space to arguments passed through. For example:
echo "Script" | xargs echo "Java"
# "Java Script"
If we want to disable that behavior, we can use the -I argument, which is really for substitution, but can be applied to this use-case:
echo "Script" | xargs -I {} echo "Java{}"
# Or...
echo "Script" | xargs -I % echo "Java%"
# Etc...
# Output: "JavaScript" - Success!
Converting to and from Base64 Encoding
Just use the base64 utility, which can be piped to, or will take a file input.

If you don't care about presentation, make sure to use --wrap=0 to disable the column limit / wrapping.
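A quick round trip (assuming the GNU coreutils base64, which supports --decode and --wrap):

```shell
echo -n 'hello' | base64
# Prints: aGVsbG8=

echo -n 'aGVsbG8=' | base64 --decode
# Prints: hello

# Disable wrapping for long output:
head -c 100 /dev/urandom | base64 --wrap=0 > /dev/null
```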
Skip Lines in Shell Output String
If you need to skip lines in output (for example, to omit a summary row), you can use:
tail -n +{NUM_LINES_TO_SKIP + 1}
# Or, another way to think of it:
# tail -n +{LINE_TO_START_AT}
# Example: Skip very first line
printf 'First Line\nSecond Line\nThird Line' | tail -n +2
# Output is :
# Second Line
# Third Line
To skip the last line:
head -n -1
Trim Trailing Line Breaks
There are a bunch of ways to do this, but the first answer provided is probably the best - using command substitution, since it automatically removes trailing newlines:
echo -n "$(printf 'First Line\nSecond Line\nThird Line, plus three trailing newlines!\n\n\n')"
You could also use the head -n -1 trick to remove the very last line.
If you want to remove all line breaks, you can use tr for an easier-to-remember solution:
printf 'I have three trailing newlines!\n\n\n' | tr -d '\n'
If you are getting trailing line breaks with the echo command, you can also just use the -n flag to disable the default trailing line break. E.g. echo -n "hello"
Generate a Random String
There is an excellent StackExchange thread on the topic, and most answers boil down to either using /dev/urandom as a source, or openssl, both of which have wide portability and ease of use.
- /dev/urandom
    - From StackExchange:
        - # OWASP list - https://owasp.org/www-community/password-special-characters
        - head /dev/urandom | tr -dc 'A-Za-z0-9!"#$%&'\''()*+,-./:;<=>?@[\]^_`{|}~' | head -c {length}
    - I've had some issues with the above command on Windows (with ported utils)...
- OpenSSL
    - For close to length: openssl rand -base64 {length}
    - For exactly length: openssl rand -base64 {length} | head -c {length}
- For close to
Security, Encryption, Hashing
Quick Hash Generation
If you need to quickly generate a hash, checksum, etc. - there are multiple utilities you can use.
- sha256sum
    - Example: echo -n test | sha256sum
    - Example: cat msg.txt | sha256sum
- openssl dgst (-sha256, -sha512, etc.)
    - Example: echo -n test | openssl dgst -r -sha256
    - Example: openssl dgst -r -sha256 msg.txt
I'm using -r with openssl dgst to get its output to match the common standard that things like sha256sum, Windows' CertUtil, and other generators use.
🚨 WARNING: Be really wary of how easy it is to accidentally add or include newlines and/or extra spacing in the content you are trying to generate a hash from. If you accidentally add one in the shell that is not present in the content, the hashes won't match.
There are even more solutions offered here.
How to generate keys and certs
- SSH Public / Private Pairs: Using ssh-keygen (available on most Unix-based OS's, including Android)
    - You can run it with no arguments and it will prompt for the file location to save to
        - ssh-keygen
    - Or, pass arguments, like -t for algorithm type, -f for filename, and -C for comment
        - ssh-keygen -t rsa -C "your_email@example.com"
    - Technically, the private/public keys generated by this can also be used with OpenSSL signing utils
- The standard convention for filenames is:
    - Public key: {name}_{alg}.pub
        - Example: id_rsa.pub
    - Private key:
        - No extension: {name}_{alg}
            - Example: id_rsa
        - Other extensions: .key, .pem, .private
            - Doesn't really matter; just don't use .ppk, since that is very specific to PuTTY
- Standard Public / Private Pairs: OpenSSL
💡 The final text in a public key, which is plain text that looks like username@users-pc, actually does nothing and is just a comment; you can set it on creation with -C mycomment if you like, or edit it afterwards. But again, no impact.
How to use Public and Private Key Signing
Generally, the most widely used tool for asymmetric keys with Bash (or even cross-OS, with Windows support) is the OpenSSL CLI utilities.
Here are some resources on how to use OpenSSL for public/private key signing:
- https://www.zimuel.it/blog/sign-and-verify-a-file-using-openssl
- https://wiki.openssl.org/index.php/Command_Line_Utilities
Create new files, or update existing file timestamps
- touch without any flags will create a file if it does not exist, and if it does already exist, update its timestamps to now
    - touch "myfolder/myfile.txt"
- If you want touch to only touch existing files and never create new ones, use -c
    - touch -c "myfile.txt"
- Specifically update the "last accessed" stamp of a file
    - touch -a "myfolder/myfile.txt"
- Specifically update the "last modified" stamp of a file
    - touch -m "myfolder/myfile.txt"
- You can also use wildcard matching
    - touch -m *.txt
- ...and combine flags
    - touch -m -a *.txt
Verify Files
You can verify that a file exists with test -f {filepath}. Handy guide here.
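For example, branching on whether a file exists (config.yml is just a hypothetical filename):

```shell
if test -f ./config.yml; then
	echo "config exists"
else
	echo "missing config"
fi

# Equivalent bracket syntax:
[ -f ./config.yml ] && echo "config exists" || echo "missing config"
```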
If you want to check the line endings of a file, for example to detect the accidental usage of CRLF instead of LF, you can use file MY_FILE.
Getting Meta Information About a File
# For identifying image files. Part of imagemagick
# @see https://linux.die.net/man/1/identify
identify my-file.jpg
identify my-file.pdf
# Can (try) to detect and display file type
# @see https://linux.die.net/man/1/file
file my-file.txt
# Get mime
file -I my-file.txt
Hex View
To view the hex of a file, you can use xxd {FILE_PATH}
Deleting
- Delete everything in a directory you are CURRENTLY in:
    - Best: find -mindepth 1 -delete
    - UNSAFE!!! rm -rf *
    - Better, since it prompts first: rm -ri *
- Delete everything in a different directory (slightly safer than above)
    - rm -rf path/to/folder
- Delete based on a pattern
    - find . -name '*.js' -delete
File Management
LS and File Listing
📄 LS - SS64
LS Cheatsheet
How to... | Cmd |
---|---|
Show all files | ls -a |
Show filesizes (human readable) | ls -lh |
Show filesize (MB) | ls --block-size=MB |
Show details (long) | ls -l (or, more commonly, ls -al ) |
Sort by last modified | ls -t |
⚡ -> Nice combo: ls -alht --color (or, easier to remember, ls -halt --color). All files, with details, human-readable filesizes, color-coded, and sorted by last modified.
ls - show all files, including hidden files and directories (like .git
)
ls -a
List directory sizes
du -sh *
Print a Directory Tree View with Bash
Both Windows and *nix support the tree
command.
If, for some reason, you can't use that command, some users on StackOverflow have posted solutions that emulate tree using find + sed.
Count Matching Files
For a faster file count operation, you can use find
's printf
option to replace all filenames with dots, and then use wc
character count to count them. Like this:
find {PATH} {FILTER} -type f -printf '.' | wc -c
Here is an example, to count all the .md
Markdown files in a /docs
directory:
find ./docs -iname "*.md" -type f -printf '.' | wc -c
Syncing Files
Rsync
- Example: rsync -az -P . joshua@domain.com:/home/joshua/my_dir
    - -a = archive mode (recursive, copy symlinks, times, etc. - keep as close to the original as possible)
    - -z = compress (faster transfer)
    - -P = --partial + --progress (show progress, keep partially transferred files for faster future syncs)
- Use --filter=':- .gitignore' to reuse a gitignore file for exclusions
- You can use --filter multiple times and they will be combined
- Use --exclude to exclude single files, directories, or globs, also allowed multiple times
- Use --include to override filters
- Use --dry-run to preview
Show progress bar / auto-update / keep console updated:
Great SO Q&A
Find executable paths
If you are looking for the bash equivalent of Windows' "where" command, to find how a binary is exposed, try using which. E.g. which node.
Symlinks (symbolic links)
You can use the ln command (ss64) to create symbolic links.
# Works for both files and directories
ln -s {realTargetPath} {symbolicFileName}
# If you need to update an existing symlink, you need to use "force"
ln -sf {realTargetPath} {symbolicFileName}
In general, it is best / easiest to always use absolute paths for the targets.
Evaluating symlinks
You can use ls -la to list all files, including symlinks.

If you just want to see resolved symlinks, you can use grep: ls -la | grep "\->"

If you want to inspect a specific symlink, use readlink -f {SYMLINK}

On macOS, install coreutils, and use greadlink -f instead
Networking
💡 An excellent package to get for working with network stuff is net-tools. It is also what contains netstat, which is great for watching active connections / ports.
cURL
- Good cheatsheets
- Show headers only
    - curl -I http://example.com
- Search for something
    - You can't just pipe directly to grep or sed, because curl sends progress info to stderr, so use the --silent flag:
        - curl --silent https://joshuatz.com | sed -E -n 's/.*<title>(.+)<\/title>.*/\1/p'
            - Prints: Joshua Tzucker's Site
- Download a file
    - Specify filename: curl -o {New_Filename_Or_Path} {URL}
    - Reuse online filename: curl -O {URL_with_filename}
- Follow redirects: -L
    - Useful for downloading DropBox links (or else you get an empty file): curl -L -o myfile.txt https://www.dropbox.com/s/....?dl=1
Networking - Checking DNS Records and Domain Info
dig

- Default (A records + NS): dig {DOMAIN}
- All: dig {DOMAIN} ANY
- Specific type: dig {DOMAIN} {RECORD_TYPE}
    - dig joshuatz.com cname

host

- Default (describes records): host {DOMAIN}
- All: host -a {DOMAIN}
- Specific type: host -t {RECORD_TYPE} {DOMAIN}

nslookup

- (might not be available on all distros, but useful since it works on Windows too. However, nslookup also seems less reliable...)
- Default (A record): nslookup {DOMAIN}
- All: nslookup -d {DOMAIN}
    - Equivalent to nslookup -t ANY {DOMAIN}
- Specific type: nslookup -querytype {RECORD_TYPE} {DOMAIN}
    - OR: nslookup -t {RECORD_TYPE} {DOMAIN}
Networking - How do I...
- Resolve DNS hostname to IP
  - `getent hosts HOST_NAME | awk '{ print $1 }'`
  - Credit goes to this S/O
- Download a file and save it locally with bash?
  - You can use `wget` or `cURL` (S/O):
    - `wget -O {New_Filename_Or_Path} {URL}`
    - `curl -o {New_Filename_Or_Path} {URL}`
    - If you want to just use the name of the file as-is, you can drop `-O` with wget
  - If you want to get the contents of the file, and pipe it somewhere, you can use standard piping / redirection. E.g., `curl ifconfig.me > my_ip_address.txt`
- Transfer files across devices using bash?
  - You can transfer over SSH, using the `scp` command
    - Example: `scp my-file.txt joshua@1.1.1.1:/home/joshua`
    - Example: `scp -i ssh_pkey my-file.txt joshua@1.1.1.1:/home/joshua`
    - Example: `scp -rp ./my-dir joshua@1.1.1.1:/home/joshua/my-dir`
  - Another good option is `rsync`, especially for frequent syncs of data where some has stayed the same (it optimizes by syncing only what has changed).
  - Alternatively, you could use cURL to upload your file to a service like transfer.sh, and then cURL again on your other device to download the same file via the generated link
- Find the process that is using a port and kill it?
  - Find PID:
    - Linux: `netstat -ltnp | grep -w ':80'`
    - macOS: `sudo lsof -i -P | grep LISTEN | grep :$PORT` (credit) (you often don't need `sudo` with this)
  - Kill by PID: `kill ${PID}`
    - With force: `kill -SIGKILL ${PID}`
- Find process by command (not process name)?
  - Get all: `ps aux | grep "my_search_string"`
    - Note: `aux` is not preceded by `-` because these are BSD-style options
  - Slightly nicer, if you are just looking for PID and uptime: `ps -eo pid,etime,command | grep "my_search_string"`
- Find process by PID
  - `ps -p ${MY_PID}`
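A couple of the recipes above can be rehearsed safely without touching a real server: `getent` against `localhost`, and the kill flow against a throwaway `sleep` process (note that `getent` is glibc-specific, so the first step is Linux-only):

```shell
# 1) Resolve a hostname to its first listed address (localhost as a safe example)
getent hosts localhost | awk '{ print $1 }' | head -n 1   # → 127.0.0.1 or ::1

# 2) Rehearse find-and-kill with a throwaway background process
sleep 60 &
PID=$!
ps -p "$PID" -o pid,etime,command        # confirm it is running
kill "$PID"                              # default SIGTERM; escalate to -SIGKILL only if needed
wait "$PID" 2>/dev/null || true          # reap it; a non-zero exit status is expected here
kill -0 "$PID" 2>/dev/null && echo "still running" || echo "stopped"   # → stopped
```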
Archives
How do I...
- Extract / unpack a tarball
  - `tar -xvf {TARBALL_FILENAME}`
  - For more options, see docs, and here is a helpful page with examples
- Extract / unpack a `.zip` archive
  - This is not natively supported on many flavors of Linux, but can be added by installing and using a program such as `unzip`
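A minimal round-trip sketch: pack a directory into a gzipped tarball, then extract it elsewhere (directory and file names are made up for the demo):

```shell
# Round trip: create a gzipped tarball, then extract it into another directory
cd "$(mktemp -d)"
mkdir -p demo && echo "hello" > demo/file.txt

tar -czf demo.tar.gz demo             # c = create, z = gzip, f = archive file name
mkdir -p extracted
tar -xzf demo.tar.gz -C extracted     # x = extract, -C = extract into this directory
cat extracted/demo/file.txt           # → hello
```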
Handy Commands for Exploring a New OS
Command | What? |
---|---|
`uname -a` | Display system OS info (kernel version, etc.) |
`lsb_release -a` | Display distribution info (release version, etc.) |
`apt list --installed` | List installed packages |
`crontab -l` or `less /etc/crontab` | View crontab entries |
`lshw` | View summary of installed hardware |
`dpkg --print-architecture` or `uname -p` | Show CPU architecture type (`amd64` vs `arm64` vs `i386`, etc.) |
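The table above can be strung together into a small survey snippet. The optional tools are guarded with `command -v` so it runs even on systems where they are missing; `uname -m` is used here as a widely available fallback for the architecture:

```shell
# Quick system survey; output varies by machine
uname -a                                                    # kernel + OS info (always available)
command -v lsb_release >/dev/null && lsb_release -a || true # distro info, if installed
command -v dpkg >/dev/null && dpkg --print-architecture || uname -m   # CPU architecture
```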
Get Public IP Address
Easy mode: `curl http://icanhazip.com`
Lots of different options out there.
Echoing out Dates
The main command to be familiar with is the `date` utility. You can use `date +FMT_STRING` to specify the format to apply to the output.
Common Formats:
Command | What | Sample |
---|---|---|
`date` | Prints current date/time in `%c` format | `Sat Nov 28 03:56:03 PST 2020` |
`date -u +"%Y-%m-%dT%H:%M:%SZ"` | Prints current date as a full ISO-8601 string | `2020-11-28T12:11:27Z` |
`date +%s` | Seconds since epoch | `1606565661` |
Get Date as MS Since Epoch
If you don't actually need the full precision of milliseconds, but need the format / length, you can use: `date +%s000`

If you really need as-close-to-real MS timestamps, you can use any of these (some might not work on all systems):

- `date +%s%3N`
- `date +%s%N | cut -b1-13`
- `echo $(($(date +%s%N)/1000000))`

Above solutions were gathered from this S/O question, which has a bunch of great responses.
You could also always use `node -p "Date.now()"` if you have NodeJS installed.
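A quick check of the two approaches above. The `%N` (nanoseconds) format is a GNU `date` extension, so this assumes GNU coreutils; stock macOS/BSD `date` does not support it:

```shell
# True(ish) milliseconds via GNU date's nanosecond format
MS=$(( $(date +%s%N) / 1000000 ))
echo "$MS"                 # 13-digit millisecond timestamp

# Fixed-length stand-in: real seconds with zero-padded "milliseconds"
echo "$(date +%s)000"
```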
User Management
Adding or Modifying Users
Use `adduser {username}` to add a new (non-root) user.
If you want to create a new user, but also grant them `sudo` / admin privileges, you can either:

- Add to sudo group while creating
  - `useradd --groups sudo {username}`
  - OR: `adduser {username} --ingroup sudo`
- Create user first, then add to sudo group
  - Create user: `adduser {username}`
    - OR: `useradd {username}`
  - Add to group: `usermod -a -G sudo {username}`
💡 Note: The above commands could also be used for adding to groups other than `sudo` - just swap out `sudo` with the group you want to use

🚨 Warning: Creating a new user will not automatically grant them SSH access. See SSH Notes for details.

The `adduser {USER} {GROUP}` syntax only works if the user already exists.
Add User to Group
`usermod -a -G {groupname} {username}` (also see above section(s))
Listing User Groups
You can use `groups` to list all groups you are a part of, or use `groups {USER}` for a specific user. For listing all groups on a system, you might be able to use `less /etc/group` or `getent group` (see more here).
Deleting a User
Use `userdel {USERNAME}` to delete a user. Optionally, pass the `-r` flag to also delete their home directory.
Session, Window, and Screen Management
As an alternative to Screen or tmux, you might want to check out a task execution queuing and management system, like `pueue`.
Screen
If you need to manage multiple sessions, which you can leave and resume at any time, `screen` is the usual go-to program.
Screen Docs: linux.die.net, SS64
Command | What it Does |
---|---|
`screen -S {REF}` | Create a named session |
`screen -ls` | List active sessions |
`screen -d -r {REF}` | Detach, and then attach to existing session |
`screen -r {REF}` | Attach to existing session |
`screen -XS {REF} quit` | Kill a different session |
`echo $STY` | View current session name |
`CTRL + a`, `:`, `sessionname` | View (or, with an argument, set) the current session name |
`CTRL + a`, `d` | Detach screen from terminal (i.e., leave without stopping what is running) |
`CTRL + a`, `k` | Kill the current screen / session (with confirmation) |
tmux
The default `--help` output from `tmux` is not super helpful. I would recommend `man tmux` or this cheatsheet as alternatives.
Here are some of my most-used commands
Command | Description |
---|---|
`tmux new -s {SESSION_NAME}` | Create a named session |
`tmux attach -t {SESSION_NAME}` | Attach to a named session |
`tmux ls` | List sessions |
`tmux info` | Show all info |
`CTRL + b` | The main hotkey combo to enter the tmux command mode - i.e., what you need to press first, before a secondary hotkey. |
`CTRL + b`, `d` | Detach from the current session |
`CTRL + b`, `[` | Enter copy mode. Use `ESC` to exit |
`CTRL + b`, `s` | Interactive session switcher, from inside an active session. Faster than detaching, listing, and then re-attaching, plus you can see a preview before switching. |
tmux Configuration
tmux Config File - .tmux.conf
You can often configure tmux settings via the tmux command prompt (entered via `CTRL + b`, `:`), but for portability and easier management, it can be preferable to store configuration settings in a dedicated file. Tmux supports this by default via a file at `~/.tmux.conf` (but you can also explicitly pick a different file location and name if you want).
Here are some quick notes on the usage of this configuration file:
- By default, tmux only reads & loads the config file once, on server startup. If you make changes and want to see them reflected in tmux, you need to do one of the following:
  - Use the `tmux source-file` command
    - E.g., `tmux source-file ~/.tmux.conf`
    - You can run this from inside an existing tmux session and it will take effect. However, it will only take effect for that specific session, as opposed to all of them.
    - Other sessions will need to detach and re-attach, or run the command themselves, to pick up the change
  - Completely restart the tmux service (different from restarting a session)
- Comments are allowed, and use the standard shell `#` prefix
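As a starting point, here is a minimal hypothetical `~/.tmux.conf`, using only settings discussed on this page:

```conf
# ~/.tmux.conf - minimal example; comments use the standard shell "#" prefix

# Turn on mouse mode (convenient for scrolling, but note the copy/paste caveats
# covered in the scroll notes on this page)
set -g mouse on

# Reload this file from inside tmux with: CTRL + b, :, source-file ~/.tmux.conf
```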
Enabling Scroll in tmux
You can use `CTRL + b`, then `[` to enter copy mode, then scroll or key around (and copy text if you wish), using `ESC` to exit the mode.

You can also do `CTRL + b`, `:`, `set -g mouse on` to turn on mouse mode (or do so through your tmux config file). However, this tends to interfere with copy-and-pasting and generally is not a super smooth experience.
Troubleshooting
- Input has stopped appearing as you type it
  - This can happen for a number of reasons. The quick fix is usually to use `reset` or `stty sane`.
- Echo keeps evaluating a variable, when I meant to just print it with variable substitution
  - Check for backticks, or other special unescaped characters that could introduce an eval situation
- You keep getting the "No such file or directory" error, but only when assigning to a variable
  - Make sure you don't accidentally have a leading `$`, like `$MY_VAR=MY_PATH`
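Both of the last two pitfalls can be reproduced (and avoided) in a few lines; the variable names here are just for illustration:

```shell
# Pitfall 1: backticks inside double quotes run as command substitution
echo "`echo evaluated`"     # → evaluated (the backticked command ran)
echo '`echo evaluated`'     # → `echo evaluated` (single quotes keep backticks literal)

# Pitfall 2: no leading $ when assigning to a variable
MY_VAR="/some/path"         # correct
echo "$MY_VAR"              # → /some/path
# $MY_VAR="/some/path"      # wrong: the shell expands $MY_VAR first, then tries to run
#                           # the result as a command ("No such file or directory")
```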