Resources
- https://www.slideshare.net/AllThingsOpen/the-many-ways-to-test-your-react-app
- https://github.com/vjwilson/many-ways-to-test-react
- https://testingjavascript.com/
- https://www.slideshare.net/RyanRoemer/cascadiajs-2014-making-javascript-tests-fast-easy-friendly
- https://jaketrent.com/post/testing-react-with-jsdom/
- https://reactjs.org/docs/testing-recipes.html
- https://github.com/thlorenz/proxyquire
Common Test Patterns
Where to put tests?
The main options are to either:
- Store tests in a completely separate directory, e.g. /__tests__
- Store tests alongside source files, e.g. src/main.js and src/main.tests.js
Common Subdirectories or Companions
There are some common subdirectories and/or types of test support files:
fixtures
- This should contain static data that can be used in tests
snapshots
- Snapshots are usually files that are not hand-coded; they are generated based on your code, and usually represent a section of output
- The idea is to not re-generate them every time; you generate them when your app is in a good state
- A snapshot test generates a new snapshot whenever it runs, and compares it with the stored snapshot; if they don't match, that is an instant fail
- A snapshot doesn't have to be about UI elements; it just needs to be of any serializable value
- The usefulness of these is often debated - see Effective Snapshot Testing, by Kent C. Dodds
- One major caveat is that you need to be very sure that the stored snapshot is a valid snapshot; anything else is worse than having no snapshot at all
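For example, with Jest, a snapshot test of a plain serializable value could look like this (buildDefaultConfig() is a hypothetical function under test):
test('default config has not changed unexpectedly', () => {
  // The first run writes the snapshot to __snapshots__/; later runs compare against it
  expect(buildDefaultConfig()).toMatchSnapshot();
});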
utils (or helpers)
- Contains methods / classes / utilities that help with your tests
- Should probably not contain any actual data (leave that in fixtures)
mocks
- Generic, vague definition: A mock is a fake version of a real "thing", that emulates the real behavior, but otherwise is an incomplete (and optimally much smaller) version of the actual thing.
- Useful for testing because often using the full version of everything can slow down tests
- Mocking is a broad topic in testing. Also related to stub, spy, and dummy
- Good intro from CircleCI
- Jest has some built-in support for manual mocking, but for everything else or with other test runners, you usually want something like sinon.js
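For example, a minimal Jest sketch of mocking out a module dependency might look like this (the ../src/api and ../src/getUser modules and their exports are hypothetical):
// Replace the whole api module with a fake before the code under test requires it
jest.mock('../src/api', () => ({
  fetchUser: jest.fn().mockResolvedValue({ name: 'Ada' }),
}));

const api = require('../src/api');
const { getUserName } = require('../src/getUser'); // hypothetical module under test

test('uses the mocked api instead of a real network call', async () => {
  expect(await getUserName(1)).toBe('Ada');
  expect(api.fetchUser).toHaveBeenCalledWith(1);
});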
Mocks vs Stubs
General summary:
- A stub is usually extremely simple (a "stub" of a thing!) - essentially just a tiny bit of code that says "given x input, always return y". Stubs are not dynamic, and the expected input and output are known statically. In comparison, a mock, although fake and incomplete, should still have an interface that mirrors the real thing; this distinction means that a mock can verify that method calls are made correctly.
Since I had some trouble making the distinction in my brain click, I'm going to write out a few different ways to summarize the distinction outlined above (a small code sketch follows after this list):
- One of my favorites: "Stubs don't fail your tests, mocks can" (SO)
- Another great distinction: Mocks are testing behavior, stubs are testing state
- Another way to think about it: Essentially all mocks are stubs - or, an arrangement of stubs
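To make that concrete, here is a minimal Jest-flavored sketch (calculateTotal and sendAlert are hypothetical functions under test):
// Stub: a canned answer used only to set up state; nothing here fails the test by itself
const getTaxRate = jest.fn().mockReturnValue(0.2);
expect(calculateTotal(100, getTaxRate)).toBe(120);

// Mock: the test asserts on the interaction itself, so the fake can fail the test
const mailer = { send: jest.fn() };
sendAlert('disk full', mailer);
expect(mailer.send).toHaveBeenCalledTimes(1);
expect(mailer.send).toHaveBeenCalledWith(expect.stringContaining('disk full'));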
If you are still feeling stuck, there are lots of great responses to both of these Stack Overflow questions:
- SO: What's the difference between a mock & stub?
- SO: What's the difference between faking, mocking, and stubbing?
Jest
Jest: Resources
- CLI
- Config
- jest-community/awesome-jest
- Egghead Video: Test JavaScript with Jest (Dodds)
- Cheatsheets
Jest: Gotchas
- Don't confuse clearAllMocks with resetAllMocks! resetAllMocks actually removes any mocked implementation that you have configured.
- Use resetModules with extreme caution
- Circular references can cause all imports / requires to be undefined / empty from those files at test runtime --> See https://stackoverflow.com/a/67743329
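A quick illustration of the clear vs reset difference, using the per-mock equivalents mockClear() / mockReset() (clearAllMocks / resetAllMocks apply the same behavior to every mock at once):
const fn = jest.fn().mockReturnValue(42);

fn();
fn.mockClear(); // clears recorded calls/results, keeps the implementation
fn();           // still returns 42

fn.mockReset(); // clears calls AND removes the mocked implementation
fn();           // now returns undefined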
Jest: File Structure / Glob Patterns
Main doc: config / testRegex
- Default pattern is (/__tests__/.*|(\\.|/)(test|spec))\\.[jt]sx?$
Default Config
- Directories: /__tests__
- Files (where {ext} is .js, .jsx, .ts, or .tsx):
  - .test.{ext}
  - .spec.{ext}
  - test.{ext}
  - spec.{ext}
Jest: Only Executing Certain Tests
If you are trying to execute only certain test files, you can specify a different regex pattern that Jest should use to discover test files. You can specify this in a hard-coded config (i.e. in jest.config.js or package.json), or dynamically in the terminal, by specifying a regular expression pattern after jest.
WARNING: Most of the Jest CLI arguments that take patterns accept RegEx patterns, not standard globs, which tends to surprise many users.
If Jest is saying that testRegex has 0 matches (it can't find any test files matching your pattern), check if you have testMatch defined in your Jest config. These seem to conflict, and it can be hard to override via CLI. Here are some possible fixes:
- Try passing in --roots . as part of your command, especially if you used <rootDir> as part of your Jest config
- There is probably a way to use your system's glob expansion support and pass a file list, but you would need to first convert it to a regular expression, which might be a little bothersome in bash
Similar options exist for only running certain named tests, e.g. with --testNamePattern={PATTERN}.
You can use --listTests to verify which tests will run without actually running them. However, this is not accurate with things like --testNamePattern, which are not evaluated until Jest actually parses the test files.
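For example (the patterns here are just placeholders):
- Run only test files whose path matches a RegEx (not a glob): npx jest "components/.*\.test\.jsx?$"
- Run only tests whose name matches a pattern: npx jest --testNamePattern="renders header"
- Preview which files would be picked up: npx jest --listTests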
Jest: Examples
Jest: Intellisense for Config File
If you want TS / Intellisense support for the jest.config.js file, you can use JSDoc to type-annotate the config object.
This import path might change, and there might be a better option available.
// @ts-check
/**
* @type {import('@jest/types').Config.InitialOptions}
*/
module.exports = {
// Jest config goes here
// Should get type-checking and auto-complete!
}
Jest: TypeScript Global Types
- Deprecated: @types/jest
- Recommended: Use @jest/globals and explicit imports to force types to stay in sync with the installed jest module
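For example, with @jest/globals the helpers are imported explicitly instead of relying on injected globals (a minimal sketch):
import { describe, expect, test } from '@jest/globals';

describe('math', () => {
  test('adds', () => {
    expect(1 + 1).toBe(2);
  });
});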
If trying to use @types/jest in plain JS with @ts-check, you can use a triple-slash directive:
// @ts-check
/// <reference types="jest" />
describe('My Test', () => {
//
});
Jest: JSDOM with Jest
Jest has historically shipped with and used JSDOM as the default test environment (note: newer major versions of Jest default to the node environment instead, and require installing jest-environment-jsdom and setting testEnvironment to "jsdom"). With the JSDOM environment active, Jest exposes the associated variables as globals, which means you can use things like document.querySelector() inside Jest tests with no extra setup required.
Also see: Jest - DOM Manipulation
Jest: Access to JSDOM config, settings, etc.
Unfortunately, Jest seems to mostly just expose the DOM from JSDOM (via the document global), and not the controls of JSDOM itself. For example, by default you can't access the JSDOM config, or use JSDOM methods directly.
There are some workarounds though:
- Pass options to JSDOM through testEnvironmentOptions
  - If all you need to do is modify JSDOM defaults, and don't need access to JSDOM itself, this should work fine
- Install jsdom as a dependency and just use it normally
- Use jest-environment-jsdom (ships with Jest) to access the config
  - See directions here
- Install a package that automatically does the above step of exposing JSDOM via jest-environment-jsdom
  - simon360/jest-environment-jsdom-global
Jest - Using Window
To use window in your tests, you either need to set the testEnvironment setting to jsdom, or do one of the following:
- Add window under globals:
{ "jest": { "globals": { "window": {} } } }
- Use global instead, or define window as a property on global (see replies to this SO)
Ava
Ava: Grouping Tests
For those used to grouping tests and/or deep nesting via BDD language like describe(), you might be disappointed; this is largely unsupported in Ava. The only recommended way at the moment (unless things change) to group tests (or emulate test suites) is by splitting up your tests by file.
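For example, a common workaround is to prefix test titles instead of nesting describe() blocks (a minimal sketch; parse() is a hypothetical function under test):
const test = require('ava');

test('parser: handles empty input', t => {
  t.deepEqual(parse(''), []);
});

test('parser: handles a single token', t => {
  t.is(parse('a').length, 1);
});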
Ava: Specifying Files
There are two options for using multiple test files with Ava:
- Option A: Make sure your files match Ava's glob pattern matching
- Option B: Pass specific files in options
  - Via the CLI
    - Pass a glob pattern as the main argument to ava
    - E.g. ava 'my-test-dir/**/*.run-once.*'
  - Via the config
    - Use the same glob syntax as above, but pass as an array of glob strings under the "files" key
    - E.g.: { "ava": { "files": [ "my-test-dir/**/*.run-once.*", "fred-tests/**/*" ] } }
🚨 WARNING: The match option is for matching test names, not file names.
🚨 WARNING: I've had issues with specifying files via the CLI refusing to override the config (provided by package.ava.files). Something to be aware of.
Mocha
Mocha: How to run a specific test
mocha --grep "{describeTextPattern}"
Note: This is to run a specific test, not a test file.
Mocha: File Structure / Glob Patterns
📄 Main doc section: "The Test Directory"
- Defaults
  - Glob pattern: "./test/*.{js,cjs,mjs}"
- Custom
  - CLI: mocha --recursive "./customdir/*.js"
  - CLI: mocha "./customdir/**/*.js"
Mocha: How to run a single test file / specific file
Just pass the file as the last argument / input to the mocha CLI. In the docs, they call this spec.
If you want to add a dedicated command to your package.json, so devs can run a single file with a bunch of hard-coded flags before the filename, you might run into issues if you have hardcoded the glob pattern into mocha.opts or a different config file (see wont-fix GH issue).
If the above is true for you, I would recommend removing the hardcoded pattern from your options file and rewriting scripts to look something like this:
{
"scripts": {
"test": "mocha {globPattern}",
"test-file": "mocha"
}
}
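With a setup like that, a specific file can then be passed through npm's argument separator (the path here is just a placeholder):
npm run test-file -- ./test/example.spec.js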
Mocha - TypeScript
Mocha has an example directory for using it with TypeScript.
As of writing this, you can use Mocha with ts-node and CJS (CommonJS) with minimal setup, OR, you can use ESM, but that requires a lot more tweaking.
For Mocha + ts-node + ESM, here is most of what I had to change:
- tsconfig.json
  - Explicitly set module to ESM (e.g. esnext or ES2020)
  - Set moduleResolution to node
- .mocharc.json
  - In addition to using "require": "ts-node/register", also add "loader": "ts-node/esm"
- TS source code
  - Fix up code that breaks when targeting ESM with ts-node
    - For details, see my ESM troubleshooting page
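Putting the Mocha side of that together, a minimal .mocharc.json might look like this (the spec glob is just a placeholder):
{
  "require": "ts-node/register",
  "loader": "ts-node/esm",
  "spec": ["test/**/*.spec.ts"]
}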
If your TS repo is set to target ESM, but you don't want to have to make all the above changes, an easier solution might be to tell ts-node to use commonjs for your mocha tests. There are a few ways to do this with ts-node (--project, --compiler-options, etc.), but the easiest for Mocha is overriding a tsconfig option via the TS_NODE_COMPILER_OPTIONS environment variable. However, there is a wrinkle to this - if you've used "type": "module" in your root package.json file, then ts-node will throw Error [ERR_REQUIRE_ESM]: Must use import to load ES Module errors even if TS is targeting commonjs.
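For reference, the environment-variable override itself looks something like this (the test glob is a placeholder):
TS_NODE_COMPILER_OPTIONS='{"module":"commonjs"}' mocha -r ts-node/register "test/**/*.spec.ts"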
Mocha - Config
Warning: Be careful about the use of node-options in a Mocha config file; it can override require and loader, even if the array is empty (as far as I can tell).
Code coverage
The most popular is probably Istanbul; it's built into Jest, and comes recommended with Ava.
NYC is the command-line wrapper for Istanbul, which you will often see referred to.
Ignoring stuff
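With Istanbul / NYC, the usual mechanisms are ignore comments in the source and exclude globs in the NYC config's exclude array (c8 uses /* c8 ignore next */ style comments instead). For example:
/* istanbul ignore next */
function devOnlyLogging() {
  // Excluded from coverage reports
}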
Istanbul / NYC
Some issues I've run into:
- An else branch reported as completely ignored (or something like that), despite console logs clearly showing the code has executed
  - Make sure you are actually returning something from the code that is being reported as not covered
  - If you are using async / await, check that:
    - There is no place where you accidentally forgot to await something
    - Your test runner is properly configured to handle async code
    - Changing to a .then() approach doesn't fix it
- It reports 0 (zero) lines covered, even after configuring with "all": true and explicit include and exclude arrays
  - If you are using native ESM, which is now supported in Node, it appears as though NYC does not yet support it. A nice drop-in replacement that I found is c8. Or use the Istanbul esm-loader-hook package.
Good to know:
- If you want a nice HTML view, add an appropriate reporter, such as --reporter=html
  - The default output directory is /coverage; you can open its index.html in a browser, no local server required.
  - --reporter=lcov produces both the html report and the standard lcov output, so it is my recommended default
- Multiple reporters are passed like this: --reporter=lcov --reporter=text
Using Codecov
Codecov is an online service for hosting code coverage reports and integrating those reports with GitHub PRs, commits, etc. It is $10/month/user, but free for open-source / public repos. It also has an incredibly streamlined process; you don't even need to use tokens if it is a public repository.
Codecov - Simple Setup
Two easy ways I would recommend to integrate Codecov are either the CLI or GitHub Actions. The benefit of GH Actions is that you can also get automated comments on PRs with changes to code coverage %.
CLI:
- Call your generator / reporter to produce the coverage file against your test command, and then call npx codecov to upload it
  - Example: npx nyc --reporter=lcov npm test && npx codecov
- Details for NYC: here
GitHub Actions:
- Call your reporter to produce the coverage file against your test command
  - Example: nyc --reporter=lcov npm run test
  - Example: nyc --reporter=lcov ava
  - Example: c8 --reporter=lcov npm run test
- Add the Codecov GitHub Action (codecov/codecov-action) to your GitHub workflow YAML file
  - If your repo is public, you don't need to do anything other than add uses: codecov/codecov-action@v1
  - If your repo is private, you will need to set up and pass a token
- For more details, read the docs
As an alternative to the GitHub Action, you can use the NPM package instead, but since GitHub Actions is not 100% supported by that package, you might need to tweak the default command.
Once you have setup the upload process, you can grab a badge to stick in your README by following this syntax:
[![Code Coverage Badge](https://codecov.io/gh/{USER}/{REPO}/branch/{BRANCH}/graph/badge.svg)](https://codecov.io/gh/{USER}/{REPO}/branch/{BRANCH})
Browser / DOM Testing
Running browser-based code
First, it is important to note that there are different types of testing methods when it comes to testing code that manipulates or generates DOM / HTML / browser code, and/or uses standard browser APIs.
In general, these can be divided into two categories:
- Mocking large parts of the browser DOM logic in JS - this is pretty much just JSDOM
  - Essentially the engine / DOM-processing part of the browser is mocked entirely in JS, but not any of the actual rendering / UI / etc.
  - Tons of testing libraries use this as the actual environment to run your tests in.
    - Examples: Jest, Enzyme, Testing-Library (via Jest), and more
  - There are also companion libs for working with / augmenting the global window object
  - Advantage: It is very fast compared with actually running a full browser
  - Disadvantage: It is not a true browser test, and can't be used for tests that verify UI, rendering, layout, etc.
  - There are also some huge holes in which standard web APIs are emulated in JSDOM. For example...
    - HTMLElement.innerText is still not supported as of 2020, despite having about 99% real-world browser support
    - window.getSelection()
    - etc.
- Automated running of an actual browser process
  - Basically just an automated runner that hooks into a real browser process
  - Popular examples are Playwright, Puppeteer, Cypress, and TestCafe.
  - Advantages:
    - Apart from manual testing by hand, this is basically the closest you can get to a real 1:1 test of your app that puts it through the same environment that your users use.
      - This means you can test against browser quirks, different rendering engines, etc.
    - Most of these automated browser testers support a "headless" mode, which basically runs the browser without bothering to actually render the pixels on the screen (but still capturing the output).
      - Although this doesn't give enough of a speed boost to match something like JSDOM, it is still a considerable performance boost over non-headless.
    - Some testers support multiple browsers with the same API - Microsoft Playwright is a somewhat new entrant that does this and looks to be extremely promising.
    - Can be used for more than just testing!
      - For example, a common alternative use-case is generating screenshots or PDF captures of generated webpages (a small sketch follows below).
  - Disadvantages:
    - Slow(er) and heavier: Running a real browser, headless or not, requires more resources than adding some extra NodeJS code to an existing NodeJS app.
      - Requires more resources (CPU, RAM, and storage) wherever your tests are running (local, Jenkins, etc.)
    - More complex: There are extra layers of abstraction (JS, APIs, OS details, etc.), different operating systems and their peculiarities, and many more details that have to work together to make this work.
Another way to look at the two types above: JSDOM is usually good enough for unit or integration tests, where all you need is a quick diff between expected HTML and actual HTML; but for an end-to-end (E2E) or functional test of something that runs in a browser and needs to verify behavior or appearance, you probably need a true browser runner.
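As a small illustration of the second category (and of the screenshot use-case mentioned above), here is a minimal Puppeteer sketch; the URL and output path are placeholders:
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });
  await browser.close();
})();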
Reading in HTML or JSX Files
If you are building code that manipulates the DOM and looking to test that functionality, you might be wondering how to actually feed elements into the tester.
Vanilla HTML
If you are trying to load vanilla HTML into the test, you have several options:
- Composing an HTML string manually
  - Regular string: document.body.innerHTML = '<p>' + myText + '</p>';
  - Template literal: document.body.innerHTML = `<p>${myText}</p>`;
  - If you are using JSDOM directly, you might want to pass it in the constructor: const dom = new JSDOM('<p>' + myText + '</p>')
- Feeding in a saved HTML file with fs
  - document.body.innerHTML = fs.readFileSync('./fixtures/test-page.html', 'utf8')
- Programmatically, with DOM APIs
  - document.createElement(), etc.
- Via explicit loader functions (JSDOM)
  - const dom = await JSDOM.fromFile('./fixtures/test-page.html')
This is a good summary for JSDom: https://dustinpfister.github.io/2018/01/11/nodejs-jsdom/
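For example, combining the fs option with a Jest + JSDOM test might look like this (the fixture path and the h1 assertion are just placeholders):
const fs = require('fs');
const path = require('path');

test('fixture page contains a heading', () => {
  document.body.innerHTML = fs.readFileSync(
    path.join(__dirname, 'fixtures', 'test-page.html'),
    'utf8'
  );
  expect(document.querySelector('h1')).not.toBeNull();
});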
JSX
Since so many testing libraries and utilities are focused on React at the moment (understatement), the nice thing is that pulling in and testing JSX is baked into most libraries and made to be as easy as possible. Usually, the setup for pulling in JSX is just two steps:
- Import the actual component, just like you normally would in React
  - E.g. import MyComponent from '../src/components/MyComponent' (a .jsx file)
- Load the JSX into the DOM, or into a snapshot (examples of methods below)
Loading JSX into DOM or Snapshot
Examples / basic API with different test libs:
Enzyme
import React from 'react';
import MyComponent from '../src/components/MyComponent';
import { shallow } from 'enzyme';
// ...
const wrapper = shallow(<MyComponent myProp={val} />);
import React from 'react';
import MyComponent from '../src/components/MyComponent';
import { mount } from 'enzyme';
// ...
const wrapper = mount((
<MyComponent myProp={val} />
));
React-Testing-Library
React-Testing-Library render:
import React from 'react';
import MyComponent from '../src/components/MyComponent';
import { render, fireEvent, waitForElement } from '@testing-library/react'
// ...
const testUtils = render(<MyComponent myProp={val} />);
Jest
import React from 'react';
import MyComponent from '../src/components/MyComponent';
import renderer from 'react-test-renderer';
// ...
const component = renderer.create(<MyComponent myProp={val} />);
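To turn this into a snapshot test, the rendered tree is typically serialized and compared:
expect(component.toJSON()).toMatchSnapshot();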
- They (the Jest docs) actually recommend that you use react-testing-library or Enzyme
- For a TestUtils approach, see below ("React Built-Ins")
React Built-Ins
There are actually some built-in JSX test utilities that you probably already have access to if you are using React. They are in the react-dom package, which you should have as a default dependency if you used create-react-app.
There are a bunch of helpful methods, but the most helpful for loading JSX is renderIntoDocument():
import React from 'react';
import MyComponent from '../src/components/MyComponent';
import ReactTestUtils from 'react-dom/test-utils';
// ...
const component = ReactTestUtils.renderIntoDocument(<MyComponent myProp={val} />);
- Etc...
Additional dependencies
If you are starting with create-react-app, or a plug-n-play test solution, dependencies should be pretty straightforward. Regardless, usually there are two main dependencies required:
- React
  - Required, since JSX really just transpiles to calls to React.createElement()
- A library method to load the React element into either JSDOM or a snapshot
  - The Jest docs have a pretty good overview of how this works across different test tools
  - Jest uses react-test-renderer for snapshots, but for loading into JSDOM, there are multiple options (see the above section on loading JSX into DOM)
Depending on your needs, you might also need Babel to transpile code.
Executing JavaScript with JSDOM
Script Execution Settings
First, if you are looking for how to let scripts (inline or external) load and execute on the initial DOM load, take a look at the "executing scripts" part of the main readme. Basically, you need to tweak the default settings:
{
"testEnvironmentOptions": {
"runScripts": "dangerously",
"resources": "usable"
}
}
How to add or eval scripts on the fly
Once you already have a DOM loaded, how do you programmatically add / execute new scripts?
There are a few ways to execute JS with JSDOM, and not all are equal.
Via script tag
Just like in a real browser, an option for executing scripts is to actually inject new <script> tags into the DOM. For example, take this sample Jest test:
test('Injects script via tag', () => {
const scriptElem = document.createElement('script');
const scriptText = document.createTextNode(`window.testString = 'abc123';`);
scriptElem.appendChild(scriptText);
document.body.appendChild(scriptElem);
// window = global here
expect(global.testString).toEqual('abc123');
});
Warning: Although this works fine in browsers, setting the text of the tag via scriptElem.innerText = "..." does not work in JSDOM for some reason; it ends up injecting an empty script tag.
Via eval()
You can also execute code via window.eval(), again similar to a native browser. In both the script-tag method and eval, the executed code has access to the DOM managed by JSDOM as well. For example:
test('Injects via eval', () => {
global.eval(`document.querySelectorAll("pre").forEach(elem => elem.remove());`);
const preCount = document.querySelectorAll("pre").length;
expect(preCount).toEqual(0);
});