Being able to precisely control which failures occur in underlying systems, and when, can be very useful in achieving a fast and stable test suite. While I am a big proponent of dependency inversion and of controlling dependencies via explicit injection points in your API, sometimes that is impractical. This is where [`fail`](https://crates.io/crates/fail) can help us immensely, providing an escape hatch for situations like those: it lets you inject failures into previously defined fail points.
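To illustrate what a fail point looks like, here is a minimal sketch. The `read_config` function, the fail point name and the injected error are all made up for this example and do not come from `bakare`; note also that the crate's `failpoints` Cargo feature has to be enabled for the points to do anything at run time.

```rust
use std::io;

// A hypothetical function with a fail point named "read-config";
// both the function and the name are purely illustrative.
fn read_config(path: &str) -> io::Result<String> {
    // When the "read-config" fail point is configured with the `return`
    // action, the closure runs instead of the real I/O below.
    fail::fail_point!("read-config", |_| {
        Err(io::Error::new(io::ErrorKind::Other, "injected failure"))
    });
    std::fs::read_to_string(path)
}

#[test]
fn surfaces_read_errors() {
    // Activate the fail point for the duration of this test...
    fail::cfg("read-config", "return").unwrap();
    assert!(read_config("config.toml").is_err());
    // ...and deactivate it again so it does not leak elsewhere.
    fail::remove("read-config");
}
```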
It comes at a price though. If you mix your other unit tests with tests that activate fail points, you will notice unexpected failures in the test suite. Since `cargo test` runs tests in parallel by default, a test that activates a fail point can interfere with another test running at the same time that never wanted that fail point active. The crate authors [recommend](https://docs.rs/fail/#usage-in-tests) running all tests that use fail points in a separate executable and using `FailScenario` to serialise test execution.
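For completeness, that recommended setup looks roughly like this; the test body and the "fsync" fail point are invented, but `FailScenario::setup()` and `teardown()` are the crate's own API for serialising fail point usage and cleaning up afterwards.

```rust
#[test]
fn restores_from_backup_even_when_fsync_fails() {
    // setup() takes a global lock, so tests wrapped this way run one
    // at a time; teardown() clears the configured fail points.
    let scenario = fail::FailScenario::setup();
    fail::cfg("fsync", "return").unwrap();

    // ... exercise the code under test here ...

    scenario.teardown();
}
```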
There is another way, which I found simpler for how I write tests, if you allow for yet another helper crate: run each test in a separate process, effectively isolating it from the rest and stopping failures from spreading.
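Before getting to the real code, here is a sketch of the idea using [`rusty-fork`](https://crates.io/crates/rusty-fork), one crate that offers this kind of per-test process isolation. It is not necessarily the helper used in the example below, and the fail point name is again made up.

```rust
use rusty_fork::rusty_fork_test;

rusty_fork_test! {
    #[test]
    fn fails_cleanly_when_storage_write_fails() {
        // This body runs in its own child process, so activating a
        // fail point here cannot interfere with any other test.
        fail::cfg("storage-write", "return").unwrap();

        // ... exercise the code under test here ...
    }
}
```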
Let's take a look at an example from [`bakare`](https://git.sr.ht/~cyplo/bakare) - my experiment in writing a backup system.