
Controlling Tests

cargo test compiles your code in test mode and runs the resulting test binary, just like how cargo run compiles your code then runs the resulting binary. By default, the test binary runs all tests in parallel and captures any output generated during the test runs.

You can specify command line options to change this default behavior.

Some command line options go to cargo test, and some go to the resulting test binary.

To separate these two types of arguments, you list the arguments that go to cargo test, followed by the separator --, and then the ones that go to the test binary.

For example, running cargo test --help displays the options you can use with cargo test, and running cargo test -- --help displays the options you can use after the separator.
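Here is an example that uses both kinds of arguments: --release goes to cargo test itself, and --test-threads=2 (covered below) goes to the test binary

$ cargo test --release -- --test-threads=2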

Running Tests in Parallel or Consecutively

By default, tests run in parallel using threads, which means they finish running faster and you get feedback sooner.

You must ensure that your tests don't depend on each other or on any shared state, including a shared environment, with details such as the working directory or environment variables.

One case where this would be a problem is when tests share a text file: one test expects the file to contain a particular value, but another test overwrites that value, so every test that runs afterward and still expects the original value fails, even though the code under test is correct.

One solution is to make sure each test writes to a different file; the other solution is to run the tests one at a time.
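Here is a minimal sketch of the first solution (the file names here are made up for illustration): each test writes to its own file, so the tests can safely run in parallel

#[cfg(test)]
mod tests {
	use std::fs;

	// Each test uses its own file, so parallel runs can't interfere.
	#[test]
	fn writes_first_file() {
		let path = std::env::temp_dir().join("test_output_a.txt");
		fs::write(&path, "expected value").unwrap();
		assert_eq!(fs::read_to_string(&path).unwrap(), "expected value");
		let _ = fs::remove_file(&path);
	}

	#[test]
	fn writes_second_file() {
		let path = std::env::temp_dir().join("test_output_b.txt");
		fs::write(&path, "another value").unwrap();
		assert_eq!(fs::read_to_string(&path).unwrap(), "another value");
		let _ = fs::remove_file(&path);
	}
}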

If you don't want to run the tests in parallel, or if you want more fine-grained control over the number of threads used, you can send the --test-threads flag and the number of threads you want to use to the test binary.

Here is an example use

$ cargo test -- --test-threads=1

Here we set the number of test threads to 1, which tells the program not to use any parallelism.

This is slower than running them in parallel, but the tests won't interfere with each other if they share state.

Showing Function Output

By default, if a test passes, the Rust test library captures anything printed to standard output.

For example, if we call println! in a test and the test passes, we won't see the println! output in the terminal. But if the test fails, we will see whatever was printed to standard output with the rest of the failure message.

For example, here is a function that prints the value of its parameter and returns 10, along with a test that passes and a test that fails.

fn prints_and_returns_10(a: i32) -> i32 {
	println!("I got the value {a}");
	10
}

#[cfg(test)]
mod tests {
	use super::*;

	#[test]
	fn this_test_will_pass() {
		let value = prints_and_returns_10(4);
		assert_eq!(value, 10);
	}

	#[test]
	fn this_test_will_fail() {
		let value = prints_and_returns_10(8);
		assert_eq!(value, 5);
	}
}

Here is the output after running cargo test

$ cargo test
   Compiling silly-function v0.1.0 (file:///projects/silly-function)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.58s
     Running unittests src/lib.rs (target/debug/deps/silly_function-160869f38cff9166)

running 2 tests
test tests::this_test_will_fail ... FAILED
test tests::this_test_will_pass ... ok

failures:

---- tests::this_test_will_fail stdout ----
I got the value 8
thread 'tests::this_test_will_fail' panicked at src/lib.rs:19:9:
assertion `left == right` failed
  left: 10
 right: 5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

Note that we don't see I got the value 4 in the output; it is printed by the test that passes, so that output has been captured.

The output from the test that failed, I got the value 8, appears in the section of the test summary output that also shows the cause of the test failure.

If we want to see printed values for passing tests as well, we can add the --show-output flag.

Here is an example use

$ cargo test -- --show-output

If we run the tests again with the --show-output flag, here is the output

$ cargo test -- --show-output
   Compiling silly-function v0.1.0 (file:///projects/silly-function)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.60s
     Running unittests src/lib.rs (target/debug/deps/silly_function-160869f38cff9166)

running 2 tests
test tests::this_test_will_fail ... FAILED
test tests::this_test_will_pass ... ok

successes:

---- tests::this_test_will_pass stdout ----
I got the value 4


successes:
    tests::this_test_will_pass

failures:

---- tests::this_test_will_fail stdout ----
I got the value 8
thread 'tests::this_test_will_fail' panicked at src/lib.rs:19:9:
assertion `left == right` failed
  left: 10
 right: 5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

Running a Subset of Tests by Name

Sometimes, running a full test suite can take a long time

If you are working on code in a particular area, you might want to run only the tests pertaining to that code.

This can be done by passing cargo test the name or names of the test(s) you want to run as an argument.

Here we will write three tests for our add_two function

pub fn add_two(a: usize) -> usize {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn add_two_and_two() {
        let result = add_two(2);
        assert_eq!(result, 4);
    }

    #[test]
    fn add_three_and_two() {
        let result = add_two(3);
        assert_eq!(result, 5);
    }

    #[test]
    fn one_hundred() {
        let result = add_two(100);
        assert_eq!(result, 102);
    }
}

If we run the tests without any arguments, here is the output

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.62s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 3 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok
test tests::one_hundred ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Running Single Tests

We can pass the name of any test function to cargo test to run only that test.

Here is the output

$ cargo test one_hundred
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.69s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test tests::one_hundred ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s

Note that only the test named one_hundred ran.

The other two tests didn't run because their names didn't match that value.

The test output lets us know how many tests we didn't run by displaying 2 filtered out at the end

We can't specify the names of multiple tests this way; only the first value given to cargo test will be used.

Filtering to Run Multiple Tests

We can specify part of a test name, and any test whose name matches that value will be run.

Here is the output where we use add to run the two tests whose names contain add

$ cargo test add
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 2 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s

Notice that this command ran all tests with add in the name and filtered out the test named one_hundred.

Also note that the module in which a test appears becomes part of the test's name, so we can run all the tests in a module by filtering on the module's name.
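For example, since all the tests above live in a module named tests, a command like this should run every one of them, because each test's full name starts with tests::

$ cargo test tests::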

Ignoring Some Tests Unless Specifically Requested

Sometimes a few specific tests can be very time-consuming to execute, so you may choose to exclude those during most runs of cargo test

Rather than listing as arguments all the tests you do want to run, you can instead annotate the time-consuming tests with the ignore attribute to exclude them.

Here is an example of this

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }

    #[test]
    #[ignore]
    fn expensive_test() {
        // code that takes an hour to run
    }
}

After #[test], we add the #[ignore] line to the test we want to exclude.

Now when we run our tests, it_works runs but expensive_test doesn't.

Here is the output

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.60s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 2 tests
test tests::expensive_test ... ignored
test tests::it_works ... ok

test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

The expensive_test function is listed as ignored

If we want to run only the ignored tests, we can use cargo test -- --ignored

Here is the output after running that command

$ cargo test -- --ignored
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test expensive_test ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

By controlling which tests run, we can get feedback quickly.

When you reach a point where it makes sense to check the results of the ignored tests, and you have time to wait for them, you can run cargo test -- --ignored instead.

If you want to run all tests regardless of whether they are ignored, you can run cargo test -- --include-ignored
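You can also combine a name filter with these flags; for example, given the code above, this should run only the ignored test named expensive_test

$ cargo test expensive_test -- --ignored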