
Writing Tests

Tests are Rust functions that verify that non-test code is functioning as expected

The bodies of test functions typically perform these three actions:

  • Set up any needed data or state
  • Run the code you want to test
  • Assert that the results are what you expect

Let's look at the features Rust provides specifically for writing tests that take these actions.

This includes the test attribute, a few macros, and the should_panic attribute

The Anatomy of a Test Function

A test in Rust is a function that is annotated with the test attribute.

Attributes are metadata about pieces of Rust code

An example of this that we used in the past was the derive attribute, which was used with structs in Ch 5 (Using Structs to Structure Related Data)

To change a function into a test function, add #[test] on the line before fn
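For example, a minimal sketch of a test function (the name and assertion here are just an illustration) looks like this:

#[test]
fn my_first_test() {
    // Any free function annotated with #[test] is run by the test runner
    assert!(1 + 1 == 2);
}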

When you run your tests with the cargo test command, Rust builds a test runner binary that runs the annotated functions and reports whether each test function passes or fails.

Whenever we make a new library project with Cargo, a test module with a test function in it is automatically generated for us

This module gives you a template for writing your tests.

This is great because you don't have to look up the syntax and structure every time you start a new project.

You can add as many additional test functions and as many modules as you want using that generated template

You can find an example of this in the adder library directory

Let's create the function add in a new library project.

To create a new library, use this command:

$ cargo new adder --lib
     Created library `adder` project
$ cd adder

Here is what the adder library (the generated code) should look like:

pub fn add(left: usize, right: usize) -> usize {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}

Let's focus on the it_works function.

Note the #[test] annotation; this attribute indicates this is a test function, so that the test runner knows how to treat this function as a test

We could also have non-test functions in the tests module to help set up common scenarios or perform common operations, so we always need a way to indicate which are tests
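For example (a hypothetical helper, not part of the generated template), a function without the #[test] attribute is never run as a test, but tests can call it for shared setup:

#[cfg(test)]
mod tests {
    use super::*;

    // No #[test] attribute, so the test runner skips this function
    fn setup_inputs() -> (usize, usize) {
        (2, 2)
    }

    #[test]
    fn it_works() {
        let (left, right) = setup_inputs();
        assert_eq!(add(left, right), 4);
    }
}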

The it_works example uses the assert_eq! macro to assert that result, which contains the result of adding 2 and 2, equals 4.

This serves as an example of the format of a typical test

The cargo test command runs all tests in our project. Here is the result:

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.57s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test tests::it_works ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

We can see that cargo compiled then ran the test.

We first see running 1 test, then the name of the test function, tests::it_works, and then the result of running that test, which is ok

It then shows the overall summary test result: ok., which means that all tests passed, and the portion that reads 1 passed; 0 failed totals the number of tests that passed or failed

It's possible to mark a test as ignored so it doesn't run in a particular instance; that will be covered later in this chapter
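As a quick preview (a minimal sketch), ignoring a test only takes one extra attribute:

#[test]
#[ignore]
fn expensive_test() {
    // Skipped by a plain cargo test; runs with cargo test -- --ignored
}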

Because we haven't ignored any tests, the summary also shows 0 ignored

The 0 measured statistic is for benchmark tests that measure performance.

Benchmark tests are only available in nightly Rust as of this writing (2023, when the Rust Programming Language book was written)

The next part of the test output, which starts with Doc-tests adder, shows the results of any documentation tests.

This example does not have any documentation tests, but Rust can compile any code examples that appear in our API documentation.

This helps keep your docs and code in sync; this will be covered in Ch 14.
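As a small preview (a sketch that assumes an add_two function in the adder crate), a code example inside a /// doc comment gets compiled and run as a doc test:

/// Adds two to its argument.
///
/// ```
/// assert_eq!(adder::add_two(3), 5);
/// ```
pub fn add_two(a: usize) -> usize {
    a + 2
}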

Let's start customizing the test to fit our needs

First we will change the name of the function from it_works to exploration

pub fn add(left: usize, right: usize) -> usize {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn exploration() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }
}

This changes the result of cargo test. The output now shows exploration instead of it_works

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.59s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test tests::exploration ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Now let's add another test, but this time let's make it fail explicitly.

Tests fail when something in the test function panics.

Each test is run in a new thread, and when the main thread sees that a test thread has died, the test associated with that thread is marked as failed.

The simplest way to cause a panic is by calling the panic! macro. Here is the updated version with another test function called another

pub fn add(left: usize, right: usize) -> usize {
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn exploration() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }

    #[test]
    fn another() {
        panic!("Make this test fail");
    }
}

Here is the output after running cargo test. It shows that exploration passes and another fails:

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.72s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 2 tests
test tests::another ... FAILED
test tests::exploration ... ok

failures:

---- tests::another stdout ----
thread 'tests::another' panicked at src/lib.rs:17:9:
Make this test fail
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::another

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

Instead of ok, the line test tests::another shows FAILED

Two new sections appear between the individual results and the summary.

The first section displays the detailed reason for each test failure.

In this case it displays that another failed because it panicked at 'Make this test fail' on line 17 in the src/lib.rs file

The next section lists just the names of all the failing tests, which is useful when there are lots of tests and lots of detailed failing test output. We can use the name of a failing test to run just that test to more easily debug it.

This will be covered in the Controlling Tests section
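As a quick preview, passing a test's name to cargo test runs only the tests whose names match, for example:

$ cargo test another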

The summary line at the end displays the overall test result, which is FAILED in this case because we had one test pass and one test fail

Let's look at some macros other than panic! that are useful in tests

Checking Results with the assert! Macro

The assert! macro, provided by the std library, is useful when you want to ensure that some condition evaluates to true

We give the assert! macro an argument that evaluates to a bool

  • If the value is true then nothing happens and the test passes
  • If the value is false then the assert! macro calls panic! to cause the test to fail

Using the assert! macro helps us check that our code is functioning in the way we intend.

Here is an example: the code below defines a Rectangle struct with a can_hold method, and then a test uses the assert! macro to verify that it works in one case

#[derive(Debug)]
struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width > other.width && self.height > other.height
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn larger_can_hold_smaller() {
        let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };

        assert!(larger.can_hold(&smaller));
    }
}

Note the use super::*; line in the tests module.

The tests module is a regular module that follows the usual visibility rules that were covered in The Module Tree in Modules and Use

Because the tests module is an inner module, we need to bring the code under test in the outer module into the scope of the inner module.

We use a glob here so anything we define in the outer module is available to this tests module.

In our test named larger_can_hold_smaller, we created two Rectangle instances. We then called the assert! macro and passed it the result of calling larger.can_hold(&smaller).

This expression should return true, so our test should pass:

$ cargo test
   Compiling rectangle v0.1.0 (file:///projects/rectangle)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.66s
     Running unittests src/lib.rs (target/debug/deps/rectangle-6584c4561e48942e)

running 1 test
test tests::larger_can_hold_smaller ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests rectangle

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

It does in this case

Here is how you would add additional tests, such as asserting that a smaller rectangle cannot hold a larger one:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn larger_can_hold_smaller() {
        // --snip--
    }

    #[test]
    fn smaller_cannot_hold_larger() {
        let larger = Rectangle {
            width: 8,
            height: 7,
        };
        let smaller = Rectangle {
            width: 5,
            height: 1,
        };

        assert!(!smaller.can_hold(&larger));
    }
}

Because we expect can_hold to return false in this case, we negate the result before passing it to assert!, so the test passes when can_hold returns false.

$ cargo test
   Compiling rectangle v0.1.0 (file:///projects/rectangle)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.66s
     Running unittests src/lib.rs (target/debug/deps/rectangle-6584c4561e48942e)

running 2 tests
test tests::larger_can_hold_smaller ... ok
test tests::smaller_cannot_hold_larger ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests rectangle

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Now two tests pass. Let's see what happens when we introduce a bug.

Here is the BUGGED version of can_hold

// --snip--
impl Rectangle {
    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width < other.width && self.height > other.height
    }
}

Now with the BUGGED version here is the output

$ cargo test
   Compiling rectangle v0.1.0 (file:///projects/rectangle)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.66s
     Running unittests src/lib.rs (target/debug/deps/rectangle-6584c4561e48942e)

running 2 tests
test tests::larger_can_hold_smaller ... FAILED
test tests::smaller_cannot_hold_larger ... ok

failures:

---- tests::larger_can_hold_smaller stdout ----
thread 'tests::larger_can_hold_smaller' panicked at src/lib.rs:28:9:
assertion failed: larger.can_hold(&smaller)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::larger_can_hold_smaller

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

Our test caught the bug! Because larger.width is 8 and smaller.width is 5, the comparison of the widths in can_hold now returns false: 8 is not less than 5.

Testing Equality with the assert_eq! and assert_ne! Macros

A very common way to verify functionality is to test for equality between the result of the code under test and the value you expect the code to return.

You could do this with the assert! macro by passing in an expression using the == operator, but since this is so common, Rust provides a pair of macros, assert_eq! and assert_ne!, to do this more conveniently

These two macros compare two arguments for equality or inequality, respectively

They will also print the two values if the assertion fails, which makes it easier to see why the test failed.

On the other hand, the assert! macro only indicates that it got a false value for the == expression, without printing the values that led to the false value
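As a small sketch of the difference, both assertions below check the same condition, but only the second prints the two values when it fails:

#[test]
fn adds_two_and_two() {
    let result = 2 + 2;
    // On failure this only reports: assertion failed: result == 4
    assert!(result == 4);
    // On failure this also prints the left and right values
    assert_eq!(result, 4);
}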

Here is a full example using assert_eq! to test the add_two function:

pub fn add_two(a: usize) -> usize {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_adds_two() {
        let result = add_two(2);
        assert_eq!(result, 4);
    }
}

Here is the output

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.58s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test tests::it_adds_two ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

As you can see it passes

Here is a version with a BUG introduced

pub fn add_two(a: usize) -> usize {
    a + 3
}

Here is the output of the tests

$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test tests::it_adds_two ... FAILED

failures:

---- tests::it_adds_two stdout ----
thread 'tests::it_adds_two' panicked at src/lib.rs:12:9:
assertion `left == right` failed
  left: 5
 right: 4
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::it_adds_two

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

Our test caught this bug as well.

The it_adds_two test failed and the message tells us assertion 'left == right' failed, followed by what the left and right values are.

As we can see, the code produces 5 when it's expected to add 2 to 2, which should produce 4

We can see that this would be especially helpful when we have lots of tests going on.

Note that in some languages and test frameworks, the parameters to equality assertion functions are called expected and actual and the order in which we specify the arguments matters.

In Rust they are called left and right, and the order in which we specify the value we expect and the value the code produces doesn't matter

We could write the assertion in this test as assert_eq!(4, result), which would produce the same failure message that displays assertion failed: '(left == right)'.

The assert_ne! macro will pass if the two values are not equal and fail if they are equal.

This is useful when you don't know what the expected value will be, but you do know what the value definitely shouldn't be

For example, say we have a function that is guaranteed to change its input into some different value.

In this case, the most appropriate assertion would be assert_ne!, because we want a test that ensures the output is not equal to the input.
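A minimal sketch of that situation (the scramble function here is hypothetical):

// Promises to return a value different from its input
pub fn scramble(value: i32) -> i32 {
    value.wrapping_add(1)
}

#[test]
fn output_is_not_input() {
    // We don't know the exact output, only that it must differ from the input
    assert_ne!(scramble(3), 3);
}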

The assert_eq! and assert_ne! macros use the == and != operators under the hood, respectively.

When these assertions fail, these macros print their arguments using debug formatting, which means the values being compared must implement the PartialEq and Debug traits.

All primitive types and most of the standard library types implement these traits.

For your own structs and enums, you will need to implement PartialEq to assert equality of those types. You must also implement Debug to print the values when the assertion fails.

Because both traits are derivable, this is usually as straightforward as adding the #[derive(PartialEq, Debug)] annotation to your struct or enum definition.
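A minimal sketch of what that looks like (the Point struct here is just an illustration):

// PartialEq lets assert_eq!/assert_ne! compare instances,
// and Debug lets the macros print them when an assertion fails
#[derive(PartialEq, Debug)]
struct Point {
    x: i32,
    y: i32,
}

#[test]
fn points_are_equal() {
    assert_eq!(Point { x: 1, y: 2 }, Point { x: 1, y: 2 });
}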

See Appendix C, Derivable Traits for more info about these and other derivable traits

Adding Custom Failure Messages

You can also add a custom message to be printed with the failure message, as optional arguments to the assert!, assert_eq!, and assert_ne! macros

Any arguments specified after the required arguments are passed along to the format! macro, so you can pass a format string that contains {} placeholders and values to go in those placeholders.

Custom messages are useful for documenting what an assertion means; when a test fails, you will have an easier time figuring out what the problem is.

Here is an example: a function that greets people by name, where we want to test that the name we pass into the function appears in the output

pub fn greeting(name: &str) -> String {
    format!("Hello {name}!")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn greeting_contains_name() {
        let result = greeting("Carol");
        assert!(result.contains("Carol"));
    }
}

The requirements for this program have not been agreed upon yet, and we are fairly confident the Hello text will be replaced.

Since we don't want to check for an exact match against the value returned from the greeting function, we will just assert that the output contains the text of the input parameter

Now let's introduce a BUG

pub fn greeting(name: &str) -> String {
    String::from("Hello!")
}

Here is the output after running the test

$ cargo test
   Compiling greeter v0.1.0 (file:///projects/greeter)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.91s
     Running unittests src/lib.rs (target/debug/deps/greeter-170b942eb5bf5e3a)

running 1 test
test tests::greeting_contains_name ... FAILED

failures:

---- tests::greeting_contains_name stdout ----
thread 'tests::greeting_contains_name' panicked at src/lib.rs:12:9:
assertion failed: result.contains("Carol")
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::greeting_contains_name

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

This output just tells us that the assertion failed and which line the assertion is on.

A more useful failure message would print the value from the greeting function.

Here is an example of a custom message added to the test. Note that it is very similar to using the format! macro.

    #[test]
    fn greeting_contains_name() {
        let result = greeting("Carol");
        assert!(
            result.contains("Carol"),
            "Greeting did not contain name, value was `{result}`"
        );
    }

Now here is the test output again:

$ cargo test
   Compiling greeter v0.1.0 (file:///projects/greeter)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.93s
     Running unittests src/lib.rs (target/debug/deps/greeter-170b942eb5bf5e3a)

running 1 test
test tests::greeting_contains_name ... FAILED

failures:

---- tests::greeting_contains_name stdout ----
thread 'tests::greeting_contains_name' panicked at src/lib.rs:12:9:
Greeting did not contain name, value was `Hello!`
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::greeting_contains_name

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

We can now see the value we actually got in the test output, which would help us debug what happened instead of what we were expecting to happen.

Checking for Panics with should_panic

In addition to checking return values, it is important to check that our code handles error conditions as we expect.

For example consider the Guess type from before.

Other code that uses Guess depends on the guarantee that Guess instances will contain only values between 1 and 100.

We can write tests that ensure that attempting to create a Guess instance with a value outside that range panics

We do this by adding the attribute should_panic to our test function.

The test passes if the code inside the function panics and the test fails if the code inside the function doesn't panic

Here is the Guess implementation

pub struct Guess {
    value: i32,
}

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 || value > 100 {
            panic!("Guess value must be between 1 and 100, got {value}.");
        }

        Guess { value }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic]
    fn greater_than_100() {
        Guess::new(200);
    }
}

The #[should_panic] attribute goes after the #[test] attribute and before the test function it applies to

Here are the results of cargo test:

$ cargo test
   Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.58s
     Running unittests src/lib.rs (target/debug/deps/guessing_game-57d70c3acb738f4d)

running 1 test
test tests::greater_than_100 - should panic ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests guessing_game

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

This passes

Here is the BUGGED version

// --snip--
impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            panic!("Guess value must be between 1 and 100, got {value}.");
        }

        Guess { value }
    }
}

Here is the test result

$ cargo test
   Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.62s
     Running unittests src/lib.rs (target/debug/deps/guessing_game-57d70c3acb738f4d)

running 1 test
test tests::greater_than_100 - should panic ... FAILED

failures:

---- tests::greater_than_100 stdout ----
note: test did not panic as expected

failures:
    tests::greater_than_100

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

Tests that use should_panic can be imprecise, because such a test would pass even if the test panics for a different reason than the one we were expecting.

To make should_panic tests more precise we can add an optional expected parameter to the should_panic attribute.

The test harness will make sure that the failure message contains the provided text.

Here is an example where the Guess::new function panics with two different messages depending on whether the value is too small or too large.

// --snip--

impl Guess {
    pub fn new(value: i32) -> Guess {
        if value < 1 {
            panic!(
                "Guess value must be greater than or equal to 1, got {value}."
            );
        } else if value > 100 {
            panic!(
                "Guess value must be less than or equal to 100, got {value}."
            );
        }

        Guess { value }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    #[should_panic(expected = "less than or equal to 100")]
    fn greater_than_100() {
        Guess::new(200);
    }
}

This test would pass because the value we put in the should_panic attribute's expected parameter is a substring of the message that the Guess::new function panics with

We could have specified the entire panic message that we expect.

In this case it would be Guess value must be less than or equal to 100, got 200.
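If we wanted that stricter check, the attribute would look like this (a sketch repeating the full message from above):

#[test]
#[should_panic(expected = "Guess value must be less than or equal to 100, got 200.")]
fn greater_than_100() {
    Guess::new(200);
}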

What you decide depends on how much of the panic message is unique or dynamic and how precise you want your test to be.

In this case this panic message ensures that the function executes the else if value > 100 case

To see what happens when a should_panic test with an expected message fails, here is a BUGGED version to demonstrate (the bodies of the if and else if blocks have been swapped):

        if value < 1 {
            panic!(
                "Guess value must be less than or equal to 100, got {value}."
            );
        } else if value > 100 {
            panic!(
                "Guess value must be greater than or equal to 1, got {value}."
            );
        }

Here is the test output

$ cargo test
   Compiling guessing_game v0.1.0 (file:///projects/guessing_game)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.66s
     Running unittests src/lib.rs (target/debug/deps/guessing_game-57d70c3acb738f4d)

running 1 test
test tests::greater_than_100 - should panic ... FAILED

failures:

---- tests::greater_than_100 stdout ----
thread 'tests::greater_than_100' panicked at src/lib.rs:12:13:
Guess value must be greater than or equal to 1, got 200.
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
note: panic did not contain expected string
      panic message: `"Guess value must be greater than or equal to 1, got 200."`,
 expected substring: `"less than or equal to 100"`

failures:
    tests::greater_than_100

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`

This failure tells us that this test did panic as we expected, but the panic message did not include the expected string less than or equal to 100.

The panic message that we did get in this case was Guess value must be greater than or equal to 1, got 200

This now allows us to start figuring out where our bug is.

Using Result<T, E> in Tests

All of our tests so far panic when they fail.

We can also write tests that use Result<T, E>

Here is the test rewritten to use Result<T, E> and return an Err instead of panicking

    #[test]
    fn it_works() -> Result<(), String> {
        let result = add(2, 2);

        if result == 4 {
            Ok(())
        } else {
            Err(String::from("two plus two does not equal four"))
        }
    }

This function now has the Result<(), String> return type.

In the body of the function, rather than calling the assert_eq! macro, we return Ok(()) when the test passes and an Err with a String inside when the test fails

Writing tests so they return a Result<T, E> enables you to use the ? operator in the body of tests.

This can be a convenient way to write tests that should fail if any operation within them returns an Err variant.
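A minimal sketch of using the ? operator in such a test (the parse call is just an illustration):

#[test]
fn parses_a_number() -> Result<(), std::num::ParseIntError> {
    // The ? operator returns the Err early, which makes the test fail
    let n: i32 = "42".parse()?;
    assert_eq!(n, 42);
    Ok(())
}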

Note that you can't use the #[should_panic] annotation on tests that use Result<T, E>

To assert that an operation returns an Err variant, don't use the ? operator on the Result<T, E> value.

Instead, use assert!(value.is_err())
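A minimal sketch of asserting on an Err variant (again using parse as an illustration):

#[test]
fn rejects_a_non_number() {
    let value = "not a number".parse::<i32>();
    // Don't use ? here; the Err variant is exactly what we expect
    assert!(value.is_err());
}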