mirror of
https://github.com/darkicewolf50/RustBrock.git
synced 2025-06-15 04:54:17 -06:00
finished ch11.2
This commit is contained in:
parent
33b2893369
commit
1bed9a2f9b
17 changes: .obsidian/workspace.json (vendored)
@@ -13,12 +13,12 @@
       "state": {
         "type": "markdown",
         "state": {
-          "file": "How_to_Run.md",
+          "file": "Test Controls.md",
           "mode": "source",
           "source": false
         },
         "icon": "lucide-file",
-        "title": "How_to_Run"
+        "title": "Test Controls"
       }
     },
     {
@@ -27,16 +27,15 @@
       "state": {
         "type": "markdown",
         "state": {
-          "file": "How_to_Run.md",
+          "file": "Writing_Tests.md",
           "mode": "source",
           "source": false
         },
         "icon": "lucide-file",
-        "title": "How_to_Run"
+        "title": "Writing_Tests"
       }
     }
-    ],
-    "currentTab": 1
+    ]
   }
 ],
 "direction": "vertical"
@@ -179,11 +178,11 @@
       "command-palette:Open command palette": false
     }
   },
-  "active": "53b36d00b704136e",
+  "active": "caf0233e624d6c1c",
   "lastOpenFiles": [
-    "Writing_Tests.md",
-    "How_to_Run.md",
     "Tests.md",
+    "Writing_Tests.md",
+    "Test Controls.md",
     "Traits.md",
     "Modules and Use.md",
     "Modules.md",

1 change: How_to_Run.md
@@ -1 +0,0 @@
-# Controlling Tests
318 changes: Test Controls.md (new file)
@@ -0,0 +1,318 @@
# Controlling Tests

`cargo test` compiles your code in test mode and then runs the resultant test binary, just as `cargo run` compiles your code and then runs the resultant binary; by default it runs the tests in parallel and captures any output they generate.

You can specify command line options to change this default behavior.

Some command line options go to `cargo test` itself, and some go to the resultant test binary.

To separate these two kinds of arguments, you list the arguments that go to `cargo test`, followed by the separator `--`, and then the ones that go to the test binary.

For example, running `cargo test --help` displays the options you can use with `cargo test`, and running `cargo test -- --help` displays the options you can use after the separator.
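
To make the split concrete, here is a quick sketch (the flag values are chosen only for illustration): everything before `--` is interpreted by `cargo test`, and everything after it is handed to the test binary.

```bash
# `--release` is a cargo test option; `--test-threads=2` goes to the test binary
$ cargo test --release -- --test-threads=2
```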

## Running Tests in Parallel or Consecutively

By default, tests run in parallel using threads, which means they finish running faster and you get feedback sooner.

Because the tests run at the same time, you must ensure that they don't depend on each other or on any shared state, including a shared environment with details such as the working directory or environment variables.

One case where this would be a problem: several tests each read a text file and expect it to contain a particular value, but one test changes that value, which invalidates every test that runs after it because the file no longer contains the expected value.

One solution is to make sure each test writes to a different file; the other solution is to run the tests one at a time.
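
As a minimal sketch of that failure mode (the file name and both tests are invented for illustration), the two tests below each pass with `--test-threads=1` but can fail intermittently in parallel because they race on the same file:

```rust
#[cfg(test)]
mod tests {
    use std::fs;

    #[test]
    fn writes_answer() {
        // Shares "shared-state.txt" with the test below, so under the
        // default parallel runner the file can be overwritten mid-test.
        fs::write("shared-state.txt", "42").unwrap();
        assert_eq!(fs::read_to_string("shared-state.txt").unwrap(), "42");
    }

    #[test]
    fn writes_question() {
        fs::write("shared-state.txt", "6 * 7").unwrap();
        assert_eq!(fs::read_to_string("shared-state.txt").unwrap(), "6 * 7");
    }
}
```

Giving each test its own file (for example, a path derived from the test's name) removes the conflict without giving up parallelism.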

If you don't want to run the tests in parallel, or if you want more fine-grained control over the number of threads used, you can pass the `--test-threads` flag and the number of threads you want to use to the test binary.

Here is an example use:

```bash
$ cargo test -- --test-threads=1
```

Here we set the number of test threads to `1`, which tells the program not to use any parallelism.

This is slower than running them in parallel, but the tests won't interfere with each other if they share state.

## Showing Function Output

By default, if a test passes, Rust's test library captures anything printed to standard output.

For example, if we call `println!` in a test and the test passes, we won't see the `println!` output in the terminal; but if the test fails, we will see whatever was printed to standard output along with the rest of the failure message.

In the example below, the function prints the value of its parameter and returns 10, and we write a test that passes and a test that fails:

```rust
fn prints_and_returns_10(a: i32) -> i32 {
    println!("I got the value {a}");
    10
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn this_test_will_pass() {
        let value = prints_and_returns_10(4);
        assert_eq!(value, 10);
    }

    #[test]
    fn this_test_will_fail() {
        let value = prints_and_returns_10(8);
        assert_eq!(value, 5);
    }
}
```

Here is the output after running `cargo test`:

```
$ cargo test
   Compiling silly-function v0.1.0 (file:///projects/silly-function)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.58s
     Running unittests src/lib.rs (target/debug/deps/silly_function-160869f38cff9166)

running 2 tests
test tests::this_test_will_fail ... FAILED
test tests::this_test_will_pass ... ok

failures:

---- tests::this_test_will_fail stdout ----
I got the value 8
thread 'tests::this_test_will_fail' panicked at src/lib.rs:19:9:
assertion `left == right` failed
  left: 10
 right: 5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`
```

Note that we don't see `I got the value 4` anywhere in this output; that line is printed by the test that passes, so its output has been captured.

The output from the test that failed, `I got the value 8`, appears in the failures section of the test summary output, which also shows the cause of the test failure.

If we want to see printed values for passing tests as well, we can add the `--show-output` flag.

Here is an example use:

```bash
$ cargo test -- --show-output
```

If we run the tests again with the `--show-output` flag, here is the output:

```
$ cargo test -- --show-output
   Compiling silly-function v0.1.0 (file:///projects/silly-function)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.60s
     Running unittests src/lib.rs (target/debug/deps/silly_function-160869f38cff9166)

running 2 tests
test tests::this_test_will_fail ... FAILED
test tests::this_test_will_pass ... ok

successes:

---- tests::this_test_will_pass stdout ----
I got the value 4


successes:
    tests::this_test_will_pass

failures:

---- tests::this_test_will_fail stdout ----
I got the value 8
thread 'tests::this_test_will_fail' panicked at src/lib.rs:19:9:
assertion `left == right` failed
  left: 10
 right: 5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::this_test_will_fail

test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--lib`
```

## Running a Subset of Tests by Name

Sometimes, running a full test suite can take a long time.

If you are working on code in a particular area, you might want to run only the tests pertaining to that code.

You can choose which tests to run by passing `cargo test` the name or names of the test(s) you want to run as an argument.

Here we will write three tests on our `add_two` function:

```rust
pub fn add_two(a: usize) -> usize {
    a + 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn add_two_and_two() {
        let result = add_two(2);
        assert_eq!(result, 4);
    }

    #[test]
    fn add_three_and_two() {
        let result = add_two(3);
        assert_eq!(result, 5);
    }

    #[test]
    fn one_hundred() {
        let result = add_two(100);
        assert_eq!(result, 102);
    }
}
```

If we run the tests without any arguments, here is the output:

```
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.62s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 3 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok
test tests::one_hundred ... ok

test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

### Running Single Tests

We can pass the name of any test function to `cargo test` to run only that test.

Here is the output:

```
$ cargo test one_hundred
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.69s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test tests::one_hundred ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s
```

Note that only `one_hundred` ran and produced output.

The other two tests didn't run because their names didn't match `one_hundred`.

The test output lets us know how many tests didn't run by displaying `2 filtered out` at the end.

We can't specify the names of multiple tests this way; only the first value given to `cargo test` is used.
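
One related libtest option, not covered in the original note: the test binary also accepts an `--exact` flag that makes a name filter match the full test path exactly, which helps when one test's name is a substring of another's:

```bash
# Runs only tests::one_hundred, even if other test names contain "one_hundred"
$ cargo test -- tests::one_hundred --exact
```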

### Filtering to Run Multiple Tests

We can specify part of a test name, and any test whose name matches that value will run.

Here is the output where we use `add` to run the two tests whose names contain `add`:

```
$ cargo test add
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 2 tests
test tests::add_three_and_two ... ok
test tests::add_two_and_two ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s
```

Notice that this command ran all tests with `add` in the name and filtered out the test named `one_hundred`.

Also note that the module in which a test appears becomes part of the test's name, so we can run all the tests in a module by filtering on the module's name.
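
For example, with the listing above (where every test's full name begins with its module path `tests::`), filtering on the module path runs all three tests; this relies only on the substring matching already described:

```bash
# Substring match on the module path: runs tests::add_two_and_two,
# tests::add_three_and_two, and tests::one_hundred
$ cargo test tests::
```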

### Ignoring Some Tests Unless Specifically Requested

Sometimes a few specific tests can be very time-consuming to execute, so you may choose to exclude them during most runs of `cargo test`.

Rather than listing as arguments all the tests you do want to run, you can instead annotate the time-consuming tests with the `ignore` attribute to exclude them.

Here is an example of this:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        let result = add(2, 2);
        assert_eq!(result, 4);
    }

    #[test]
    #[ignore]
    fn expensive_test() {
        // code that takes an hour to run
    }
}
```

After `#[test]`, we add the `#[ignore]` line to the test we want to exclude.
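
As a small aside beyond the original example (the attribute-with-reason syntax is stable Rust, though the reason string here is invented): `#[ignore]` can also carry a reason that is surfaced in the test output.

```rust
#[test]
#[ignore = "takes about an hour; run with `cargo test -- --ignored`"]
fn expensive_test() {
    // code that takes an hour to run
}
```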

Now when we run our tests, `it_works` runs but `expensive_test` doesn't.

Here is the output:

```
$ cargo test
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.60s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 2 tests
test tests::expensive_test ... ignored
test tests::it_works ... ok

test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

The `expensive_test` function is listed as `ignored`.

If we want to run only the ignored tests, we can use `cargo test -- --ignored`.

Here is the output after running that command:

```
$ cargo test -- --ignored
   Compiling adder v0.1.0 (file:///projects/adder)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.61s
     Running unittests src/lib.rs (target/debug/deps/adder-92948b65e88960b4)

running 1 test
test expensive_test ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 1 filtered out; finished in 0.00s

   Doc-tests adder

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
```

By controlling which tests run, you can get feedback quickly.

When you reach a point where it makes sense to check the results of the ignored tests and you have time to wait for them, you can run `cargo test -- --ignored` instead.

If you want to run all tests regardless of whether they are ignored, you can run `cargo test -- --include-ignored`.
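
Following the pattern of the earlier examples, here are both invocations side by side:

```bash
# Run only the #[ignore]d tests
$ cargo test -- --ignored

# Run every test, whether ignored or not
$ cargo test -- --include-ignored
```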

2 changes: Tests.md
@@ -30,7 +30,7 @@ We can run these tests whenever we make changes to our code to make sure any exi
 This chapter will cover:
 - [How to Write Tests](Writing_Tests.md)
-- [Controlling How Tests Are Run](How_to_Run.md)
+- [Controlling How Tests Are Run](Test%20Controls.md)
 - [Test Organization](Test_Organization.md)
 
 But it will also talk about Rust's testing facilities, the annotations and macros available to you when writing tests, the default behavior and options provided for running your tests and how to organize tests into unit tests and integration tests.
@@ -215,7 +215,7 @@ In this case it displays that `another` failed because it `panicked at 'Make thi
 The next section lists just the names of all the failing tests, which is useful when there are lots of tests and lots of detailed failing test output. We can use the name of a failing test to run just that test to more easily debug it.
 
-This will be covered in the [`Controlling Tests`](How_to_Run.md) section
+This will be covered in the [`Controlling Tests`](Test%20Controls.md) section
 
 The summary line at the end displays the overall, with our test result which is `FAILED` in this case because we has one test pass and one test fail