finshed ch17.3
Some checks failed
Test Gitea Actions / first (push) Successful in 15s
Test Gitea Actions / check-code (push) Failing after 15s
Test Gitea Actions / test (push) Has been skipped
Test Gitea Actions / documentation-check (push) Has been skipped

This commit is contained in:
darkicewolf50 2025-03-25 11:56:31 -06:00
parent c73d875808
commit 455516765e
3 changed files with 366 additions and 4 deletions


```diff
@@ -63,6 +63,20 @@
 "title": "Any Number of Futures"
 }
 },
+{
+"id": "8d868fd701da33a8",
+"type": "leaf",
+"state": {
+"type": "markdown",
+"state": {
+"file": "Futures in Sequence.md",
+"mode": "source",
+"source": false
+},
+"icon": "lucide-file",
+"title": "Futures in Sequence"
+}
+},
 {
 "id": "2a974ca5442d705f",
 "type": "leaf",
@@ -116,7 +130,7 @@
 }
 }
 ],
-"currentTab": 3
+"currentTab": 4
 }
 ],
 "direction": "vertical"
@@ -259,8 +273,10 @@
 "command-palette:Open command palette": false
 }
 },
-"active": "6142f6650517896f",
+"active": "8d868fd701da33a8",
 "lastOpenFiles": [
+"Any Number of Futures.md",
+"Futures in Sequence.md",
 "Futures and Async.md",
 "Async, Await, Futures and Streams.md",
 "Concurrency.md",
@@ -286,8 +302,6 @@
 "Traits.md",
 "Modules and Use.md",
 "Modules.md",
-"Generic Types Traits and Lifetimes.md",
-"Generics.md",
 "does_not_compile.svg",
 "Untitled.canvas",
 "Good and Bad Code/Commenting Pratices",
```


@@ -416,3 +416,350 @@
If you have long-running blocking operations, async can be a useful tool for providing ways for different parts of the program to relate to each other.
*How* would you hand control back to the runtime in those cases?
## Yielding Control to the Runtime
Let's simulate a long-running operation:
```rust
use std::thread;
use std::time::Duration;

fn slow(name: &str, ms: u64) {
    thread::sleep(Duration::from_millis(ms));
    println!("'{name}' ran for {ms}ms");
}
```
The code uses `std::thread::sleep` instead of `trpl::sleep` so that calling `slow` will block the current thread for a number of milliseconds.
We then can use `slow` to stand in for real-world operations that are both long running and blocking.
Here we use `slow` to emulate doing CPU-bound work in a pair of futures:
```rust
let a = async {
println!("'a' started.");
slow("a", 30);
slow("a", 10);
slow("a", 20);
trpl::sleep(Duration::from_millis(50)).await;
println!("'a' finished.");
};
let b = async {
println!("'b' started.");
slow("b", 75);
slow("b", 10);
slow("b", 15);
slow("b", 350);
trpl::sleep(Duration::from_millis(50)).await;
println!("'b' finished.");
};
trpl::race(a, b).await;
```
To begin, each future only hands control back to the runtime *after* carrying out a bunch of slow operations.
You see this output from this code:
```
'a' started.
'a' ran for 30ms
'a' ran for 10ms
'a' ran for 20ms
'b' started.
'b' ran for 75ms
'b' ran for 10ms
'b' ran for 15ms
'b' ran for 350ms
'a' finished.
```
As with the earlier example, `race` still finishes as soon as `a` is done.
Notice that there is no interleaving between the two futures.
The `a` future does all of its work until the `trpl::sleep` is awaited, then the `b` future does all of its work until its `trpl::sleep` is awaited.
Finally the `a` future completes.
To allow both futures to make progress between their slow tasks we need to add await points so we can hand control back to the runtime.
This means we need something to await.
If we removed the `trpl::sleep` at the end of the `a` future, it would complete without the `b` future running *at all*.
Now we will try using `sleep` as a starting point for letting operations switch off making progress.
```rust
let one_ms = Duration::from_millis(1);
let a = async {
println!("'a' started.");
slow("a", 30);
trpl::sleep(one_ms).await;
slow("a", 10);
trpl::sleep(one_ms).await;
slow("a", 20);
trpl::sleep(one_ms).await;
println!("'a' finished.");
};
let b = async {
println!("'b' started.");
slow("b", 75);
trpl::sleep(one_ms).await;
slow("b", 10);
trpl::sleep(one_ms).await;
slow("b", 15);
trpl::sleep(one_ms).await;
slow("b", 35);
trpl::sleep(one_ms).await;
println!("'b' finished.");
};
```
Here we added `trpl::sleep` calls with await points between each call to `slow`.
Now the two futures' work is interleaved:
```
'a' started.
'a' ran for 30ms
'b' started.
'b' ran for 75ms
'a' ran for 10ms
'b' ran for 10ms
'a' ran for 20ms
'b' ran for 15ms
'a' finished.
```
Here the `a` future runs for a bit before handing control off to `b`, because it calls `slow` before ever calling `trpl::sleep`.
After that the futures swap back and forth each time one of them hits an await point.
In this case we yield after every call to `slow`, but we could break up the work in whatever way makes the most sense to us.
We don't actually want to *sleep* here, but we want to switch control.
We want to make progress as fast as we can.
We need a way to hand control back to the runtime.
We can do this directly with the `yield_now` function.
In this example we replace all of the `sleep` calls with `yield_now`
```rust
let a = async {
println!("'a' started.");
slow("a", 30);
trpl::yield_now().await;
slow("a", 10);
trpl::yield_now().await;
slow("a", 20);
trpl::yield_now().await;
println!("'a' finished.");
};
let b = async {
println!("'b' started.");
slow("b", 75);
trpl::yield_now().await;
slow("b", 10);
trpl::yield_now().await;
slow("b", 15);
trpl::yield_now().await;
slow("b", 35);
trpl::yield_now().await;
println!("'b' finished.");
};
```
This is now much clearer about the intent, and it can be significantly faster than using `sleep`, because timers such as the one used by `sleep` often have limits on how granular they can be.
The version of `sleep` we use, for example, will always sleep for at least a millisecond, even if we pass it a `Duration` of one nanosecond.
Modern computers are *fast*, and they can do a lot in one millisecond.
We can demonstrate this by setting up a little benchmark.
Note: this isn't an especially rigorous way to do performance testing, but it suffices to show the difference here.
```rust
extern crate trpl; // required for mdbook test
use std::time::{Duration, Instant};
fn main() {
trpl::run(async {
let one_ns = Duration::from_nanos(1);
let start = Instant::now();
async {
for _ in 1..1000 {
trpl::sleep(one_ns).await;
}
}
.await;
let time = Instant::now() - start;
println!(
"'sleep' version finished after {} seconds.",
time.as_secs_f32()
);
let start = Instant::now();
async {
for _ in 1..1000 {
trpl::yield_now().await;
}
}
.await;
let time = Instant::now() - start;
println!(
"'yield' version finished after {} seconds.",
time.as_secs_f32()
);
});
}
```
Here we skip all of the status printing, pass a one-nanosecond `Duration` to `trpl::sleep`, and let each future run by itself, with no switching between the futures.
Then we run each version for 1,000 iterations and see how long the future using `trpl::sleep` takes compared to the future using `trpl::yield_now`.
The version using `yield_now` is *way* faster.
This means that async can be useful even for CPU-bound tasks, depending on what else your program is doing.
This provides a useful tool for structuring the relationships between different parts of the program.
This is a form of *cooperative multitasking*, where each future has the power to determine when it hands control over via await points.
This also means that each future also has the responsibility to avoid blocking for too long.
In some Rust-based embedded operating systems, this is the *only* kind of multitasking.
In real-world apps, you usually won't be alternating function calls with await points on every single line.
While yielding control in this way is relatively inexpensive, it is not free.
In many cases trying to break up a compute bound task might make it significantly slower.
Sometimes it is better for the *overall* performance to let an operation block briefly.
You can always measure to see what your code's actual performance bottlenecks are.
The underlying dynamic is important to keep in mind, though, if you *are* seeing a lot of work happening serially that you expected to happen concurrently.
## Building Our Own Async Abstractions
We can also compose futures together to create new patterns.
For example, we can build a `timeout` function out of the async building blocks we already have.
When we are done, the result will be another building block we could use to create still more async abstractions.
This shows how we would expect this `timeout` to work with a slow future.
```rust
extern crate trpl; // required for mdbook test
use std::time::Duration;
fn main() {
trpl::run(async {
let slow = async {
trpl::sleep(Duration::from_millis(100)).await;
"I finished!"
};
match timeout(slow, Duration::from_millis(10)).await {
Ok(message) => println!("Succeeded with '{message}'"),
Err(duration) => {
println!("Failed after {} seconds", duration.as_secs())
}
}
});
}
```
To implement this we will think about the API for `timeout`:
- It needs to be an async function itself so we can await it
- Its first parameter should be a future to run
- We can make it generic to allow it to work with any future
- The second parameter will be the maximum time to wait.
- If we use a `Duration`, this will make it easy to pass along to `trpl::sleep`
- It should return a `Result`
- If the future completes successfully, the `Result` will be `Ok` with the value produced by the future.
- If the timeout elapses first the `Result` will be `Err` with the duration that the timeout waited for.
Here is a declaration of this function
```rust
async fn timeout<F: Future>(
future_to_try: F,
max_time: Duration,
) -> Result<F::Output, Duration> {
// Here is where our implementation will go!
}
```
That satisfies the goals for our types.
Now we will focus on the *behavior* we need: we want to race the future passed in against the duration.
We will use `trpl::sleep` to make a timer future from the duration, and use `trpl::race` to run that timer against the future the caller passes in.
We know that `race` is not fair, polling its arguments in the order in which they are passed.
So we pass `future_to_try` to `race` first, so that it gets a chance to complete even if `max_time` is a very short duration.
If `future_to_try` finishes first, `race` will return `Left` with the output from `future_to_try`.
If `timer` finishes first, `race` will return `Right` with the timer's output of `()`.
Here we match on the result of awaiting `trpl::race`
```rust
extern crate trpl; // required for mdbook test
use std::{future::Future, time::Duration};
use trpl::Either;
// --snip--
fn main() {
trpl::run(async {
let slow = async {
trpl::sleep(Duration::from_secs(5)).await;
"Finally finished"
};
match timeout(slow, Duration::from_secs(2)).await {
Ok(message) => println!("Succeeded with '{message}'"),
Err(duration) => {
println!("Failed after {} seconds", duration.as_secs())
}
}
});
}
async fn timeout<F: Future>(
future_to_try: F,
max_time: Duration,
) -> Result<F::Output, Duration> {
match trpl::race(future_to_try, trpl::sleep(max_time)).await {
Either::Left(output) => Ok(output),
Either::Right(_) => Err(max_time),
}
}
```
If `future_to_try` succeeds and we get a `Left(output)`, we return `Ok(output)`.
If the sleep timer elapses and we get a `Right(())`, we ignore the `()` with `_` and return `Err(max_time)` instead.
With this we have a working `timeout` built out of two other async helpers.
If we run the code, it will print the failure mode after the timeout
```
Failed after 2 seconds
```
Because futures compose with other futures, you can build really powerful tools using smaller async building blocks.
For example, we can use this same approach to combine timeouts with retries, and in turn use those with operations such as network calls (this is one of the examples from the beginning of the chapter).
In practice, you will usually work directly with `async` and `await`, and secondarily with functions and macros like `join`, `join_all`, `race`, and so on.
You will only need to reach for `pin` now and then to use futures with those APIs.
Some things to consider:
- We used a `Vec` with `join_all` to wait for all of the futures in some group to finish. How could you use a `Vec` to process a group of futures in sequence instead? What are the tradeoffs of doing that?
- Take a look at the `futures::stream::FuturesUnordered` type from the `futures` crate. How would using it be different from using a `Vec`? (Don't worry about the fact that it's from the `stream` part of the crate; it works just fine with any collection of futures.)
Next we will look at working with multiple futures in a sequence over time, with *streams*.

Futures in Sequence.md (new file)

@@ -0,0 +1 @@
# Streams: Futures in Sequence