Rust futures/async-await strange behavior



Edit:

I replaced all the futures/async calls and simply spawn a new thread instead, but the program still takes 15 seconds to run rather than the expected 1 second. What is the reason? total_send_time is the time spent spawning a new thread plus the time spent sleeping before spawning the next one. Note: I am trying to send requests uniformly at a fixed rate.

Interval between calls: 0.001 s
Ran in 15391 ms, total time: 1.0015188 s
use std::time::Duration;
use std::ops::Sub;
use std::net::TcpStream;
use std::io::Read;

const NANOSECOND: u64 = 1000000000;
const SECOND: u64 = 1;
const RPS: u64 = 1000;
const N: u64 = 1000;

fn send_request() {
    let mut stream = TcpStream::connect("127.0.0.1:8080").unwrap();
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).unwrap();
}

fn main() {
    let duration: u64 = ((SECOND as f64 / RPS as f64) as f64 * NANOSECOND as f64) as u64;
    println!("Interval between calls: {} s", (SECOND as f64 / RPS as f64));
    let start = std::time::Instant::now();
    let mut total_send_time: u128 = 0;
    for i in 0..N {
        let start_in = std::time::Instant::now();
        std::thread::spawn(move || send_request());
        let time_to_sleep = ((duration as i128 - start_in.elapsed().as_nanos() as i128) as i128).abs();
        total_send_time += start_in.elapsed().as_nanos();
        if time_to_sleep > 0 {
            std::thread::sleep(Duration::from_nanos(time_to_sleep as u64));
            total_send_time += time_to_sleep as u128;
        }
    }
    println!("Ran in {} ms, total time: {} s", start.elapsed().as_millis(), total_send_time as f64 / NANOSECOND as f64);
}

Original:

I am new to Rust. I read up on using futures and async/await in Rust and built a simple TCP server with them. I then decided to write a quick benchmark that sends requests to the server at a constant rate, but I ran into some strange behavior.

The code below is supposed to send a request every 0.001 seconds, and it does, except that the program reports strange running times. This is the output:

Interval between calls: 0.001 s
Ran in 15 s, total time: 1 s

Obviously there is some cost to reading the system time and computing the sleep duration, but certainly not 14 seconds. What am I doing wrong?

use async_std::net::TcpStream;
use futures::AsyncReadExt;
use std::time::Duration;
use async_std::task::spawn;
use std::ops::Sub;

const RPS: u64 = 1000;
const N: u64 = 1000;

async fn send_request() {
    let mut stream = TcpStream::connect("127.0.0.1:8080").await.unwrap();
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).await.unwrap();
}

#[async_std::main]
async fn main() {
    let duration: u64 = ((1 as f64 / RPS as f64) as f64 * 1000000000 as f64) as u64;
    println!("Interval between calls: {} s", (1 as f64 / RPS as f64));
    let start = std::time::Instant::now();
    let mut total_send_time: u128 = 0;
    for _ in 0..N {
        let start_in = std::time::Instant::now();
        spawn(send_request());
        let time_to_sleep = ((duration as i128 - start_in.elapsed().as_nanos() as i128) as i128).abs();
        total_send_time += start_in.elapsed().as_nanos();
        if time_to_sleep > 0 {
            std::thread::sleep(Duration::from_nanos(time_to_sleep as u64));
            total_send_time += time_to_sleep as u128;
        }
    }
    println!("Ran in {} s, total time: {} s", start.elapsed().as_secs(), total_send_time / 1000000000)
}

You are not measuring the elapsed time correctly:

  1. total_send_time measures how long the spawn() calls take, but since the actual task runs asynchronously, start_in.elapsed() tells you nothing about how long the task itself took (a sketch of measuring inside the task follows this list).

  2. The "Ran in" time measured by start.elapsed() is not useful either. Because you use a blocking sleep, you are mostly just measuring how much time your application spends inside std::thread::sleep().

  3. Last but not least, because of the problem described in point 1, your time_to_sleep calculation is completely off.
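
To know how long each request really takes, the measurement has to happen where the work happens. A minimal, illustrative sketch (the shared counter, the placeholder work, and the names timed_task/total_nanos are my own, not from the code above): record the elapsed time inside the spawned task, accumulate it in an atomic counter, and await every JoinHandle before reading it.

use std::sync::{
    atomic::{AtomicU64, Ordering},
    Arc,
};
use std::time::{Duration, Instant};

// Stand-in for the real request; the point is that the timing happens
// *inside* the task, not around the spawn() call.
async fn timed_task(total_nanos: Arc<AtomicU64>) {
    let started = Instant::now();
    async_std::task::sleep(Duration::from_millis(1)).await; // placeholder work
    total_nanos.fetch_add(started.elapsed().as_nanos() as u64, Ordering::Relaxed);
}

#[async_std::main]
async fn main() {
    let total_nanos = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..10)
        .map(|_| async_std::task::spawn(timed_task(total_nanos.clone())))
        .collect();
    for handle in handles {
        handle.await; // wait for every task to finish before reading the counter
    }
    println!(
        "Time spent inside the tasks: {} µs",
        total_nanos.load(Ordering::Relaxed) / 1_000
    );
}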

Edit

As explained above, your program takes 15 seconds to run because you are not computing the sleep time correctly. There are other mistakes as well, such as using a blocking sleep inside an async function; here is a corrected version:

use std::time::{Duration, Instant};

const TASKS: u64 = 1000;
const TASKS_PER_SECOND: u64 = 1000;

#[async_std::main]
async fn main() -> std::io::Result<()> {
    let micros_per_task = Duration::from_micros(
        Duration::from_secs(1).as_micros() as u64 / TASKS_PER_SECOND
    );

    let mut spawn_overhead = Duration::default();
    let before_spawning = Instant::now();

    for _ in 0..TASKS {
        let task_start = Instant::now();
        async_std::task::spawn(task());
        let elapsed = task_start.elapsed();
        spawn_overhead += elapsed;

        if elapsed < micros_per_task {
            let sleep = micros_per_task - elapsed;
            async_std::task::sleep(sleep).await;
        }
    }

    let elapsed_spawning = before_spawning.elapsed();
    println!("Took {}ms to spawn {} tasks", elapsed_spawning.as_millis(), TASKS);
    println!("Micros spent in spawn(): {}", spawn_overhead.as_micros());

    Ok(())
}

async fn task() {
    async_std::task::sleep(Duration::from_millis(1000)).await;
}
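
One more thought on the pacing itself, since the goal is to send requests uniformly at a fixed rate. This is my own variation, not part of the answer above: computing each sleep from the absolute start time rather than per iteration keeps small spawn overheads from accumulating into drift, so the loop stays close to the target rate even when individual spawn() calls are slow.

use std::time::{Duration, Instant};

const TASKS: u64 = 1000;
const TASKS_PER_SECOND: u64 = 1000;

async fn task() {
    async_std::task::sleep(Duration::from_millis(1000)).await;
}

#[async_std::main]
async fn main() {
    let interval = Duration::from_secs(1) / TASKS_PER_SECOND as u32;
    let start = Instant::now();

    for i in 0..TASKS {
        async_std::task::spawn(task());

        // The (i + 1)-th spawn is due at start + (i + 1) * interval;
        // sleep only for whatever remains of that slot, if anything.
        let deadline = interval * (i as u32 + 1);
        if let Some(remaining) = deadline.checked_sub(start.elapsed()) {
            async_std::task::sleep(remaining).await;
        }
    }

    println!("Spawned {} tasks in {} ms", TASKS, start.elapsed().as_millis());
}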

Latest update