The hard truth is that there are diminishing returns when applying more and more concurrent computational resources to a problem. Parallel computation implies coordination overhead: spawning new threads, chunking data, and memory-bus contention when barriers or fences are involved, depending on your CPU. Parallel computing is not free. Consider this Hello, world! program:
fn main() {
    println!("GREETINGS, HUMANS");
}
Straightforward enough, yeah? Compile and run it 100 times:
hello_worlds > rustc -C opt-level=3 sequential_hello_world.rs
hello_worlds > time for i in {1..100}; do ./sequential_hello_world > /dev/null; done

real    0m0.091s
user    0m0.004s
sys     0m0.012s
Now, consider essentially the same program, but with the added overhead of spawning a thread:
use std::thread;

fn main() {
    // Spawn a thread to print the greeting, then wait for it to finish.
    thread::spawn(|| println!("GREETINGS, HUMANS"))
        .join()
        .unwrap();
}
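To make the comparison fair, the threaded version can be compiled and timed in the same way as the sequential one. Here is a minimal sketch, assuming the source file is named threaded_hello_world.rs (the file name is an assumption, and the timings you see will depend on your machine):

hello_worlds > rustc -C opt-level=3 threaded_hello_world.rs
hello_worlds > time for i in {1..100}; do ./threaded_hello_world > /dev/null; done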