No.
In the most general case, you can consider your code to be in one of two states, “Processing” or “Waiting”. The “Waiting” state can be broadly divided into two sub-states: “Waiting for CPU” and “Waiting for IO”. It’s only the “Waiting for IO” state that gets a performance or scalability benefit from an async environment, and the size of that benefit depends on how much time is spent waiting for IO relative to the time spent in the rest of the process.
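To put that in concrete terms, here’s a minimal sketch using plain asyncio, with `asyncio.sleep` standing in for real IO and a busy loop standing in for processing. Only the tasks that actually await IO overlap; the CPU-bound ones still run one after another on the single event loop.

```python
import asyncio
import time

# Stand-in for "Waiting for IO": while this task is suspended on the
# await, the event loop is free to run other tasks.
async def io_bound(delay: float) -> None:
    await asyncio.sleep(delay)

# Stand-in for "Processing": a busy loop never yields to the event
# loop, so no other task makes progress while it runs.
async def cpu_bound(duration: float) -> None:
    end = time.perf_counter() + duration
    while time.perf_counter() < end:
        pass

async def main() -> None:
    start = time.perf_counter()
    await asyncio.gather(*(io_bound(1.0) for _ in range(10)))
    print(f"10 IO-bound tasks: {time.perf_counter() - start:.2f}s  (~1s, they overlap)")

    start = time.perf_counter()
    await asyncio.gather(*(cpu_bound(1.0) for _ in range(3)))
    print(f"3 CPU-bound tasks: {time.perf_counter() - start:.2f}s  (~3s, no overlap)")

asyncio.run(main())
```

If your workload looks like the first group, async buys you something; if it looks like the second, it doesn’t.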
This is another one of those situations where I’ve actually seen the mindset come full circle.
I’ve worked with “cooperative processing”-type systems before, going all the way back to programming for Windows 3.0, GEM, the original MacOS, Netware 2.2, DesqView, and probably one or two others that escape me at the moment.
It was a huge step forward when the 386 made true preemptive multitasking reasonable (not to forget the Atari 800 and MP/M, but those are special cases), and you stopped having to worry about yielding the CPU on a periodic basis - let the operating system do that.
In a lot of cases, this push to go back to a cooperative processing model feels like a huge step backward. Call it “misplaced or premature optimization”. The hardware that really demanded this is pretty much obsolete.
Again, I acknowledge there are situations where there’s definitely value in doing it. I just don’t see them being as common as some people seem to believe.