
Imagine we have stage A, which reads messages from the network, and stage B, which processes the messages read by A. If both use the same thread pool and multiple clients send large messages (so a read takes significant time), stage A could allocate all threads in the pool, so no work item of stage B can be executed. If the stages use different pools, the processing of work items is independent: B can execute its tasks even when stage A is under high load.
Why would stage A allocate all the threads in the pool? If stage A handles a message and then asks B to do something, the task for B will be enqueued ahead of other new messages coming from the network, so B's tasks will be interleaved with A's. What am I missing?
data-flow: network -> stage A -> stage B -> network

If 10,000 clients are connected to the service and 1% of them send a large message, then stage A is triggered to read/process 100 requests at the same time. Possibly all threads of the pool are then used by stage A, and the tasks of stage B stay queued in the pool until worker threads have finished with A's items.

regards, Oliver
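The starvation scenario above can be sketched with two thread pools. This is a minimal illustration, not code from the thread: the stage names, pool sizes, and the 0.5 s "large read" delay are all assumptions chosen to make the effect visible.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stage_a_read(msg_id):
    # Simulate a slow network read of a large message.
    time.sleep(0.5)
    return msg_id

def stage_b_process(msg_id):
    return f"processed {msg_id}"

# Separate pools: A's slow reads cannot starve B.
pool_a = ThreadPoolExecutor(max_workers=4)
pool_b = ThreadPoolExecutor(max_workers=4)

# 100 "large message" reads saturate pool_a's queue and workers ...
a_futures = [pool_a.submit(stage_a_read, i) for i in range(100)]

# ... yet a stage-B task still runs promptly on its own pool,
# instead of waiting in line behind A's backlog as it would
# if both stages shared one pool.
start = time.monotonic()
result = pool_b.submit(stage_b_process, "x").result()
latency = time.monotonic() - start

print(result)
print(latency < 0.5)  # B answered before even one A read finished

pool_a.shutdown(cancel_futures=True)
pool_b.shutdown()
```

With a single shared pool, the `stage_b_process` task would sit in the queue behind the 100 pending reads; with its own pool it completes almost immediately, which is exactly the isolation argument made above.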