Nodes’ workload over time.
"We may have all come in different ships, but we're in the same boat now." (Martin Luther King Jr.)
The purpose of this post is to define the different workload conditions of the nodes addressed in this series.
--------------------
Even though the code that runs in the processing nodes is the same, the Latency and Throughput of the processing nodes are not always constant over time, nor are they the same for all of them. This happens, for instance, when the algorithm is implemented in such a way that the processing to be performed on the data depends on the value of the data itself. In cases like this, on the one hand, the workload of each node varies over time (non-steady workload), and on the other, an imbalance of the workload among the nodes arises (unbalanced workload).
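A minimal sketch of this situation, with a hypothetical cost model (the `cost` function and the round-robin split are assumptions, not part of the machine described in this series): every node runs the same code, but the cost of processing an item depends on the item's value, so distributing the stream evenly still gives the nodes different workloads.

```python
def cost(n):
    # Assumed data-dependent cost: e.g. an algorithm whose inner
    # loop runs n times, so larger values mean more work.
    return n

stream = [10, 10_000, 10, 10_000]    # incoming data items
node_a = stream[0::2]                # round-robin distribution
node_b = stream[1::2]                # over two processing nodes

print(sum(cost(n) for n in node_a))  # node A: 20 work units
print(sum(cost(n) for n in node_b))  # node B: 20000 work units
```

Both nodes receive the same number of items, yet node B ends up with a workload three orders of magnitude larger: an unbalanced workload caused purely by the values of the data.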
By steady workload, we mean that all the processing nodes have the same workload. In addition, the input, processing, and output nodes can have different workloads, but all of them are constant over time.
In real life, the nodes' workload is not constant: besides the processing itself, the nodes usually have other things to do, for instance running the operating system. Nevertheless, once the minimum number of buffers has been calculated under the steady-workload condition, we will keep assuming that the nodes work under a steady workload as long as that number of buffers is enough for the machine to work properly.
A non-steady workload over time may affect the number of buffers per node required to ensure no data loss. An unbalanced workload among nodes may affect the data redistribution transfers. Both cases are considered later.
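The buffering effect can be illustrated with a toy discrete-time model (an assumption for illustration, not the machine's actual buffering scheme): a producer emits one item per tick, the consumer's service time depends on the data, and the peak queue occupancy gives a lower bound on the buffers needed to avoid data loss.

```python
from collections import deque

def peak_occupancy(service_times):
    """Ticks of service each arriving item needs; one item arrives
    per tick. Returns the peak number of items queued at the node."""
    queue = deque()
    remaining = 0                      # ticks left on current item
    peak = 0
    for item_cost in service_times:
        queue.append(item_cost)        # one item arrives this tick
        peak = max(peak, len(queue))
        if remaining == 0:
            remaining = queue.popleft()
        remaining -= 1                 # one tick of service done
    return peak

print(peak_occupancy([1, 1, 1, 1]))   # steady workload: peak is 1
print(peak_occupancy([1, 3, 1, 1]))   # non-steady: peak rises to 2
```

Under the steady workload the node keeps up and one buffer suffices; the single slow item in the non-steady case makes arrivals pile up behind it, raising the minimum number of buffers required.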
While this article was being written, Chico Buarque & Elis Regina (Noite dos mascarados) collaborated in an involuntary but decisive way.
---------------------
1. Picture: http://remo.diariovasco.com/2009/_images/concurso2.jpg
2. I want to thank Carol G. for her revision of this text.