6 Single Flow Crossing Several Servers
The previous chapter focused on the analysis of a single flow crossing a single server. Before handling more complex topologies in Part 3, we consider in this chapter the fundamental case of one flow of data crossing a sequence of servers. This topology is typical in the network calculus literature for several reasons. First, it can be solved easily using (min,plus) operations: the main result of this chapter is the correspondence between the concatenation of servers and the (min,plus) convolution. Second, we will see in the next chapters that one way to analyze more general topologies is to individualize the service curves, so that the final step of the analysis is to compute the performance of one flow crossing a sequence of servers.
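To state this correspondence concretely (a sketch using the service-curve notions of Chapter 5, and writing ∗ for the (min,plus) convolution): if a flow successively crosses a server offering it a service curve β₁ and a server offering it a service curve β₂, then the concatenation of the two servers offers it the service curve

(β₁ ∗ β₂)(t) = inf_{0 ≤ s ≤ t} [ β₁(s) + β₂(t − s) ].

For instance, two rate-latency servers with respective rates R₁, R₂ and latencies T₁, T₂ compose into a single rate-latency server with rate min(R₁, R₂) and latency T₁ + T₂.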
The algebraic formulation enables us to compute performance guarantees for the flow, such as delay bounds. Intuitively, the worst-case delay of a bit of data crossing a sequence of servers is smaller than the sum of the worst-case delays at each server computed with the results of Chapter 5. This is known as the pay burst only once phenomenon, and it will be discussed in Section 6.1.
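To illustrate the phenomenon (a sketch with assumed token-bucket and rate-latency curves, anticipating the detailed treatment of Section 6.1): consider a flow with arrival curve α(t) = b + rt crossing two rate-latency servers β_i(t) = R_i (t − T_i)⁺, with r ≤ min(R₁, R₂). The end-to-end service curve β₁ ∗ β₂ is the rate-latency curve with rate min(R₁, R₂) and latency T₁ + T₂, which yields the delay bound

T₁ + T₂ + b / min(R₁, R₂).

Summing the per-server bounds of Chapter 5 instead yields

T₁ + b/R₁ + T₂ + (b + rT₁)/R₂,

because the output of the first server has the increased burst b + rT₁. In the end-to-end bound the burst b is paid only once, whereas in the per-server sum it is paid at every server.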
Another type of network involving one flow of data arises for control purposes. Control can be achieved by adding a server in front of a network in order to gain some control over the departure process of this added server, i.e., the arrival process in the rest of the network. Another solution is to add a feedback loop that regulates the arrivals in ...