Introduction

Before we go on to the configuration, a few bandwidth management concepts need to be understood in order to design the bandwidth allocation methodology. Here goes.

  1. Traffic shaping works at the device queue level, just before the packet is pumped out of the interface.

  2. Packets are queued, and an iproute2-patched kernel provides a method of building multiple virtual queues for every physical device.

  3. Queues do not exist for virtual devices like eth0:n. (Am I wrong here?)

  4. Each queue has a queue discipline, which decides how packet scheduling takes place.

  5. In a router scenario, users have often asked how bandwidth can be allocated between incoming and outgoing traffic.

    1. Since scheduling can happen only for outgoing traffic, incoming traffic can only be "policed", i.e. rate limited; rules of source or destination cannot be applied to it. The ingress qdisc is used to police incoming packet rates (a sketch of such policing follows this list).

    2. In a router scenario, shaping bandwidth on the internet interface shapes outgoing traffic. Similarly, shaping bandwidth on the internal LAN interface simulates bandwidth management of incoming traffic on the internet interface. Obviously, this does not cover traffic to the router itself or to the DMZ interface; bandwidth shaping on the DMZ interface would take care of that segment. By and large, this meets all the requirements of shaping incoming and outgoing traffic.

    3. In the scenario above, the bandwidth required by a node/application is distinctly split into incoming and outgoing; since the two are shaped on different interfaces, bandwidth cannot be borrowed between them. In many cases, such as those of ISPs, bandwidth is allocated for incoming and outgoing traffic combined. For such situations, a virtual device called IMQ has been created for stock Linux, through which all traffic passes; shaping on IMQ therefore shapes total traffic rather than incoming and outgoing separately. IMQ is, currently, not available in LEAF.

  6. iproute2 allows the creation of a hierarchy of nodes, and traffic can enter the queue at any node. This redirection of flow is done through filters. The uppermost node is normally called the root ("root" is only a mnemonic; tc works with numeric handles, so any name can be chosen). The maximum bandwidth for the root of an interface is set to between 96% and 99% of the actual link speed. This ensures that the queue builds up and is managed on the LEAF system, and not on the router or modem that handles the physical link (a hand-written sketch of such a hierarchy follows this list).

  7. The last node in a branch of the hierarchy is normally referred to as a leaf. The intermediate and leaf nodes are created using the tc class command, and qdiscs are attached to them. The behaviour of the qdiscs is well documented at http://www.docum.org.

  8. A class has two queue disciplines, or schedulers, attached to it: one schedules packets while they are within the specified limits of the class, and the other arbitrates spare bandwidth. For example, htb is the packet scheduler within limits, and sfq is the scheduler for distributing spare bandwidth.

  9. Packets are redirected to classes by filters. The modules used for filtering are called classifiers, e.g. u32, fwmark, etc. Qos-htb uses the u32 classifier to classify traffic (a filter sketch follows this list).

  10. Classes have priorities, and so do filter rules. Class priorities define bandwidth sharing priorities, while filter priorities define the order in which filters are applied to classify packets and redirect traffic to classes.

  11. For bandwidth management to work well, it is recommended that the sum of the rates of the leaf/child nodes equal the rate of the parent node.
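
As an illustration of point 5.1, the following is a minimal, hand-written sketch of policing incoming traffic with the ingress qdisc. The interface name (eth0) and the 512kbit rate are assumptions chosen for illustration, not values taken from qos-htb:

    # Attach the ingress qdisc to the internet interface (eth0 assumed)
    tc qdisc add dev eth0 handle ffff: ingress

    # Police all incoming IP traffic to 512kbit; packets over the rate are dropped
    tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
        match ip src 0.0.0.0/0 police rate 512kbit burst 10k drop flowid :1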
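
To make points 6, 7, 8 and 11 concrete, here is a minimal hand-written sketch of the kind of hierarchy qos-htb builds. The interface (eth0, standing in for the internal LAN interface), the 1024kbit link speed and the class rates are all assumptions. The top node is capped at about 96% of the link speed, the two leaf rates sum to the parent's rate, and sfq is attached to each leaf:

    # Root qdisc; unclassified traffic falls into class 1:20
    tc qdisc add dev eth0 root handle 1: htb default 20

    # Top node capped just below the real link speed (96% of 1024kbit)
    tc class add dev eth0 parent 1: classid 1:1 htb rate 983kbit ceil 983kbit

    # Leaf classes; their rates sum to the parent's rate (point 11), and each
    # may borrow up to the parent's ceiling when spare bandwidth is available
    tc class add dev eth0 parent 1:1 classid 1:10 htb rate 655kbit ceil 983kbit
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 328kbit ceil 983kbit

    # sfq on each leaf shares the class's bandwidth fairly between flows (point 8)
    tc qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
    tc qdisc add dev eth0 parent 1:20 handle 20: sfq perturb 10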
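
Finally, a sketch of points 9 and 10: u32 filters redirect traffic into the classes created above. The addresses are again assumptions; note that the filter with the lower prio value is tried first:

    # Traffic for one favoured host goes to the faster class 1:10
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip dst 192.168.1.10/32 flowid 1:10

    # Remaining traffic for the LAN goes to class 1:20
    tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
        match ip dst 192.168.1.0/24 flowid 1:20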

Qos-htb uses a queuing discipline called htb, written by Martin Devera (http://luxik.cdi.cz/~devik/qos, email: devik@cdi.cz). Qos-htb is a tc script generator: it generates tc scripts (just as Shorewall generates iptables scripts) from the higher-level definitions in the qos-htb configuration file.