SYSTEM ADMINISTRATOR GUIDE     56/1543-CRA 119 1170/1-V1 Uen C    

Configuring Scheduling

© Ericsson AB 2009-2010. All rights reserved. No part of this document may be reproduced in any form without the written permission of the copyright owner.

Disclaimer

The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List

SmartEdge is a registered trademark of Telefonaktiebolaget LM Ericsson.

Contents

1   Overview
1.1   Queue Maps
1.2   Priority Queuing Policies
1.3   Enhanced Deficit Round-Robin Policies
1.4   Modified Deficit Round-Robin Policies
1.5   Asynchronous Transfer Mode Weighted Fair Queuing Policies
1.6   Priority Weighted Fair Queuing Policies
1.7   Congestion Management and Avoidance
1.8   Overhead Profiles
1.9   Port Grouping for Traffic Scheduling

2   Configuration and Operations Tasks
2.1   Configure a Queue Map
2.2   Configure a Congestion Avoidance Map
2.3   Configure an ATMWFQ Policy
2.4   Configure an EDRR Policy
2.5   Configure an MDRR Policy
2.6   Configure a PQ Policy
2.7   Configure a PWFQ Policy
2.8   Configure an Overhead Profile
2.9   Configure User-Defined Port Group Map and Apply It to a Card
2.10   Apply a Predefined or Default Port Group Map to a Card
2.11   Operations Tasks

3   Configuration Examples
3.1   Queue Maps
3.2   Congestion Avoidance Map for Multidrop Profiles
3.3   ATMWFQ Policies
3.4   EDRR Policy
3.5   MDRR Policy
3.6   PQ Policies
3.7   PWFQ Policies
3.8   Overhead Profiles

4   QoS Port Group Maps


1   Overview

This document provides an overview of the SmartEdge® router quality of service (QoS) scheduling policy features and describes the tasks used to configure, monitor, and administer these features. This document also provides examples of QoS scheduling policy configurations.

For information about other QoS configuration tasks and commands, see the following documents:

Note:  
In this document, the term, first-generation Asynchronous Transfer Mode (ATM) OC traffic card, refers to a 2-port ATM OC-3c/STM-1c or ATM OC-12c/STM-4c traffic card; similarly, the term, second-generation ATM OC traffic card, refers to a 4-port ATM OC-3c/STM-1c or Enhanced ATM OC-12c/STM-4c traffic card.

The first-generation Asynchronous Transfer Mode (ATM) OC traffic cards follow:

The second-generation ATM OC traffic cards follow:

The terms traffic-managed circuit and traffic-managed port refer to a circuit and a port, respectively, on the following traffic cards: Fast Ethernet-Gigabit Ethernet (FE-GE, 60FE-2GE), Gigabit Ethernet 3 (GE3), Gigabit Ethernet 1020 (GE1020), and 20-Port Gigabit Ethernet (ge4-20-port) traffic cards, and Gigabit Ethernet media interface cards (GE MICs).


QoS scheduling policies create and enforce levels of service and bandwidth rates, and prioritize how packets are scheduled into egress queues. Queues on outbound traffic cards have associated scheduling parameters, such as rates, depths, and relative weights. The traffic card’s scheduler draws packets from these queues based on weight, rate, or strict priority.

After classification, marking, and rate-limiting occurs on an incoming packet, the packet is placed into an output queue for servicing by an egress traffic card’s scheduler. The SmartEdge router supports up to eight queues per circuit. Queues are serviced according to a queue map scheme, a QoS scheduling policy, or both, as described in the following sections.

1.1   Queue Maps

By default, the SmartEdge router assigns a priority group number to an egress queue, according to the number of queues configured on a circuit; see Table 1.

Table 1    Default Mapping of Packets into Queues Using Priority Groups

Priority Group | DSCP Value | IP Prec | MPLS EXP | 802.1p | 8 Queues | 4 Queues | 2 Queues | 1 Queue
0 | Network control | 7 | 7 | 7 | Queue 0 | Queue 0 | Queue 0 | Queue 0
1 | Reserved | 6 | 6 | 6 | Queue 1 | Queue 1 | Queue 1 | Queue 0
2 | Expedited Forwarding (EF) | 5 | 5 | 5 | Queue 2 | Queue 1 | Queue 1 | Queue 0
3 | Assured Forwarding (AF) level 4 | 4 | 4 | 4 | Queue 3 | Queue 2 | Queue 1 | Queue 0
4 | AF level 3 | 3 | 3 | 3 | Queue 4 | Queue 2 | Queue 1 | Queue 0
5 | AF level 2 | 2 | 2 | 2 | Queue 5 | Queue 2 | Queue 1 | Queue 0
6 | AF level 1 | 1 | 1 | 1 | Queue 6 | Queue 2 | Queue 1 | Queue 0
7 | Default Forwarding (DF) | 0 | 0 | 0 | Queue 7 | Queue 3 | Queue 1 | Queue 0

You can configure a customized queue map and assign it to any scheduling policy. When the scheduling policy is attached to a circuit, the customized map overrides the default mapping of packets into the egress queues of that policy; see Figure 1.

Figure 1   Queue Map (633)
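As a brief illustration, the following sketch assigns an existing queue map to a scheduling policy; the policy name is hypothetical, the queue map Custom4 is the one created in Section 3.1, and queue-map is the root command listed in the configuration tables in Section 2:

[local]Redback(config)#qos policy example-pwfq pwfq

[local]Redback(config-policy-pwfq)#queue-map Custom4

[local]Redback(config-policy-pwfq)#exit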

1.2   Priority Queuing Policies

When a priority queuing (PQ) policy is enabled on a circuit, its output queues are serviced in strict priority order; that is, packets waiting in the highest-priority queue (queue 0) are serviced until that queue is empty, then packets waiting in the second-highest priority queue are serviced (queue 1), and so on. Under congestion, a PQ policy allows the highest priority traffic to get through, at the expense of lower-priority traffic.

With a PQ policy, the potential exists for a high volume of high-priority traffic to completely starve low-priority traffic. To prevent such starvation, the SmartEdge router allows a rate limit to be configured on each queue, which limits the amount of bandwidth available to a high priority queue. With careful tuning of the rate limits, you can prevent the lower priority queues from being starved.

Note:  
PQ policies are not supported on ATM DS-3 and second-generation ATM OC traffic cards.
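As a minimal sketch of such tuning (the policy name and rate values are purely illustrative; see Section 3.6.2 for a complete example), a per-queue rate limit is set in PQ policy configuration mode:

[local]Redback(config)#qos policy example-pq pq

[local]Redback(config-policy-pq)#queue 0 rate 100000 burst 40000

[local]Redback(config-policy-pq)#exit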

1.3   Enhanced Deficit Round-Robin Policies

Enhanced deficit round-robin (EDRR) policies can operate in one of three modes: normal, strict, or alternate.

With EDRR policies, each queue has an associated quantum value and a deficit counter. The quantum value is derived from the configured weight of the queue. A quantum value is the average number of bytes served in each round; the deficit counter is initialized to the quantum value. Packets in a queue are served as long as the deficit counter is greater than zero. Each packet served decreases the deficit counter by a value equal to its length in bytes. At each new round, each nonempty queue’s deficit counter is incremented by its quantum value; see Figure 2.

Note:  
EDRR policies are not supported on ATM DS-3 and second-generation ATM OC traffic cards.

Figure 2   EDRR Strict Mode Scheduling (629)
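A minimal sketch of an EDRR policy in which each queue’s quantum is derived from its configured weight follows; the policy name and weight values are illustrative only (see Section 3.4 for another example):

[local]Redback(config)#qos policy example-edrr edrr

[local]Redback(config-policy-edrr)#queue 0 weight 40

[local]Redback(config-policy-edrr)#queue 1 weight 20

[local]Redback(config-policy-edrr)#exit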

1.4   Modified Deficit Round-Robin Policies

Modified deficit round-robin (MDRR) policies support the following features:

For information about EDRR scheduling modes, see Enhanced Deficit Round-Robin Policies; for information about PQ scheduling, see Priority Queuing Policies.

MDRR policies apply to ports that are members of link groups, including ports on hitless access link groups. Statistics supported for MDRR are the same as those for PWFQ. MDRR policies can be configured on hitless access link groups for the following circuit types:

When more than one port is active in an access link group, the system selects one of the active ports as an egress port for the circuit’s traffic. By default, the system attempts to distribute circuits evenly across all active ports using a round-robin algorithm. For example, if there are two active ports in the link group, half of the circuits will use the one active port for egress traffic and the other half will use the other active port for egress traffic. You can change this behavior by using the protect-group incoming-port command (in access link-group configuration mode); in this case, subscriber egress traffic will egress on the same port on which the subscriber authentication request came in. Then, for example, if a PPPoE subscriber request was received on port 2, the subscriber’s egress traffic will egress on port 2.
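A minimal sketch of enabling this behavior follows; the link group name is hypothetical, and the link-group command shown for creating the access link group is an assumption (see the link group documentation for the exact syntax):

[local]Redback(config)#link-group lg-access1 access

[local]Redback(config-link-group)#protect-group incoming-port

[local]Redback(config-link-group)#exit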

When you configure MDRR policies, keep the following limitations in mind:

Table 2    Total Number of 802.1Q tunnels, 802.1Q PVCs, and Subscribers Configured with Their Own MDRR Policy on Specific Traffic Cards

num-queues Configuration in MDRR Policy | Maximum on a 1x10GE Traffic Card | Maximum on a 4x10GE Traffic Card
num-queues equal to 8 | 1,700 | 3,400
num-queues equal to 4 or fewer | 490 | 980

When you configure MDRR policies on hitless access link groups, keep the following limitations in mind:

1.5   Asynchronous Transfer Mode Weighted Fair Queuing Policies

Asynchronous Transfer Mode weighted fair queuing (ATMWFQ) policies ensure that queues do not starve for bandwidth and that traffic obtains predictable service. These policies operate in one of two modes: alternate and strict. In either mode, the ATM segmentation and reassembly (SAR) uses a class-based WFQ algorithm to perform QoS priority packet scheduling. In strict mode, queue 0 is serviced immediately and the other queues are serviced in a round-robin fashion according to their configured weights. In alternate mode, the servicing of queues alternates between queue 0 and the remaining queues, according to their configured weights. Queue 0 is served, then the next queue is served. Queue 0 is served again, and the next queue in turn is served, and so on. For example, if there are four queues configured, the order of servicing will be q0, q1, q0, q2, q0, q3, q0, q1, and so on.

Note:  
ATMWFQ policies are not supported on first-generation ATM OC traffic cards.

1.6   Priority Weighted Fair Queuing Policies

Priority weighted fair queuing (PWFQ) policies use a priority- and a weight-based algorithm to implement hierarchical QoS-aware scheduling. Each queue in the policy includes both a priority and a relative weight, which control how each queue is serviced. Inside the PWFQ policy, priority takes precedence, and for queues placed at the same priority, the individual configured weight defines how the queue is used in the scheduling decision. You can attach PWFQ policies to Layer 2 and Layer 3 circuits.

Hierarchical scheduling enables scheduling at the port, 802.1Q tunnel, and 802.1Q permanent virtual circuit (PVC) levels, using PWFQ policies. It also enables QoS shaping for subscriber sessions using PWFQ policies attached to hierarchical nodes and node groups, so that four levels of scheduling are possible (hierarchical node, 802.1Q PVC, 802.1Q tunnel, and port levels). Scheduling modes include:

1.7   Congestion Management and Avoidance

The SmartEdge router employs the following congestion avoidance features when processing packets using the different queuing and scheduling policies.

1.7.1   Random Early Detection

With scheduling policies, you can configure random early detection (RED) parameters to manage buffer congestion by signaling to traffic sources that the network is on the verge of entering a congested state, rather than waiting until the network is actually congested. The technique is to drop packets with a probability that varies as a function of the average queue depth relative to the configured minimum and maximum thresholds.

When a queue is nearly empty, the probability of dropping a packet is small. As the queue’s average depth increases, the likelihood of dropping packets becomes greater; see Figure 3.

Note:  
For ATM DS-3 and second-generation ATM OC traffic cards, and Ethernet traffic cards that support RED, the queue depth value is equal to the value configured for the maximum threshold.

Figure 3   Probability of Being Dropped as a Function of Queue Depth (557)
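For reference, the classic RED drop function has the following form; the SmartEdge implementation and parameter ranges may differ in detail:

drop probability = maximum probability x (average queue depth - minimum threshold) / (maximum threshold - minimum threshold), for average depths between the two thresholds

Below the minimum threshold, no packets are dropped; at or above the maximum threshold, all arriving packets are dropped. The average depth is a moving average whose smoothing is controlled by the configured exponential weight.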

1.7.2   Early Packet Discard

With ATMWFQ policies, you can also configure early packet discard (EPD), a congestion avoidance mechanism that starts dropping packets after queues reach the EPD threshold. When queue buffers are nearly full (reaching the EPD threshold), the system is signaled that it may become congested. Any packets trying to enter queues, after the EPD threshold has been met, are dropped.

1.7.3   Multidrop Precedence

With ATMWFQ and PWFQ policies, you can configure different congestion behaviors that depend on the DSCP values of the packets in a queue; this feature is referred to as multidrop precedence. Multidrop precedence supports up to three profiles for each queue, and each profile defines a different congestion behavior for one or more DSCP values. Each profile is also characterized by its RED parameter values. The DSCP value in the packet is used to select the profile that governs its congestion avoidance behavior.

Figure 4 shows how the three profiles can be defined with different minimum and maximum thresholds. Multidrop profiles are available only for ATMWFQ and PWFQ policies and are configured using congestion avoidance maps.

Figure 4   Multidrop Profiles (852)

1.7.4   Congestion Avoidance Maps

A congestion avoidance map specifies how congestion avoidance is managed for a set of queues. Each map supports eight queues.

Note:  
Congestion avoidance maps are supported only for ATMWFQ, MDRR, and PWFQ policies.

For each queue, you define up to three profiles, each of which describes the congestion behavior for one or more DSCP values. The map specifies RED parameters for every queue. One of the profiles, the default profile, specifies the default congestion behavior for every DSCP value.

When you define either of the other profiles for a queue, the system removes the DSCP values that you specify from the default profile. If a congestion map is not assigned to an ATMWFQ, MDRR, or PWFQ policy, packets are dropped only when the maximum queue depth is exceeded.
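A minimal sketch of a congestion avoidance map intended for a PWFQ policy follows; the pwfq keyword on the qos congestion-avoidance-map command, the DSCP value, and the threshold and depth values are assumptions for illustration (see Section 3.2 for a complete ATMWFQ example and Table 4 for the root commands):

[local]Redback(config)#qos congestion-avoidance-map map-red4p pwfq

[local]Redback(config-congestion-map)#queue 0 red default min-threshold 30 max-threshold 5200 probability 16

[local]Redback(config-congestion-map)#queue 0 red profile-1 dscp cs5 min-threshold 140 max-threshold 13000 probability 34

[local]Redback(config-congestion-map)#queue 0 depth 8192

[local]Redback(config-congestion-map)#exit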

1.7.5   Queue Depth

With EDRR, PQ, and PWFQ policies, you can modify the number of packets allowed per queue on a circuit. Queue depth is configured for PWFQ policies with the congestion avoidance map that you assign to the policy and for EDRR and PQ policies with the queue depth command (in EDRR and PQ policy configuration mode). For default and maximum queue depth values for various port types, see Queue Depth Values by Port Type.
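As a minimal sketch (the policy name, queue number, and depth value are illustrative, and the argument form is an assumption based on the queue depth root command in Table 7 and Table 9):

[local]Redback(config)#qos policy example-pq pq

[local]Redback(config-policy-pq)#queue 0 depth 512

[local]Redback(config-policy-pq)#exit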

1.7.6   Queue Rates

With EDRR, MDRR, and PQ policies, you can configure a rate limit. In PQ policies, the rate is controlled on each individual queue through the queue rate command (in PQ policy configuration mode). In EDRR and MDRR policies, the rate is a combined traffic rate for all queues in the policy and is configured through the rate command (in EDRR policy and MDRR policy configuration modes, respectively). A reasonable guideline for burst tolerance is to allow one to two seconds of burst time on the defined queue rate.
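As a worked example of that guideline (values illustrative only):

burst ≈ queue rate x burst time = 10 Mbps x 1 s = 10,000,000 bits ≈ 1.25 MB (≈ 2.5 MB for 2 s)

Express the result in whatever unit the burst argument of your release expects.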

1.8   Overhead Profiles

The SmartEdge router can take the encapsulation overhead of the access line into consideration so that the rate of traffic does not exceed the permitted traffic rate on the line. This downstream traffic shaping is controlled by QoS overhead profiles.

The overhead profile works in conjunction with the PWFQ policy. The PWFQ policy defines the rate of traffic flow; the overhead profile defines the encapsulation overhead and the available bandwidth on the access line. The rate can come from one of the following sources:

1.9   Port Grouping for Traffic Scheduling

You can assign the ports of a traffic card that supports traffic management into different groups to customize the performance of traffic scheduling. These groups are referred to as scheduling port groups or simply port groups.

The ports within a port group share scheduling capacity within the group. For example, if one port is transmitting large packets and another is transmitting small packets, the port transmitting small packets, which requires more scheduling processing, can borrow capacity that is not needed by the port transmitting larger packets.

Port grouping allows you to manage the balance between scheduling performance and forwarding performance on a card. Each port group in use consumes processing capacity that would otherwise be available for packet forwarding. Defining more port groups results in higher scheduling performance but lower forwarding performance.

Each port map defined must be associated with a particular card type, and can only be referenced by cards of that type. Each port of the card must be assigned to one and only one port group.

For example, a port map for a GE 10-port card might define five port groups, each mapping to two ports.
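A minimal sketch of such a map follows, using the qos port-map and group commands from Section 2.9; the ge-10-port card-type keyword and the pairing of consecutive ports are assumptions for illustration:

[local]Redback(config)#qos port-map ge10-groups card-type ge-10-port

[local]Redback(config-port-group-map)#group 1 ports 1 2

[local]Redback(config-port-group-map)#group 2 ports 3 4

[local]Redback(config-port-group-map)#group 3 ports 5 6

[local]Redback(config-port-group-map)#group 4 ports 7 8

[local]Redback(config-port-group-map)#group 5 ports 9 10

[local]Redback(config-port-group-map)#end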

The SmartEdge router supports the following types of port group maps:

A port group map that is currently referenced by one or more cards may not be modified. You must remove all card configuration references to a particular port map before modifying it.

The configuration of a card can be modified to reference a different port group map or to revert to the default port group map. However, such a change is applied immediately only if the card is unlocked. A card is considered locked for port group map purposes if any PWFQ or other traffic management configuration is currently applied to any of its ports. If the card is locked, it must be reinitialized with the reload card command for the port map change to take effect. Use the show qos port-map bind command to determine whether a card is currently locked or unlocked for this purpose.
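A minimal sketch of this check-and-reload sequence follows; the slot number is illustrative, and the exact argument form of the reload card command is an assumption:

[local]Redback#show qos port-map bind

[local]Redback#reload card 2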


 Warning! 
Using the reload card command results in the temporary loss of all traffic carried by the card.

The SmartEdge router supports a maximum of eight port groups per card and a maximum of 64 ports for each card. The actual number of port groups and ports supported on a given card depends on the card type. The following cards support traffic management:

2   Configuration and Operations Tasks

To configure scheduling policies, perform the tasks described in the following sections.

Note:  
In this section, the command syntax in the task tables displays only the root command; for the complete command syntax, see Command List.

2.1   Configure a Queue Map

The SmartEdge router assigns a factory preset, or default, mapping of priority groups to queues, according to the number of queues configured. You can customize this mapping for the circuits to which any QoS scheduling policy is attached. To configure a queue map, perform the tasks in Table 3.

Table 3    Configure a Queue Map

Task | Root Command | Notes
Create or select a queue map and access queue map configuration mode. | qos queue-map | Enter this command in global configuration mode.
Specify the number of queues for the queue map and access num-queues configuration mode.(1) | num-queues | Enter this command in queue map configuration mode.
Customize the mapping of priority groups to queues. | queue priority | Enter this command in num-queues configuration mode.

(1)  For information about the correlation between the number of ATMWFQ queues configured on a particular traffic card type and the corresponding number of PVCs allowed (per port and per traffic card), see Configuring Circuits.


2.2   Configure a Congestion Avoidance Map

By default, the SmartEdge router drops packets at the end of the queue when the number of packets exceeds the configured maximum depth of the queue. A congestion avoidance map, when attached to an ATMWFQ, MDRR, or PWFQ scheduling policy, provides congestion management behavior for each queue defined by the policy.

To configure a congestion avoidance map, perform the tasks described in Table 4; enter all commands in congestion map configuration mode, unless otherwise noted.

Table 4    Configure a Congestion Avoidance Map

Task | Root Command | Notes
Create or select a congestion avoidance map and access congestion map configuration mode. | qos congestion-avoidance-map | Enter this command in global configuration mode.
Set the RED parameters for each queue in the map. | queue red | Perform this task for each queue in the map.
Set the exponential-weight for each queue in the map. | queue exponential-weight | Enter this command for each queue in the map.
Specify the depth of a queue. | queue depth | This command applies only to congestion avoidance maps for PWFQ policies. Enter this command for each queue in the map.

2.3   Configure an ATMWFQ Policy

You can configure an ATMWFQ policy with either RED or EPD parameters. To configure an ATMWFQ policy with RED parameters, using a congestion avoidance map, perform the tasks described in Table 5; enter all commands in ATMWFQ policy configuration mode, unless otherwise noted.

Table 5    Configure an ATMWFQ Policy with RED Parameters

Task | Root Command | Notes
Create the policy name and access ATMWFQ policy configuration mode. | qos policy atmwfq | Enter this command in global configuration mode.
Optional. Configure the policy with any or all of the following tasks: | |
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy.(1) | num-queues | By default, the number of queues is 4.
Assign a congestion avoidance map to the policy. | congestion-map | By default, no congestion map is assigned.
Define the algorithm for queue 0. | queue 0 mode | By default, the queue mode is alternate.
Specify the traffic weight for each queue. | queue weight | By default, the weight is 2.

(1)  For information about the correlation between the number of queues and the number of VCs, see Configuring Circuits.


To configure an ATMWFQ policy with EPD parameters, perform the tasks described in Table 6; enter all commands in ATMWFQ policy configuration mode, unless otherwise noted.

Table 6    Configure an ATMWFQ Policy with EPD Parameters

Task | Root Command | Notes
Create the policy name and access ATMWFQ policy configuration mode. | qos policy atmwfq | Enter this command in global configuration mode.
Configure the policy with any or all of the following tasks: | |
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy.(1) | num-queues | By default, the number of queues is 4.
Modify congestion parameters for each queue. | queue congestion epd |
Define the algorithm for queue 0. | queue 0 mode | By default, the queue mode is alternate.
Specify the traffic weight for each queue. | queue weight | By default, the weight is 2.

(1)  For information about the correlation between the number of queues and the number of VCs, see Configuring Circuits.


2.4   Configure an EDRR Policy

To configure an EDRR policy, perform the tasks described in Table 7; enter all commands in EDRR policy configuration mode, unless otherwise noted.

Table 7    Configure an EDRR Policy

Task | Root Command | Notes
Create the policy name and access EDRR policy configuration mode. | qos policy edrr | Enter this command in global configuration mode.
Optional. Configure the policy with any or all of the following tasks: | |
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy. | num-queues | By default, the number of queues is 8.
Specify the depth of a queue. | queue depth | You can enter this command for each queue.
Set RED parameters per queue. | queue red | By default, RED is disabled.
Specify the traffic weight per queue. | queue weight | By default, the traffic weight is 0.
Set a rate limit for the policy. | rate | By default, there is no rate limit.

2.5   Configure an MDRR Policy

To configure an MDRR policy, perform the tasks described in Table 8; enter all commands in MDRR policy configuration mode, unless otherwise noted.

Table 8    Configure an MDRR Policy

Task | Root Command | Notes
Create the policy name and access MDRR policy configuration mode. | qos policy mdrr | Enter this command in global configuration mode.
Optional. Configure the policy by completing any or all of the following tasks: | |
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy. | num-queues | By default, the number of queues is 8.
Assign a congestion avoidance map to the policy. | congestion-map |
Specify the scheduling algorithm. | qos mode (MDRR) | By default, the mode is normal.
Specify the traffic weight per queue. | queue weight | By default, the traffic weight is 0.
Set a rate limit for the policy. | rate | By default, there is no rate limit.

2.6   Configure a PQ Policy

To configure a PQ policy, perform the tasks described in Table 9; enter all commands in PQ policy configuration mode, unless otherwise noted.

Table 9    Configure a PQ Policy

Task | Root Command | Notes
Create or select the policy and access PQ policy configuration mode. | qos policy pq | Enter this command in global configuration mode.
Optional. Configure the policy with any or all of the following tasks: | | Enter these commands in PQ policy configuration mode.
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy. | num-queues | By default, the number of queues is 8.
Specify the depth of a queue. | queue depth | You can enter this command for each queue.
Set a rate limit per queue. | queue rate | By default, there is no rate limit.
Set RED parameters per queue. | queue red | By default, RED is disabled.

2.7   Configure a PWFQ Policy

To configure a PWFQ policy, perform the tasks described in Table 10; enter all commands in PWFQ policy configuration mode, unless otherwise noted.

Table 10    Configure a PWFQ Policy

Task | Root Command | Notes
Create the policy name and access PWFQ policy configuration mode. | qos policy pwfq | Enter this command in global configuration mode.
Optional. Configure the policy with any or all of the following tasks: | |
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy. | num-queues | By default, the number of queues is 8.
Assign a congestion avoidance map to the policy. | congestion-map |
Assign a priority and relative weight to each queue. | queue priority | Enter this command for each queue that you specified with the num-queues command.
Set the maximum and minimum rates for the policy. | rate | You must enter this command to specify the maximum rate; the minimum rate is optional. You cannot set a minimum rate if you also assign a relative weight to this policy.
Assign a relative weight to this policy. | weight | You cannot assign a relative weight if you also set a minimum rate for this policy.
Set the rate for each priority group. | queue priority-group | Enter this command for each priority group.

2.8   Configure an Overhead Profile

To configure an overhead profile, perform the tasks described in Table 11; enter all commands in overhead profile configuration mode, unless otherwise noted.

Table 11    Configure an Overhead Profile

Task | Root Command | Notes
Create or select a QoS overhead profile. | qos profile overhead (global) |
Create a default rate-factor for the overhead profile. | rate-factor |
Create a default encapsulation access-line type for the overhead profile. | encaps-access-line |
Create a default number of reserved bytes, per packet. | reserved |
Configure overhead parameters for the specified DSL data type in the overhead profile. | type (DSL) |
Define the percentage of bandwidth that is unavailable to traffic on the circuit, port, or subscriber record to which the QoS policy is attached, for a specific access-line type in the overhead profile. | rate-factor | Enter this command in overhead type configuration mode.
Specify an encapsulation type for a specific access-line type within the overhead profile. | encaps-access-line | Enter this command in overhead type configuration mode.
Specify the reserved bytes, per packet, for a specific access-line type within the overhead profile. | reserved | Enter this command in overhead type configuration mode.

2.9   Configure User-Defined Port Group Map and Apply It to a Card

To configure a user-defined port group map and then apply it to a traffic card that supports port groups, perform the tasks described in Table 12; enter all commands in the specified configuration mode.

Table 12    Configure a User-Defined Port Group Map and Apply It to a Card

Task | Root Command | Notes
Define the name of a port group map for a specified traffic card and enter port group map configuration mode. | qos port-map (global) | Enter this command in global configuration mode.
Define a port group. | group | Enter this command in port group map configuration mode.
Apply the port group map you defined to the card you are configuring. | qos port-map (card) | Enter this command in card configuration mode. Specify the name of the user-defined port group map to apply to the card. The name is displayed as an option. The application of the port group map takes effect after a card reload.

2.10   Apply a Predefined or Default Port Group Map to a Card

To apply a predefined, or default port group map to a traffic card that supports port groups, perform the task described in Table 13; enter the command in the specified configuration mode.

Table 13    Apply a Predefined or Default Port Group Map to a Card

Task | Command | Notes
Apply a port group map to the card you are configuring. | qos port-map (card) | Enter this command in card configuration mode. Specify the name of the predefined or default port group map to apply to the card. The application of the port group map takes effect after a card reload. If no port group map is specified, the default port group map is applied.

2.11   Operations Tasks

To monitor and administer QoS scheduling features, perform the appropriate tasks described in Table 14. Enter the debug command in exec mode; enter the show commands in any mode.

Table 14    Monitor and Administer QoS Features

Task | Command
Display the queue assignments for a QoS congestion avoidance map. | show qos congestion-map
Display information about one or more QoS ATMWFQ policies. | show qos policy atmwfq
Display information about one or more QoS EDRR policies. | show qos policy edrr
Display information about one or more QoS PQ policies. | show qos policy pq
Display information about one or more QoS PWFQ policies. | show qos policy pwfq
Display information about one or more configured QoS queue maps. | show qos queue-map
Display information about a specific QoS port group map or all QoS port group maps, for a specific traffic card type or all traffic card types that support port groups. | show qos port-map
Display information about the QoS port group map binding for a traffic card in a specific slot or for all configured traffic cards that support port groups. | show qos port-map bind

3   Configuration Examples

The following sections provide examples of QoS scheduling configurations.

3.1   Queue Maps

The following example creates three queue maps and assigns a custom mapping of priority groups to queues, based on the number of queues configured:

[local]Redback(config)#qos queue-map Custom2

[local]Redback(config-queue-map)#num-queues 2

[local]Redback(config-num-queues)#queue 0 priority 0

[local]Redback(config-num-queues)#queue 1 priority 1 2 3 4 5 6 7

[local]Redback(config-num-queues)#exit



[local]Redback(config)#qos queue-map Custom4

[local]Redback(config-queue-map)#num-queues 4

[local]Redback(config-num-queues)#queue 0 priority 0

[local]Redback(config-num-queues)#queue 1 priority 1 2

[local]Redback(config-num-queues)#queue 2 priority 3 4 5 6

[local]Redback(config-num-queues)#queue 3 priority 7

[local]Redback(config-num-queues)#exit



[local]Redback(config)#qos queue-map Custom8

[local]Redback(config-queue-map)#num-queues 8

[local]Redback(config-num-queues)#queue 0 priority 0

[local]Redback(config-num-queues)#queue 1 priority 1

[local]Redback(config-num-queues)#queue 2 priority 2

[local]Redback(config-num-queues)#queue 3 priority 3

[local]Redback(config-num-queues)#queue 4 priority 4

[local]Redback(config-num-queues)#queue 5 priority 5

[local]Redback(config-num-queues)#queue 6 priority 6

[local]Redback(config-num-queues)#queue 7 priority 7

[local]Redback(config-num-queues)#exit

3.2   Congestion Avoidance Map for Multidrop Profiles

The following example configures the congestion avoidance map, map-red4a, with two profiles for any ATMWFQ policy:

[local]Redback(config)#qos congestion-avoidance-map map-red4a atmwfq

[local]Redback(config-congestion-map)#queue 0 exponential-weight 40

[local]Redback(config-congestion-map)#queue 0 red default min-threshold 30 max-threshold 5200 probability 16

[local]Redback(config-congestion-map)#queue 0 red profile-1 dscp cs7 min-threshold 140 max-threshold 13000 probability 34

[local]Redback(config-congestion-map)#queue 0 red profile-2 dscp cs3 min-threshold 230 max-threshold 15600 probability 50

[local]Redback(config-congestion-map)#queue 3 exponential-weight 13

[local]Redback(config-congestion-map)#queue 3 red default max-threshold 5200

[local]Redback(config-congestion-map)#queue 3 red profile-1 dscp af21 min-threshold 100 max-threshold 14000 probability 450

3.3   ATMWFQ Policies

The following example configures the ATMWFQ policy, example2, with the map-red4a congestion avoidance map:

[local]Redback(config)#qos policy example2 atmwfq

[local]Redback(config-policy-atmwfq)#num-queues 4

[local]Redback(config-policy-atmwfq)#congestion-map map-red4a

[local]Redback(config-policy-atmwfq)#queue 0 weight 10

[local]Redback(config-policy-atmwfq)#queue 1 weight 20

[local]Redback(config-policy-atmwfq)#queue 2 weight 30

[local]Redback(config-policy-atmwfq)#queue 3 weight 40

[local]Redback(config-policy-atmwfq)#queue 0 mode strict

[local]Redback(config-policy-atmwfq)#exit

The following example configures an ATMWFQ policy, example3, with EPD parameters:

[local]Redback(config)#qos policy example3 atmwfq

[local]Redback(config-policy-atmwfq)#num-queues 4

[local]Redback(config-policy-atmwfq)#queue 0 congestion epd max-threshold 5200

[local]Redback(config-policy-atmwfq)#queue 1 congestion epd max-threshold 5200

[local]Redback(config-policy-atmwfq)#queue 2 congestion epd max-threshold 5200

[local]Redback(config-policy-atmwfq)#queue 0 mode strict

[local]Redback(config-policy-atmwfq)#exit

3.4   EDRR Policy

The following example configures the EDRR policy, example1, and gives queue number 3 30% of the bandwidth of the circuit:

[local]Redback(config)#qos policy example1 edrr

[local]Redback(config-policy-edrr)#queue 3 weight 30

[local]Redback(config-policy-edrr)#exit

3.5   MDRR Policy

The following example configures the MDRR policy, example4, using strict mode with 4 queues and divides the bandwidth between the queues according to an approximate 50:30:10:10 ratio during periods of congestion:

[local]Redback(config)#qos policy example4 mdrr

[local]Redback(config-policy-mdrr)#qos mode strict

[local]Redback(config-policy-mdrr)#num-queues 4

[local]Redback(config-policy-mdrr)#queue-map Custom4

[local]Redback(config-policy-mdrr)#congestion-map 

[local]Redback(config-policy-mdrr)#queue 0 rate 310000 burst 40000

[local]Redback(config-policy-mdrr)#queue 1 rate 186000 burst 40000

[local]Redback(config-policy-mdrr)#queue 2 rate 62000 burst 40000

[local]Redback(config-policy-mdrr)#queue 3 rate 62000 burst 40000

[local]Redback(config-policy-mdrr)#exit

3.6   PQ Policies

The following sections provide examples of PQ policies:

3.6.1   RED Parameters

The following example creates a PQ policy, red, and establishes RED parameters for each of the eight queues such that higher priority traffic has a lower probability of being dropped, and lower priority traffic has a higher probability of being dropped:

[local]Redback(config)#qos policy red pq

[local]Redback(config-policy-pq)#queue 0 red probability 10 weight 12 min-threshold 1900 max-threshold 5200

[local]Redback(config-policy-pq)#queue 1 red probability 9 weight 12 min-threshold 1850 max-threshold 5200

[local]Redback(config-policy-pq)#queue 2 red probability 8 weight 12 min-threshold 1800 max-threshold 5200

[local]Redback(config-policy-pq)#queue 3 red probability 7 weight 12 min-threshold 1750 max-threshold 5200

[local]Redback(config-policy-pq)#queue 4 red probability 6 weight 12 min-threshold 1700 max-threshold 5200

[local]Redback(config-policy-pq)#queue 5 red probability 5 weight 12 min-threshold 1650 max-threshold 5200

[local]Redback(config-policy-pq)#queue 6 red probability 4 weight 12 min-threshold 1600 max-threshold 5200

[local]Redback(config-policy-pq)#queue 7 red probability 1 weight 12 min-threshold 1550 max-threshold 5200

[local]Redback(config-policy-pq)#exit

3.6.2   Rate-Limiting

The following example configures a PQ policy with 4 queues and divides the bandwidth between the queues according to an approximate 50:30:10:10 ratio during periods of congestion. This guarantees that even the lowest priority queue gets a share of bandwidth in the presence of congestion and strict priority queuing:

[local]Redback(config)#qos policy pos-qos pq

[local]Redback(config-policy-pq)#num-queues 4

[local]Redback(config-policy-pq)#queue 0 rate 310000 burst 40000

[local]Redback(config-policy-pq)#queue 1 rate 186000 burst 40000

[local]Redback(config-policy-pq)#queue 2 rate 62000 burst 40000

[local]Redback(config-policy-pq)#queue 3 rate 62000 burst 40000

[local]Redback(config-policy-pq)#exit 

The following example creates a policy, pos-rate, and rate-limits traffic in queue 0 to 300 Mbps when there is congestion on the port. When there is no congestion on the port, the limit is not imposed:

[local]Redback(config)#qos policy pos-rate pq

[local]Redback(config-policy-pq)#queue 0 rate 300000 burst 40000

[local]Redback(config-policy-pq)#exit

3.6.3   Backbone Application

In the following example, the PQ policy has eight priority queues, with DSCP values mapping into those eight queues toward the backbone (a 2.5-Gbps OC-48 uplink). Rate limits, listed in Table 15, are placed on the amount of traffic allowed into the backbone for each DSCP value.

Table 15    2.5-Gbps OC-48 Rate Limits

Queue Number | DSCP | Rate Limit
0 | NA | None
1 | NA | None
2 | expedited forwarding (EF) | 200 Mbps
3 | assured forwarding (AF), level 4 | 200 Mbps
4 | assured forwarding (AF), level 3 | 200 Mbps
5 | assured forwarding (AF), level 2 | 200 Mbps
6 | assured forwarding (AF), level 1 | 200 Mbps
7 | default forwarding (DF) | None

The configuration is as follows:

[local]Redback(config)#qos policy Diffserv pq

[local]Redback(config-policy-pq)#num-queues 8

[local]Redback(config-policy-pq)#queue 2 rate 200000 burst 25000

[local]Redback(config-policy-pq)#queue 3 rate 200000 burst 25000

[local]Redback(config-policy-pq)#queue 4 rate 200000 burst 25000

[local]Redback(config-policy-pq)#queue 5 rate 200000 burst 25000

[local]Redback(config-policy-pq)#queue 6 rate 200000 burst 25000

3.7   PWFQ Policies

The following examples provide configurations for types of priority scheduling:

In these examples, all policies are configured with four queues, a queue map (qpmap1), a congestion avoidance map (map-red4p), and a maximum bandwidth of 50 Mbits (50000) for the policy; each of the four queues in the policy is assigned a priority and a relative weight, which specifies the percentage of the available bandwidth within its priority group.

3.7.1   Strict Priority

The following example configures the strict PWFQ policy for strict priority scheduling. Each queue has a unique priority and the same relative weight:

[local]Redback(config)#qos policy strict pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 100

[local]Redback(config-policy-pwfq)#queue 1 priority 1 weight 100

[local]Redback(config-policy-pwfq)#queue 2 priority 2 weight 100

[local]Redback(config-policy-pwfq)#queue 3 priority 3 weight 100

[local]Redback(config-policy-pwfq)#exit

3.7.2   Normal Priority

The following example configures the normal PWFQ policy for normal priority scheduling. All queues have the same priority; scheduling is based on the relative weight assigned to each queue. In this example, queue 0 receives 50% of the available bandwidth (25 Mbits), queue 1 receives 30% (15 Mbits), queue 2 receives 20% (10 Mbits), and queue 3 receives 10% (5 Mbits):

[local]Redback(config)#qos policy normal pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 50

[local]Redback(config-policy-pwfq)#queue 1 priority 0 weight 30

[local]Redback(config-policy-pwfq)#queue 2 priority 0 weight 20

[local]Redback(config-policy-pwfq)#queue 3 priority 0 weight 10

[local]Redback(config-policy-pwfq)#exit

3.7.3   Strict + Normal Priority

The following example configures the PWFQ policy, pwfq4, with two priority groups, 0 and 1.

Queues 0 and 1 have the same priority (group 0) and will be serviced before queues 2 and 3 (assigned to group 1). Within each priority group the queues are serviced in round-robin order, according to their assigned relative weights. For example, queue 0 receives 70% and queue 1 receives 30% of the bandwidth available for the group. Queues 2 and 3 are serviced only when queues 0 and 1 are empty; queue 2 receives 60% and queue 3 receives 40% of the available bandwidth for the group:

[local]Redback(config)#qos policy pwfq4 pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 70

[local]Redback(config-policy-pwfq)#queue 1 priority 0 weight 30

[local]Redback(config-policy-pwfq)#queue 2 priority 1 weight 60

[local]Redback(config-policy-pwfq)#queue 3 priority 1 weight 40

[local]Redback(config-policy-pwfq)#exit

3.7.4   Strict + Normal Priority with Maximum Priority-Group Bandwidth

The following example configures the pwfq4 policy as before, but adds a maximum bandwidth limitation for each priority group. In this case, the combined traffic in group 0 is limited to 10 Mbits (10000), even when there is no traffic on the queues in priority group 1. Similarly, combined traffic on queues 2 and 3 is limited to 1 Mbit (1000), even when there is no traffic on queues 0 and 1:

[local]Redback(config)#qos policy pwfq4 pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 70

[local]Redback(config-policy-pwfq)#queue 1 priority 0 weight 30

[local]Redback(config-policy-pwfq)#queue priority-group 0 rate 10000

[local]Redback(config-policy-pwfq)#queue 2 priority 1 weight 60

[local]Redback(config-policy-pwfq)#queue 3 priority 1 weight 40

[local]Redback(config-policy-pwfq)#queue priority-group 1 rate 1000 

[local]Redback(config-policy-pwfq)#exit

3.7.5   Strict + Normal Priority with Maximum and Minimum Bandwidths

The following example configures the pwfq4 policy as before, but adds a minimum bandwidth limitation of 10 Mbits (10000) for the policy. In this configuration, the minimum bandwidth is guaranteed to the policy only if the next higher level of scheduling (for example, for the scheduling policy applied towards an 802.1Q PVC) is in strict priority mode. If it is not, then the minimum bandwidth is ignored:

[local]Redback(config)#qos policy pwfq4 pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#rate minimum 10000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 70

[local]Redback(config-policy-pwfq)#queue 1 priority 0 weight 30

[local]Redback(config-policy-pwfq)#queue priority-group 0 rate 10000

[local]Redback(config-policy-pwfq)#queue 2 priority 1 weight 60

[local]Redback(config-policy-pwfq)#queue 3 priority 1 weight 40

[local]Redback(config-policy-pwfq)#queue priority-group 1 rate 1000 

[local]Redback(config-policy-pwfq)#exit

3.8   Overhead Profiles

The following example configures the overhead profile example1, setting the default rate factor to 15, the default reserved bytes per packet to 8, and the default encapsulation type to pppoa-llc. After the profile defaults are set, the example configures the adsl1 and vdsl1 access-line types with custom encapsulation and reserved values:

[local]Redback(config)#qos profile example1 overhead

[local]Redback(config-profile-overhead)#rate-factor 15

[local]Redback(config-profile-overhead)#encaps-access-line pppoa-llc

[local]Redback(config-profile-overhead)#reserved 8

[local]Redback(config-profile-overhead)#type adsl1

[local]Redback(config-type-overhead)#rate-factor 20

[local]Redback(config-type-overhead)#encaps-access-line pppoa-null

[local]Redback(config-type-overhead)#reserved 16

[local]Redback(config-type-overhead)#exit

[local]Redback(config-profile-overhead)#type vdsl1

[local]Redback(config-type-overhead)#encaps-access-line pppoa-null value 22 data-link ethernet

[local]Redback(config-type-overhead)#reserved 10

4   QoS Port Group Maps

The following example shows how to specify a user-defined QoS port map named abc for a ge3-4-port card and then enter port group map configuration mode. In this mode, the example defines two port groups for the ge3-4-port card type, each with two ports mapped to it. After defining the abc QoS port map, the example applies this port group map to the ge3-4-port card in slot 2. Note that when you enter qos port-map ? in card configuration mode, the abc QoS port map is displayed as an option:

[local]Redback(config)#qos port-map abc card-type ge3-4-port

[local]Redback(config-port-group-map)#group 1 ports 1 2

[local]Redback(config-port-group-map)#group 2 ports 3 4

[local]Redback(config-port-group-map)#end

[local]Redback(config)#card ge3-4-port 2

[local]Redback(config-card)#qos port-map ?

abc User-defined

fwd_max_perf Predefined map optimized for forwarding performance

tm_max_perf Default map optimized for TM performance

[local]Redback(config-card)#qos port-map abc

Note: if the card is locked the changes will be applied to the card on its next reload

[local]Redback(config-card)#end