SYSTEM ADMINISTRATOR GUIDE     56/1543-CRA 119 1170/1-V1 Uen A    

Configuring Scheduling

© Copyright Ericsson AB 2009. All rights reserved.

Disclaimer

No part of this document may be reproduced in any form without the written permission of the copyright owner. The contents of this document are subject to revision without notice due to continued progress in methodology, design and manufacturing. Ericsson shall have no liability for any error or damage of any kind resulting from the use of this document.

Trademark List

SmartEdge is a registered trademark of Telefonaktiebolaget L M Ericsson.

Contents

1   Overview
1.1   Queue Maps
1.2   Priority Queuing Policies
1.3   Enhanced Deficit Round-Robin Policies
1.4   Modified Deficit Round-Robin Policies
1.5   Priority Weighted Fair Queuing Policies
1.6   Congestion Management and Avoidance
1.7   Overhead Profiles
1.8   Port Grouping for Traffic Scheduling

2   Configuration and Operations Tasks
2.1   Configure a Queue Map
2.2   Configure a Congestion Avoidance Map
2.3   Configure an EDRR Policy
2.4   Configure an MDRR Policy
2.5   Configure a PQ Policy
2.6   Configure a PWFQ Policy
2.7   Configure User-Defined Port Group Map and Apply It to a Card
2.8   Apply a Predefined or Default Port Group Map to a Card
2.9   Operations Tasks

3   Configuration Examples
3.1   Queue Maps
3.2   EDRR Policy
3.3   MDRR Policy
3.4   PQ Policies
3.5   PWFQ Policies

4   QoS Port Group Maps


1   Overview

This document provides an overview of the SM family chassis quality of service (QoS) scheduling policy features and describes the tasks used to configure, monitor, and administer these features. This document also provides examples of QoS scheduling policy configurations.

For information about other QoS configuration tasks and commands, see the following documents:

Note:  
In this document, the terms traffic-managed circuit and traffic-managed port refer to a circuit and port, respectively, on Fast Ethernet-Gigabit Ethernet (FE-GE), Gigabit Ethernet (GE), and 10 Gigabit Ethernet (10 GE) traffic cards.

QoS scheduling policies create and enforce levels of service and bandwidth rates, and prioritize how packets are scheduled into egress queues. Queues on outbound traffic cards have associated scheduling parameters, such as rates, depths, and relative weights. The traffic card’s scheduler draws packets from these queues based on weight, rate, or strict priority.

After classification, marking, and rate-limiting occur on an incoming packet, the packet is placed into an output queue for servicing by an egress traffic card’s scheduler. The SM family chassis supports up to eight queues per circuit. Queues are serviced according to a queue map scheme, a QoS scheduling policy, or both, as described in the following sections.

1.1   Queue Maps

By default, the SM family chassis assigns a priority group number to an egress queue, according to the number of queues configured on a circuit; see Table 1.

Table 1    Default Mapping of Packets into Queues Using Priority Groups

Priority Group | DSCP Value | IP Prec | MPLS EXP | 802.1p | 8 Queues | 4 Queues | 2 Queues | 1 Queue
0 | Network control | 7 | 7 | 7 | Queue 0 | Queue 0 | Queue 0 | Queue 0
1 | Reserved | 6 | 6 | 6 | Queue 1 | Queue 1 | Queue 1 | Queue 0
2 | Expedited Forwarding (EF) | 5 | 5 | 5 | Queue 2 | Queue 1 | Queue 1 | Queue 0
3 | Assured Forwarding (AF) level 4 | 4 | 4 | 4 | Queue 3 | Queue 2 | Queue 1 | Queue 0
4 | AF level 3 | 3 | 3 | 3 | Queue 4 | Queue 2 | Queue 1 | Queue 0
5 | AF level 2 | 2 | 2 | 2 | Queue 5 | Queue 2 | Queue 1 | Queue 0
6 | AF level 1 | 1 | 1 | 1 | Queue 6 | Queue 2 | Queue 1 | Queue 0
7 | Default Forwarding (DF) | 0 | 0 | 0 | Queue 7 | Queue 3 | Queue 1 | Queue 0

You can configure a customized queue map and assign it to any scheduling policy. The custom map overrides the default mapping of packets into the egress queues of the policy to which it is assigned; see Figure 1. When the scheduling policy is attached to a circuit, the custom queue map replaces the default queue map for that circuit.

Figure 1   Queue Map (633)

1.2   Priority Queuing Policies

When a priority queuing (PQ) policy is enabled on a circuit, its output queues are serviced in strict priority order; that is, packets waiting in the highest-priority queue (queue 0) are serviced until that queue is empty, then packets waiting in the second-highest-priority queue (queue 1) are serviced, and so on. Under congestion, a PQ policy allows the highest-priority traffic to get through, at the expense of lower-priority traffic.

With a PQ policy, the potential exists for a high volume of high-priority traffic to completely starve low-priority traffic. To prevent such starvation, the SM family chassis allows a rate limit to be configured on each queue, which limits the amount of bandwidth available to a high priority queue. With careful tuning of the rate limits, you can prevent the lower priority queues from being starved.
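For illustration, a minimal sketch of this approach (the policy name, queue numbers, and rate values are hypothetical; the queue rate syntax follows the rate-limiting examples in Section 3.4.2):

[local]Redback(config)#qos policy pq-cap pq
[local]Redback(config-policy-pq)#num-queues 4
[local]Redback(config-policy-pq)#queue 0 rate 100000 burst 12500
[local]Redback(config-policy-pq)#queue 1 rate 50000 burst 12500
[local]Redback(config-policy-pq)#exit

With limits such as these in place, queues 0 and 1 cannot monopolize the circuit, which leaves bandwidth available for the lower-priority queues during congestion.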

1.3   Enhanced Deficit Round-Robin Policies

Enhanced deficit round-robin (EDRR) policies can operate in one of three modes: normal, strict, or alternate.

With EDRR policies, each queue has an associated quantum value and a deficit counter. The quantum value is derived from the configured weight of the queue. A quantum value is the average number of bytes served in each round; the deficit counter is initialized to the quantum value. Packets in a queue are served as long as the deficit counter is greater than zero. Each packet served decreases the deficit counter by a value equal to its length in bytes. At each new round, each nonempty queue’s deficit counter is incremented by its quantum value; see Figure 2.

Figure 2   EDRR Strict Mode Scheduling (629)
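Because each queue’s quantum is derived from its configured weight, the per-round share of service is controlled with the queue weight command. A minimal sketch (hypothetical policy name and weight values, following the pattern of the example in Section 3.2):

[local]Redback(config)#qos policy edrr-weights edrr
[local]Redback(config-policy-edrr)#num-queues 4
[local]Redback(config-policy-edrr)#queue 0 weight 40
[local]Redback(config-policy-edrr)#queue 1 weight 30
[local]Redback(config-policy-edrr)#queue 2 weight 20
[local]Redback(config-policy-edrr)#queue 3 weight 10
[local]Redback(config-policy-edrr)#exit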

1.4   Modified Deficit Round-Robin Policies

Modified deficit round-robin (MDRR) policies support the following features:

MDRR policies apply to ports that are members of link groups.

For information about EDRR scheduling modes, see Enhanced Deficit Round-Robin Policies; for information about PQ scheduling, see Priority Queuing Policies.

When you configure MDRR policies, keep the following limitations in mind:

1.5   Priority Weighted Fair Queuing Policies

Priority weighted fair queuing (PWFQ) policies use a priority- and weight-based algorithm to implement hierarchical QoS-aware scheduling. Each queue in the policy is assigned both a priority and a relative weight, which together control how the queue is serviced. Within a PWFQ policy, priority takes precedence; among queues at the same priority, the configured weights determine each queue’s share of the scheduling. You can attach PWFQ policies to Layer 2 and Layer 3 circuits.

Hierarchical scheduling enables scheduling at the port, 802.1Q tunnel, and 802.1Q permanent virtual circuit (PVC) levels, using PWFQ policies. It also enables QoS shaping for subscriber sessions using PWFQ policies attached to hierarchical nodes and node groups, so that four levels of scheduling are possible (hierarchical node, 802.1Q PVC, 802.1Q tunnel, and port levels). Scheduling modes include:

1.6   Congestion Management and Avoidance

The SM family chassis employs the following congestion avoidance features when processing packets using the different queuing and scheduling policies.

1.6.1   Random Early Detection

With scheduling policies, you can configure random early detection (RED) parameters to manage buffer congestion by signalling to traffic sources that the network is approaching a congested state, rather than waiting until the network is actually congested. The technique is to drop packets with a probability that varies as a function of how many packets are waiting in the queue at any given time and of the configured minimum and maximum thresholds for the average queue depth.

When a queue is nearly empty, the probability of dropping a packet is small. As the queue’s average depth increases, the likelihood of dropping packets becomes greater; see Figure 3.

Figure 3   Probability of Being Dropped as a Function of Queue Depth (557)
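These minimum and maximum thresholds, together with the drop probability, correspond to the parameters of the queue red command. A minimal per-queue sketch (hypothetical values; a complete eight-queue example appears in Section 3.4.1):

[local]Redback(config)#qos policy red-sketch pq
[local]Redback(config-policy-pq)#queue 0 red probability 10 weight 12 min-threshold 1900 max-threshold 5200
[local]Redback(config-policy-pq)#exit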

1.6.2   Multidrop Precedence

With ATMWFQ and PWFQ policies, you can configure different congestion behaviors that depend on the DSCP values of the packets in a queue; this feature is referred to as multidrop precedence. Multidrop precedence supports up to three profiles for each queue, and each profile defines a different congestion behavior for one or more DSCP values. Each profile is also characterized by its RED parameter values. The DSCP value in the packet is used to select the profile that governs its congestion avoidance behavior.

Figure 4 shows how the three profiles can be defined with different minimum and maximum thresholds. Multidrop profiles are available only for ATMWFQ and PWFQ policies and are configured using congestion avoidance maps.

Figure 4   Multidrop Profiles (852)

1.6.3   Congestion Avoidance Maps

A congestion avoidance map specifies how congestion avoidance is managed for a set of queues. Each map supports eight queues.

Note:  
Congestion avoidance maps are supported only for ATMWFQ, MDRR, and PWFQ policies.

For each queue, you define up to three profiles, each of which describes the congestion behavior for one or more DSCP values. The map specifies RED parameters for every queue. One of the profiles, the default profile, specifies the default congestion behavior for every DSCP value.

When you define either of the other profiles for a queue, the system removes the DSCP values that you specify from the default profile. If a congestion map is not assigned to an ATMWFQ, MDRR, or PWFQ policy, packets are dropped only when the maximum queue depth is exceeded.

1.6.4   Queue Depth

With EDRR, PQ, and PWFQ policies, you can modify the number of packets allowed per queue on a circuit. Queue depth is configured for PWFQ policies with the congestion avoidance map that you assign to the policy and for EDRR and PQ policies with the queue depth command (in EDRR and PQ policy configuration mode). For default and maximum queue depth values for various port types, see Queue Depth Values by Port Type.
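As an illustrative sketch only: the policy name and depth value below are hypothetical, and the per-queue form assumes the same queue <number> <keyword> pattern used by the other queue commands in this document; see the Command List for the exact queue depth syntax.

[local]Redback(config)#qos policy depth-sketch pq
[local]Redback(config-policy-pq)#queue 0 depth 4000
[local]Redback(config-policy-pq)#exit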

1.6.5   Queue Rates

With EDRR, MDRR, and PQ policies, you can configure a rate limit. In PQ policies, the rate is controlled on each individual queue through the queue rate command (in PQ policy configuration mode). In EDRR and MDRR policies, the rate is a combined traffic rate for all queues in the policy and is configured through the rate command (in EDRR policy and MDRR policy configuration modes, respectively). A reasonable guideline for burst tolerance is to allow one to two seconds of burst time on the defined queue rate.

1.7   Overhead Profiles

The SM family chassis can take the encapsulation overhead of the access line into consideration so that the rate of traffic does not exceed the permitted traffic rate on the line. This downstream traffic shaping is controlled by QoS overhead profiles.

The overhead profile works in conjunction with the PWFQ policy. The PWFQ policy defines the rate of traffic flow; the overhead profile defines the encapsulation overhead and the available bandwidth on the access line. The rate can come from one of the following sources:

1.8   Port Grouping for Traffic Scheduling

You can assign the ports of a traffic card that supports traffic management into different groups to customize the performance of traffic scheduling. These groups are referred to as scheduling port groups or simply port groups.

The ports within a port group share scheduling capacity within the group. For example, if one port is transmitting large packets and another is transmitting small packets, the port transmitting small packets, which requires more scheduling processing, can borrow capacity that is not needed by the port transmitting larger packets.

Port grouping allows you to manage the balance between scheduling performance and forwarding performance on a card. Each port group in use consumes processing capacity that would otherwise be available for packet forwarding. Defining more port groups results in higher scheduling performance but lower forwarding performance.

Each port group map must be associated with a particular card type and can be referenced only by cards of that type. Each port of the card must be assigned to exactly one port group.

The following list shows an example port map with five port groups for a GE 10-port card; each port group maps to two ports:
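A sketch of such a map, using the group command shown in Section 4 (the map name, the card-type string, and the group-to-port assignments here are illustrative assumptions):

[local]Redback(config)#qos port-map ge10-groups card-type ge-10-port
[local]Redback(config-port-group-map)#group 1 ports 1 2
[local]Redback(config-port-group-map)#group 2 ports 3 4
[local]Redback(config-port-group-map)#group 3 ports 5 6
[local]Redback(config-port-group-map)#group 4 ports 7 8
[local]Redback(config-port-group-map)#group 5 ports 9 10
[local]Redback(config-port-group-map)#end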

The SM family chassis supports the following types of port group maps:

A port group map that is currently referenced by one or more cards may not be modified. You must remove all card configuration references to a particular port map before modifying it.

A card’s configuration can be modified to reference a different port group map or to revert to the default port group map. However, such a change is applied immediately only if the card is unlocked. A card is considered locked for port group map purposes if any PWFQ or other traffic management configuration is currently applied to any of its ports. If the card is locked, it must be reinitialized with the reload card command for the port map change to take effect. Use the show qos port-map bind command to determine whether a card is currently locked or unlocked for this purpose.


 Warning! 
Using the reload card command results in the temporary loss of all traffic carried by the card.

The SM family chassis supports a maximum of eight port groups per card and a maximum of 64 ports for each card. The actual number of port groups and ports supported on a given card depends on the card type. The following cards support traffic management:

2   Configuration and Operations Tasks

To configure scheduling policies, perform the tasks described in the following sections.

Note:  
In this section, the command syntax in the task tables displays only the root command; for the complete command syntax, see Command List.

2.1   Configure a Queue Map

The SM family chassis assigns a factory preset, or default, mapping of priority groups to queues, according to the number of queues configured. You can customize this mapping for the circuits to which any QoS scheduling policy is attached. To configure a queue map, perform the tasks in Table 2.

Table 2    Configure a Queue Map

Task | Root Command | Notes
Create or select a queue map and access queue map configuration mode. | qos queue-map | Enter this command in global configuration mode.
Specify the number of queues for the queue map and access num-queues configuration mode.(1) | num-queues | Enter this command in queue map configuration mode.
Customize the mapping of priority groups to queues. | queue priority | Enter this command in num-queues configuration mode.

(1)  For information about the correlation between the number of ATMWFQ queues configured on a particular traffic card type and the corresponding number of PVCs allowed (per port and per traffic card), see Configuring Circuits.
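In practice, these three commands are entered in sequence. A minimal sketch (the two-queue map shown here mirrors the Custom2 example in Section 3.1):

[local]Redback(config)#qos queue-map Custom2
[local]Redback(config-queue-map)#num-queues 2
[local]Redback(config-num-queues)#queue 0 priority 0
[local]Redback(config-num-queues)#queue 1 priority 1 2 3 4 5 6 7
[local]Redback(config-num-queues)#exit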

2.2   Configure a Congestion Avoidance Map

By default, the SM family chassis drops packets at the end of the queue when the number of packets exceeds the configured maximum depth of the queue. A congestion avoidance map, when attached to an ATMWFQ, MDRR, or PWFQ scheduling policy, provides congestion management behavior for each queue defined by the policy.

To configure a congestion avoidance map, perform the tasks described in Table 3; enter all commands in congestion map configuration mode, unless otherwise noted.

Table 3    Configure a Congestion Avoidance Map

Task | Root Command | Notes
Create or select a congestion avoidance map and access congestion map configuration mode. | qos congestion-avoidance-map | Enter this command in global configuration mode.
Set the RED parameters for each queue in the map. | queue red | Perform this task for each queue in the map.
Set the exponential-weight for each queue in the map. | queue exponential-weight | Enter this command for each queue in the map.
Specify the depth of a queue. | queue depth | This command applies only to congestion avoidance maps for PWFQ policies. Enter this command for each queue in the map.
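The PWFQ examples in Section 3.5 reference a congestion avoidance map named map-red4p; the following is a hedged sketch of how such a map might be built. The mode prompt, parameter values, and argument ordering are assumptions (the queue red parameters mirror the per-queue syntax shown in Section 3.4.1); see the Command List for the exact congestion map syntax, including the per-profile DSCP options:

[local]Redback(config)#qos congestion-avoidance-map map-red4p
[local]Redback(config-congestion-map)#queue 0 red probability 10 weight 12 min-threshold 1900 max-threshold 5200
[local]Redback(config-congestion-map)#queue 0 exponential-weight 12
[local]Redback(config-congestion-map)#queue 0 depth 4000
[local]Redback(config-congestion-map)#exit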

2.3   Configure an EDRR Policy

To configure an EDRR policy, perform the tasks described in Table 4; enter all commands in EDRR policy configuration mode, unless otherwise noted.

Table 4    Configure an EDRR Policy

Task | Root Command | Notes
Create the policy name and access EDRR policy configuration mode. | qos policy edrr | Enter this command in global configuration mode.
Optional. Configure the policy with any or all of the following tasks: | |
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy. | num-queues | By default, the number of queues is 8.
Specify the depth of a queue. | queue depth | You can enter this command for each queue.
Set RED parameters per queue. | queue red | By default, RED is disabled.
Specify the traffic weight per queue. | queue weight | By default, the traffic weight is 0.
Set a rate limit for the policy. | rate | By default, there is no rate limit.

2.4   Configure an MDRR Policy

To configure an MDRR policy, perform the tasks described in Table 5; enter all commands in MDRR policy configuration mode, unless otherwise noted.

Table 5    Configure an MDRR Policy

Task | Root Command | Notes
Create the policy name and access MDRR policy configuration mode. | qos policy mdrr | Enter this command in global configuration mode.
Optional. Configure the policy by completing any or all of the following tasks: | |
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy. | num-queues | By default, the number of queues is 8.
Assign a congestion avoidance map to the policy. | congestion-map |
Specify the scheduling algorithm. | qos mode (MDRR) | By default, the mode is normal.
Specify the traffic weight per queue. | queue weight | By default, the traffic weight is 0.
Set a rate limit for the policy. | rate | By default, there is no rate limit.

2.5   Configure a PQ Policy

To configure a PQ policy, perform the tasks described in Table 6; enter all commands in PQ policy configuration mode, unless otherwise noted.

Table 6    Configure a PQ Policy

Task | Root Command | Notes
Create or select the policy and access PQ policy configuration mode. | qos policy pq | Enter this command in global configuration mode.
Optional. Configure the policy with any or all of the following tasks: | | Enter these commands in PQ policy configuration mode.
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy. | num-queues | By default, the number of queues is 8.
Specify the depth of a queue. | queue depth | You can enter this command for each queue.
Set a rate limit per queue. | queue rate | By default, there is no rate limit.
Set RED parameters per queue. | queue red | By default, RED is disabled.

2.6   Configure a PWFQ Policy

To configure a PWFQ policy, perform the tasks described in Table 7; enter all commands in PWFQ policy configuration mode, unless otherwise noted.

Table 7    Configure a PWFQ Policy

Task | Root Command | Notes
Create the policy name and access PWFQ policy configuration mode. | qos policy pwfq | Enter this command in global configuration mode.
Optional. Configure the policy with any or all of the following tasks: | |
Assign a queue map to the policy. | queue-map |
Specify the number of queues for the policy. | num-queues | By default, the number of queues is 8.
Assign a congestion avoidance map to the policy. | congestion-map |
Assign a priority and relative weight to each queue. | queue priority | Enter this command for each queue that you specified with the num-queues command.
Set the maximum and minimum rates for the policy. | rate | You must enter this command to specify the maximum rate; the minimum rate is optional. You cannot set a minimum rate if you also assign a relative weight to this policy.
Assign a relative weight to this policy. | weight | You cannot assign a relative weight if you also set a minimum rate for this policy.
Set the rate for each priority group. | queue priority-group | Enter this command for each priority group.

2.7   Configure User-Defined Port Group Map and Apply It to a Card

To configure a user-defined port group map and then apply it to a traffic card that supports port groups, perform the tasks described in Table 8; enter all commands in the specified configuration mode.

Table 8    Configure a User-Defined Port Group Map and Apply It to a Card

Task | Root Command | Notes
Define the name of a port group map for a specified traffic card and enter port group map configuration mode. | qos port-map (global) | Enter this command in global configuration mode.
Define a port group. | group | Enter this command in port group map configuration mode.
Apply the port group map you defined to the card you are configuring. | qos port-map (card) | Enter this command in card configuration mode. Specify the name of the user-defined port group map to apply to the card; the name is displayed as an option. The application of the port group map takes effect after a card reload.

2.8   Apply a Predefined or Default Port Group Map to a Card

To apply a predefined, or default port group map to a traffic card that supports port groups, perform the task described in Table 9; enter the command in the specified configuration mode.

Table 9    Apply a Predefined or Default Port Group Map to a Card

Task | Command | Notes
Apply a port group map to the card you are configuring. | qos port-map (card) | Enter this command in card configuration mode. Specify the name of the predefined or default port group map to apply to the card. The application of the port group map takes effect after a card reload. If no port group map is specified, the default port group map is applied.
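For example, applying the default map shown in Section 4 might look like the following sketch (the slot number and card type are illustrative, reusing the card from the Section 4 example):

[local]Redback(config)#card ge3-4-port 2
[local]Redback(config-card)#qos port-map tm_max_perf
[local]Redback(config-card)#end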

2.9   Operations Tasks

To monitor and administer QoS scheduling features, perform the appropriate tasks described in Table 10. Enter the debug command in exec mode; enter the show commands in any mode.

Table 10    Monitor and Administer QoS Features

Task | Command
Display the queue assignments for a QoS congestion avoidance map. | show qos congestion-map
Display information about one or more QoS ATMWFQ policies. | show qos policy atmwfq
Display information about one or more QoS EDRR policies. | show qos policy edrr
Display information about one or more QoS PQ policies. | show qos policy pq
Display information about one or more QoS PWFQ policies. | show qos policy pwfq
Display information about one or more configured QoS queue maps. | show qos queue-map
Display information about a specific QoS port group map, or all QoS port group maps, for a specific traffic card type or for all traffic card types that support port groups. | show qos port-map
Display information about the QoS port group map binding for a traffic card in a specific slot or for all configured traffic cards that support port groups. | show qos port-map bind
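For example, a quick check of PWFQ policies and port group map bindings from the CLI (output omitted; the exec prompt shown here is assumed):

[local]Redback#show qos policy pwfq
[local]Redback#show qos port-map bind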

3   Configuration Examples

The following sections provide examples of QoS scheduling configurations.

3.1   Queue Maps

The following example creates three queue maps and assigns a custom mapping of priority groups to queues, based on the number of queues configured:

[local]Redback(config)#qos queue-map Custom2

[local]Redback(config-queue-map)#num-queues 2

[local]Redback(config-num-queues)#queue 0 priority 0

[local]Redback(config-num-queues)#queue 1 priority 1 2 3 4 5 6 7

[local]Redback(config-num-queues)#exit



[local]Redback(config)#qos queue-map Custom4

[local]Redback(config-queue-map)#num-queues 4

[local]Redback(config-num-queues)#queue 0 priority 0

[local]Redback(config-num-queues)#queue 1 priority 1 2

[local]Redback(config-num-queues)#queue 2 priority 3 4 5 6

[local]Redback(config-num-queues)#queue 3 priority 7

[local]Redback(config-num-queues)#exit



[local]Redback(config)#qos queue-map Custom8

[local]Redback(config-queue-map)#num-queues 8

[local]Redback(config-num-queues)#queue 0 priority 0

[local]Redback(config-num-queues)#queue 1 priority 1

[local]Redback(config-num-queues)#queue 2 priority 2

[local]Redback(config-num-queues)#queue 3 priority 3

[local]Redback(config-num-queues)#queue 4 priority 4

[local]Redback(config-num-queues)#queue 5 priority 5

[local]Redback(config-num-queues)#queue 6 priority 6

[local]Redback(config-num-queues)#queue 7 priority 7

[local]Redback(config-num-queues)#exit

3.2   EDRR Policy

The following example configures the EDRR policy, example1, and assigns 30% of the circuit bandwidth to queue 3:

[local]Redback(config)#qos policy example1 edrr

[local]Redback(config-policy-edrr)#queue 3 weight 30

[local]Redback(config-policy-edrr)#exit

3.3   MDRR Policy

The following example configures the MDRR policy, example4, using strict mode with 4 queues and divides the bandwidth between the queues according to an approximate 50:30:10:10 ratio during periods of congestion:

[local]Redback(config)#qos policy example4 mdrr

[local]Redback(config-policy-mdrr)#qos mode strict

[local]Redback(config-policy-mdrr)#num-queues 4

[local]Redback(config-policy-mdrr)#queue-map Custom4

[local]Redback(config-policy-mdrr)#congestion-avoidance-map 

[local]Redback(config-policy-mdrr)#queue 0 rate 310000 burst 40000

[local]Redback(config-policy-mdrr)#queue 1 rate 186000 burst 40000

[local]Redback(config-policy-mdrr)#queue 2 rate 62000 burst 40000

[local]Redback(config-policy-mdrr)#queue 3 rate 62000 burst 40000

[local]Redback(config-policy-mdrr)#exit

3.4   PQ Policies

The following sections provide examples of PQ policies:

3.4.1   RED Parameters

The following example creates a PQ policy, red, and establishes RED parameters for each of the eight queues such that higher priority traffic has a lower probability of being dropped, and lower priority traffic has a higher probability of being dropped:

[local]Redback(config)#qos policy red pq

[local]Redback(config-policy-pq)#queue 0 red probability 10 weight 12 min-threshold 1900 max-threshold 5200

[local]Redback(config-policy-pq)#queue 1 red probability 9 weight 12 min-threshold 1850 max-threshold 5200

[local]Redback(config-policy-pq)#queue 2 red probability 8 weight 12 min-threshold 1800 max-threshold 5200

[local]Redback(config-policy-pq)#queue 3 red probability 7 weight 12 min-threshold 1750 max-threshold 5200

[local]Redback(config-policy-pq)#queue 4 red probability 6 weight 12 min-threshold 1700 max-threshold 5200

[local]Redback(config-policy-pq)#queue 5 red probability 5 weight 12 min-threshold 1650 max-threshold 5200

[local]Redback(config-policy-pq)#queue 6 red probability 4 weight 12 min-threshold 1600 max-threshold 5200

[local]Redback(config-policy-pq)#queue 7 red probability 1 weight 12 min-threshold 1550 max-threshold 5200

[local]Redback(config-policy-pq)#exit

3.4.2   Rate-Limiting

The following example configures a PQ policy with 4 queues and divides the bandwidth between the queues according to an approximate 50:30:10:10 ratio during periods of congestion. This guarantees that even the lowest priority queue gets a share of bandwidth in the presence of congestion and strict priority queuing:

[local]Redback(config)#qos policy pos-qos pq

[local]Redback(config-policy-pq)#num-queues 4

[local]Redback(config-policy-pq)#queue 0 rate 310000 burst 40000

[local]Redback(config-policy-pq)#queue 1 rate 130000 burst 40000

[local]Redback(config-policy-pq)#queue 2 rate 62000 burst 40000

[local]Redback(config-policy-pq)#queue 3 rate 62000 burst 40000

[local]Redback(config-policy-pq)#exit 

The following example uses rate-limiting to provide a customer with an access bandwidth that is less than the port speed; this is accomplished through the no-exceed keyword in the queue 0 rate command. The port is on an OC-12c/STM-4c traffic card and is configured to a maximum of 100 Mbps (instead of its port speed of 622 Mbps):

[local]Redback(config)#qos policy 100MbpsMaxBw pq

[local]Redback(config-policy-pq)#num-queues 1

[local]Redback(config-policy-pq)#queue 0 rate 100000 burst 12500 no-exceed

[local]Redback(config-policy-pq)#exit

The following example creates a policy, pos-rate, and rate-limits traffic in queue 0 to 300 Mbps when there is congestion on the port. When there is no congestion on the port, the limit is not imposed:

[local]Redback(config)#qos policy pos-rate pq

[local]Redback(config-policy-pq)#queue 0 rate 300000 burst 40000

[local]Redback(config-policy-pq)#exit

3.4.3   Backbone Application

In the following example, the PQ policy has eight priority queues, with DSCP values mapping into those eight queues toward the backbone (a 2.5-Gbps OC-48 uplink). Strict rate limits, listed in Table 11, are placed on the amount of traffic allowed into the backbone for each DSCP value.

Table 11    2.5-Gbps OC-48 Rate Limits

Queue Number | DSCP | Rate Limit
0 | NA | None
1 | NA | None
2 | Expedited Forwarding (EF) | 200 Mbps
3 | Assured Forwarding (AF), level 4 | 200 Mbps
4 | Assured Forwarding (AF), level 3 | 200 Mbps
5 | Assured Forwarding (AF), level 2 | 200 Mbps
6 | Assured Forwarding (AF), level 1 | 200 Mbps
7 | Default Forwarding (DF) | None

The configuration is as follows:

[local]Redback(config)#qos policy Diffserv pq

[local]Redback(config-policy-pq)#num-queues 8

[local]Redback(config-policy-pq)#queue 2 rate 200000 burst 25000 no-exceed

[local]Redback(config-policy-pq)#queue 3 rate 200000 burst 25000 no-exceed

[local]Redback(config-policy-pq)#queue 4 rate 200000 burst 25000 no-exceed

[local]Redback(config-policy-pq)#queue 5 rate 200000 burst 25000 no-exceed

[local]Redback(config-policy-pq)#queue 6 rate 200000 burst 25000 no-exceed

3.5   PWFQ Policies

The following examples provide configurations for types of priority scheduling:

In these examples, all policies are configured with four queues, a queue map, qpmap1, a congestion avoidance map, map-red4p, and a maximum bandwidth of 50 Mbps (50000 Kbps) for the policy; each of the four queues in the policy is assigned a priority and a relative weight, which specifies the percentage of the available bandwidth within its priority group.

3.5.1   Strict Priority

The following example configures the strict PWFQ policy for strict priority scheduling. Each queue has a unique priority and the same relative weight:

[local]Redback(config)#qos policy strict pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 100

[local]Redback(config-policy-pwfq)#queue 1 priority 1 weight 100

[local]Redback(config-policy-pwfq)#queue 2 priority 2 weight 100

[local]Redback(config-policy-pwfq)#queue 3 priority 3 weight 100

[local]Redback(config-policy-pwfq)#exit

3.5.2   Normal Priority

The following example configures the normal PWFQ policy for normal priority scheduling. All queues have the same priority; scheduling is based on the relative weight assigned to each queue. In this example, queue 0 receives 50% of the available bandwidth (25 Mbps), queue 1 receives 30% (15 Mbps), queue 2 receives 20% (10 Mbps), and queue 3 receives 10% (5 Mbps):

[local]Redback(config)#qos policy normal pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 50

[local]Redback(config-policy-pwfq)#queue 1 priority 0 weight 30

[local]Redback(config-policy-pwfq)#queue 2 priority 0 weight 20

[local]Redback(config-policy-pwfq)#queue 3 priority 0 weight 10

[local]Redback(config-policy-pwfq)#exit

3.5.3   Strict + Normal Priority

The following example configures the PWFQ policy, pwfq4 with two priority groups, 0 and 1.

Queues 0 and 1 have the same priority (group 0) and will be serviced before queues 2 and 3 (assigned to group 1). Within each priority group the queues are serviced in round-robin order, according to their assigned relative weights. For example, queue 0 receives 70% and queue 1 receives 30% of the bandwidth available for the group. Queues 2 and 3 are serviced only when queues 0 and 1 are empty; queue 2 receives 60% and queue 3 receives 40% of the available bandwidth for the group:

[local]Redback(config)#qos policy pwfq4 pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 70

[local]Redback(config-policy-pwfq)#queue 1 priority 0 weight 30

[local]Redback(config-policy-pwfq)#queue 2 priority 1 weight 60

[local]Redback(config-policy-pwfq)#queue 3 priority 1 weight 40

[local]Redback(config-policy-pwfq)#exit

3.5.4   Strict + Normal Priority with Maximum Priority-Group Bandwidth

The following example configures the pwfq4 policy as before, but adds a maximum bandwidth limitation for each priority group. In this case, the combined traffic in priority group 0 is limited to 10 Mbps (10000 Kbps), even when there is no traffic on the queues in priority group 1. Similarly, the combined traffic on queues 2 and 3 is limited to 1 Mbps (1000 Kbps), even when there is no traffic on queues 0 and 1:

[local]Redback(config)#qos policy pwfq4 pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 70

[local]Redback(config-policy-pwfq)#queue 1 priority 0 weight 30

[local]Redback(config-policy-pwfq)#queue priority-group 0 rate 10000

[local]Redback(config-policy-pwfq)#queue 2 priority 1 weight 60

[local]Redback(config-policy-pwfq)#queue 3 priority 1 weight 40

[local]Redback(config-policy-pwfq)#queue priority-group 1 rate 1000 

[local]Redback(config-policy-pwfq)#exit

3.5.5   Strict + Normal Priority with Maximum and Minimum Bandwidths

The following example configures the pwfq4 policy as before, but adds a minimum bandwidth guarantee of 10 Mbps (10000 Kbps) for the policy. In this configuration, the minimum bandwidth is guaranteed to the policy only if the next higher level of scheduling (for example, the scheduling policy applied towards an 802.1Q PVC) is in strict priority mode. If it is not, the minimum bandwidth is ignored:

[local]Redback(config)#qos policy pwfq4 pwfq

[local]Redback(config-policy-pwfq)#num-queues 4

[local]Redback(config-policy-pwfq)#queue-map qpmap1

[local]Redback(config-policy-pwfq)#congestion-map map-red4p

[local]Redback(config-policy-pwfq)#rate maximum 50000

[local]Redback(config-policy-pwfq)#rate minimum 10000

[local]Redback(config-policy-pwfq)#queue 0 priority 0 weight 70

[local]Redback(config-policy-pwfq)#queue 1 priority 0 weight 30

[local]Redback(config-policy-pwfq)#queue priority-group 0 rate 10000

[local]Redback(config-policy-pwfq)#queue 2 priority 1 weight 60

[local]Redback(config-policy-pwfq)#queue 3 priority 1 weight 40

[local]Redback(config-policy-pwfq)#queue priority-group 1 rate 1000 

[local]Redback(config-policy-pwfq)#exit

4   QoS Port Group Maps

The following example shows how to specify a user-defined QoS port group map named abc for the 10ge4-port-sm card type and then enter port group map configuration mode. In this mode, the example shows how to define two port groups for the 10ge4-port-sm card type, with two ports mapped to each port group. After defining the abc QoS port group map, the example shows how to apply this map to the 10ge4-port-sm card in slot 2. Note that when you enter qos port-map ? in card configuration mode in this case, the abc QoS port group map appears as an option:

[local]Redback(config)#qos port-map abc card-type 10ge4-port-sm

[local]Redback(config-port-group-map)#group 1 ports 1 2

[local]Redback(config-port-group-map)#group 2 ports 3 4

[local]Redback(config-port-group-map)#end

[local]Redback(config)#card ge3-4-port 2

[local]Redback(config-card)#qos port-map ?

abc User-defined

fwd_max_perf Predefined map optimized for forwarding performance

tm_max_perf Default map optimized for TM performance

[local]Redback(config-card)#qos port-map abc

Note: if the card is locked the changes will be applied to the card on its next reload

[local]Redback(config-card)#end