
What Is Data Center Bridging (DCB)?

Data Center Bridging (DCB) is a suite of technologies designed to optimize Ethernet in order to support high-performance computing and storage networks. By reducing packet loss and lowering latency, it improves network efficiency and reliability to meet the high-bandwidth, low-latency communication requirements within data centers. Key DCB technologies include Priority-based Flow Control (PFC), Enhanced Transmission Selection (ETS), and Data Center Bridging Exchange Protocol (DCBX).

Why Do We Need DCB?

On a converged data center network, storage area network (SAN) traffic, inter-process communication (IPC) traffic, and local area network (LAN) traffic have different quality of service (QoS) requirements:

  • SAN traffic is sensitive to packet loss and requires in-order delivery of packets.
  • IPC traffic is exchanged between servers and requires low latency.
  • LAN traffic allows some packet loss and is delivered on a best-effort (BE) basis.

A converged network also requires that different traffic types share links efficiently. Common Ethernet can meet neither this requirement nor the differentiated QoS requirements described above.

In response to these shortfalls of common Ethernet, IEEE 802.1 defines DCB, a set of enhancements to Ethernet for use in data center environments. DCB is used to build lossless Ethernet, meeting QoS requirements on converged data center networks.

Key Technologies of DCB

DCB integrates multiple protocols and mechanisms to optimize traditional Ethernet, addressing issues of packet loss and latency in high-performance computing and storage networks. Its core technologies include Priority-based Flow Control (PFC), Enhanced Transmission Selection (ETS), and Data Center Bridging Exchange Protocol (DCBX).

PFC

PFC is an enhancement to the Ethernet Pause mechanism. As shown in the following figure, eight priority queues on the transmit interface of DeviceA correspond to eight receive buffers on the receive interface of DeviceB. When a receive buffer on DeviceB is congested, DeviceB sends a backpressure signal "STOP" to DeviceA, requesting DeviceA to stop sending traffic in the corresponding priority queue.

PFC addresses the conflict between the Ethernet Pause mechanism and link sharing. It controls traffic only in one or several priority queues of an interface, rather than on the entire interface. Moreover, PFC can pause or restart any queue without interrupting traffic in other queues. This enables traffic of various types to share one link.

PFC working mechanism
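The per-queue backpressure described above can be sketched as a toy simulation. The classes and buffer limit below are illustrative assumptions, not a real device API:

```python
# Toy sketch of PFC per-priority backpressure (illustrative only).
# A sender keeps 8 priority queues; the receiver can pause any one
# of them with a per-priority "STOP" without affecting the others.

from collections import deque

NUM_PRIORITIES = 8

class Sender:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_PRIORITIES)]
        self.paused = [False] * NUM_PRIORITIES  # per-queue pause state

    def enqueue(self, priority, frame):
        self.queues[priority].append(frame)

    def transmit(self):
        """Send one frame from each queue that is not paused."""
        sent = []
        for p in range(NUM_PRIORITIES):
            if not self.paused[p] and self.queues[p]:
                sent.append((p, self.queues[p].popleft()))
        return sent

class Receiver:
    def __init__(self, buffer_limit=2):
        self.buffer_limit = buffer_limit
        self.buffers = [deque() for _ in range(NUM_PRIORITIES)]

    def receive(self, priority, frame, sender):
        self.buffers[priority].append(frame)
        # Buffer congested: signal "STOP" for this priority only.
        if len(self.buffers[priority]) >= self.buffer_limit:
            sender.paused[priority] = True

sender = Sender()
receiver = Receiver(buffer_limit=2)
for p in (3, 3, 5):
    sender.enqueue(p, f"frame-p{p}")

for _ in range(3):
    for p, frame in sender.transmit():
        receiver.receive(p, frame, sender)

print(sender.paused[3])  # True: queue 3 is paused after its buffer fills
print(sender.paused[5])  # False: queue 5 keeps flowing
```

Note how pausing priority 3 leaves priority 5 untouched, which is exactly what distinguishes PFC from the interface-wide Ethernet Pause mechanism.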

ETS

ETS implements more flexible QoS through hierarchical scheduling.

ETS provides two-level scheduling: based on priority groups (PGs) and based on priority queues (PQs), as shown in the following figure. A port first performs level-1 scheduling for PGs and then performs level-2 scheduling for PQs in the PGs.
ETS process

Unlike common QoS, ETS schedules traffic based on PGs: traffic of the same type is assigned to one PG, ensuring that it receives the same class of service (CoS).
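The two-level scheduling can be illustrated with a small bandwidth-allocation sketch. The PG names, weights, and queue-to-PG mapping below are made-up examples, not a standard profile:

```python
# Illustrative sketch of ETS two-level bandwidth allocation.
# Level 1 divides link bandwidth among priority groups (PGs) by weight;
# level 2 divides each PG's share among its member priority queues (PQs).

LINK_BANDWIDTH = 100  # e.g. 100 Gbit/s

# Level 1: PG -> weight (share of link bandwidth)
pg_weights = {"LAN": 30, "SAN": 50, "IPC": 20}

# Level 2: PG -> {priority queue: weight within the PG}
pq_weights = {
    "LAN": {0: 50, 1: 50},
    "SAN": {3: 100},
    "IPC": {5: 60, 6: 40},
}

def ets_allocation():
    alloc = {}
    total_pg = sum(pg_weights.values())
    for pg, pg_w in pg_weights.items():
        pg_share = LINK_BANDWIDTH * pg_w / total_pg   # level-1 scheduling
        total_pq = sum(pq_weights[pg].values())
        for pq, pq_w in pq_weights[pg].items():
            alloc[pq] = pg_share * pq_w / total_pq    # level-2 scheduling
    return alloc

print(ets_allocation())
# queue 3 (SAN) gets 50.0; LAN queues 0 and 1 get 15.0 each
```

Because the level-2 split happens inside each PG, reweighting the queues of one traffic type never takes bandwidth away from another type, which is the point of grouping traffic by PG.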

DCBX

DCBX is a link discovery protocol that enables devices at both ends of a link to discover and exchange DCB configurations, reducing the manual configuration workload of network administrators. DCBX provides the following functions:
  • Discovers the DCB configuration of the remote device.
  • Detects the DCB configuration errors of the remote device.
  • Configures DCB parameters of the remote device.

DCBX encapsulates DCB configurations (including PFC and ETS PG information) into Link Layer Discovery Protocol (LLDP) TLVs so that devices at both ends of a link can exchange DCB configurations.
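As a sketch of this encapsulation, the snippet below packs a PFC configuration into an LLDP Organizationally Specific TLV (type 127). The field layout (IEEE 802.1 OUI 00-80-C2, PFC subtype 0x0B, willing/capability flags) follows my reading of IEEE 802.1Qaz; treat it as an assumption and verify against the standard before relying on it:

```python
# Hedged sketch: building the DCBX PFC Configuration TLV as carried in LLDP.
# Field values here are assumptions based on IEEE 802.1Qaz, not verified
# against a real implementation.

import struct

def pfc_tlv(pfc_enable_bits, willing=False, capability=8):
    # TLV information string: 3-byte OUI + 1-byte subtype + PFC fields
    oui = b"\x00\x80\xc2"           # IEEE 802.1 OUI (assumed)
    subtype = 0x0B                  # PFC Configuration subtype (assumed)
    flags = (0x80 if willing else 0) | (capability & 0x0F)
    body = oui + bytes([subtype, flags, pfc_enable_bits])
    # LLDP TLV header: 7-bit type (127) and 9-bit length, big-endian
    header = (127 << 9) | len(body)
    return struct.pack("!H", header) + body

# Enable PFC on priorities 3 and 4 (bitmask 0b00011000)
tlv = pfc_tlv(0b00011000)
print(tlv.hex())
```

The per-priority enable bitmask is what lets the peer learn which queues are lossless, so both ends can apply a consistent PFC configuration without manual intervention.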

About This Topic
  • Author: Gao Yangyang
  • Updated on: 2026-02-14