Nokia SR Platform: FP4 and FP5 Buffering

Nokia's FP (Forwarding Plane) architecture in the 7750 SR platform uses a fully buffered design — packet buffering is deeply integrated into both ingress and egress pipelines across all generations.

Both FP4 and FP5:

- use a fully shared buffer pool, dynamically allocated across ports, queues, and services
- run a pre-buffer stage that absorbs bursts before classification
- couple buffering tightly to QoS scheduling and WRED congestion avoidance
- provide 128k ingress and 128k egress queues by default

FP4 Buffering

Buffer Size and Structure

FP4 line cards carry roughly 64 GB of packet buffer per line card, organized as a single fully shared pool rather than fixed per-port carve-outs.

Pre-buffer (front-end buffering)

A front-end pre-buffer absorbs multi-million-packet bursts before classification, so transient spikes are held rather than dropped at the port.

Queue Model

FP4 provides 128k ingress and 128k egress queues by default, with queue resources reallocated between directions in 8k increments.

Interface-Level Behavior

FP4 targets interfaces up to 400G.

FP4 buffering enables:

- deterministic forwarding behavior under congestion
- burst absorption at line rate
- oversubscription of aggregation ports without breaching committed SLAs

FP4 introduced "intelligent aggregation": buffering allows oversubscription while maintaining SLA guarantees.
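The oversubscription-with-SLA idea can be sketched as a simple admission check: the sum of committed rates (CIR) must fit the port, while peak rates (PIR) are allowed to oversubscribe it. The function and parameter names below are illustrative, not SR OS constructs.

```python
def admit_service(services, new_cir, new_pir, port_rate):
    """services: list of (cir, pir) tuples already on the port.
    Admit a new service only if all committed rates still fit the
    port; peak rates may oversubscribe because the buffer absorbs
    the excess. Purely illustrative, not an SR OS API."""
    committed = sum(cir for cir, _ in services) + new_cir
    return committed <= port_rate  # the SLA floor stays guaranteed

def oversubscription_ratio(services, port_rate):
    """How far the sum of peak rates exceeds the physical port."""
    return sum(pir for _, pir in services) / port_rate
```

On a 10G port carrying two services of 2G CIR / 10G PIR each, peak demand is 2x oversubscribed, yet a new 5G-CIR service is still admissible because committed rates total only 9G.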


FP5 Buffering

Buffer Size and Pre-buffer

FP5 line cards carry 32–64 GB of buffer per line card, with a pre-buffer sized for roughly 10.8M–21.6M packets, again organized as a fully shared pool.

Architectural Enhancements over FP4

Improved buffer efficiency — better utilization of shared memory, reduced head-of-line blocking, more effective burst absorption at high speeds.

Tighter pre-buffer integration — closer coupling between pre-buffer, forwarding pipeline, and QoS scheduler.

Higher scale queueing — queue reallocation increments doubled to 16k (vs 8k in FP4).

Higher throughput — FP5 roughly doubles forwarding capacity over FP4, from ~3 Tbps to ~6 Tbps per NPU.

Interface-Level Behavior

FP5 targets 400G and 800G interfaces.

FP5 is optimized for:

- high-density 400G/800G aggregation
- higher effective utilization of a similar raw buffer size
- consistent latency under sustained congestion

FP5 does not significantly increase raw buffer size over FP4, but delivers higher effective buffer utilization, greater aggregation scale, and improved latency consistency under congestion.


FP4 vs FP5 Comparison

Feature                    FP4                          FP5
Buffer size                ~64 GB/LC                    32–64 GB/LC
Pre-buffer                 Multi-million packets        ~10.8M–21.6M packets
Buffer model               Fully shared                 Fully shared (enhanced)
Queue realloc increment    8k                           16k
Default queues             128k ingress / 128k egress   128k ingress / 128k egress
Pipeline integration       Pre-buffer + forwarding      Tighter integration
Throughput class           ~3 Tbps                      ~6 Tbps
Target interfaces          Up to 400G                   400G / 800G
Key strength               Deterministic buffering      Efficiency + scale

Core Architectural Concepts

Shared Buffer Pools

Memory is dynamically allocated across ports, queues, and services. This maximizes utilization under mixed workloads — a port with no traffic does not waste buffer that a congested port could use.
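The dynamic-allocation behavior can be modeled as a single pool that any queue may draw from until it is exhausted. This is a toy sketch with invented names, not an SR OS interface; it only illustrates why an idle queue reserves nothing that a congested queue could use.

```python
class SharedBufferPool:
    """Toy model of a fully shared buffer pool (illustrative only)."""

    def __init__(self, total_bytes):
        self.total = total_bytes
        self.used = 0
        self.per_queue = {}  # bytes currently held per queue

    def enqueue(self, queue_id, pkt_len):
        # Any queue may draw from the shared pool; an idle queue
        # holds no reserved memory that others cannot use.
        if self.used + pkt_len > self.total:
            return False  # pool exhausted: tail drop
        self.used += pkt_len
        self.per_queue[queue_id] = self.per_queue.get(queue_id, 0) + pkt_len
        return True

    def dequeue(self, queue_id, pkt_len):
        # Forwarding a packet returns its bytes to the shared pool.
        self.used -= pkt_len
        self.per_queue[queue_id] -= pkt_len
```

With a 3000-byte pool, two queues can each buffer a 1500-byte packet; a third packet is dropped until one of them drains, regardless of which queue it belongs to.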

Pre-buffering

The first stage of packet handling absorbs bursts before classification. This protects high-priority traffic flows from drops during transient congestion spikes, even before QoS scheduling has acted.
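A minimal simulation of this front-end stage, assuming a fixed-capacity FIFO that fills from arriving bursts and drains at the pipeline's pace (the function and parameters are invented for illustration):

```python
from collections import deque

def prebuffer_absorb(arrivals, drain_per_tick, capacity):
    """Toy pre-buffer model: each tick, a burst of packets arrives
    before any classification; the pipeline drains a fixed number.
    Returns (delivered, dropped). Illustrative names, not SR OS terms."""
    buf = deque()
    delivered = dropped = 0
    for burst in arrivals:
        for pkt in range(burst):
            if len(buf) < capacity:
                buf.append(pkt)       # burst absorbed pre-classification
            else:
                dropped += 1          # pre-buffer overflow
        for _ in range(min(drain_per_tick, len(buf))):
            buf.popleft()
            delivered += 1
    while buf:                        # drain the tail after arrivals stop
        buf.popleft()
        delivered += 1
    return delivered, dropped
```

A 10-packet burst into an 8-packet pre-buffer with a 4-packet/tick pipeline loses only the 2 packets that exceed capacity; the rest ride out the spike and are delivered.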

QoS-Coupled Buffering

Buffers are tightly linked to queue scheduling and congestion avoidance (e.g., WRED). This enables deterministic forwarding behavior: the system can make drop decisions based on queue depth with high precision.
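The queue-depth-based drop decision can be illustrated with the classic WRED curve: no drops below a minimum threshold, certain drop above a maximum, and a linear ramp in between. This is the generic WRED algorithm, not Nokia's specific slope-policy implementation.

```python
def wred_drop_probability(avg_depth, min_th, max_th, max_p):
    """Classic WRED: drop probability grows linearly with average
    queue depth between min_th and max_th, capped at max_p."""
    if avg_depth < min_th:
        return 0.0          # no congestion: never drop
    if avg_depth >= max_th:
        return 1.0          # queue saturated: drop everything
    # linear ramp between the two thresholds
    return max_p * (avg_depth - min_th) / (max_th - min_th)
```

Because the pool is shared and queue depths are tracked precisely, per-queue thresholds like these let the scheduler shed low-priority traffic early while high-priority queues stay untouched.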

Queue Flexibility

Queues can be reallocated between ingress and egress. This does not impact total buffer pool size — it only changes how queue resources are divided.
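The split can be sketched as dividing a fixed queue budget between directions in hardware increments (8k on FP4, 16k on FP5), with the buffer pool untouched. The function below is an invented illustration of that rounding, not an SR OS command.

```python
def reallocate_queues(total_queues, ingress_share, increment):
    """Split a fixed queue budget between ingress and egress,
    rounded to the platform's reallocation increment. The shared
    buffer pool size is unaffected; only queue counts move."""
    ingress = round(total_queues * ingress_share / increment) * increment
    ingress = max(0, min(total_queues, ingress))   # clamp to the budget
    egress = total_queues - ingress                # remainder goes egress
    return ingress, egress
```

Note how the coarser FP5 increment still lands on clean splits for large budgets: shifting a 256k budget to 60% ingress snaps to 160k/96k in 16k steps.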

