Traffic Mirroring using Optical Taps

An optical tap is a simple, passive device inserted into a fiber-optic network link. It splits the light signal, forwarding most of the signal’s optical power along the main path and diverting a small portion to a separate monitoring port. Because the tap is passive, it introduces minimal latency, negligible in most practical scenarios. In essence, it diverts a fraction of the light to the monitoring port without actively modifying or buffering the packets.
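The split ratio of the tap determines the insertion loss on each path, which matters when budgeting optical power for the link. A quick sketch of the arithmetic (the 70/30 ratio is an illustrative choice, not a recommendation):

```python
import math

def split_loss_db(fraction: float) -> float:
    """Insertion loss in dB for a path carrying `fraction` of the input power."""
    return -10 * math.log10(fraction)

# A common 70/30 tap: 70% of the light stays on the main path,
# 30% is diverted to the monitoring port.
main_loss = split_loss_db(0.70)     # ~1.5 dB on the live link
monitor_loss = split_loss_db(0.30)  # ~5.2 dB on the monitor feed
```

In practice a small excess loss from the tap itself and its connectors is added on top of these figures.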

Key benefits:

  • Very low latency.

  • Doesn’t introduce a single point of failure for the link since it’s a passive device.

  • Doesn’t depend on network switch capacity or configuration.

Traffic Mirroring using Port Mirroring or SPAN

Port mirroring (also known as SPAN (Switched Port Analyzer) on Cisco devices and RSPAN/ERSPAN for remote capturing) is a feature on most managed switches. It copies the traffic from one or more switch ports (or VLANs) to a designated monitoring port. Tools such as intrusion detection systems or packet capture appliances can be connected to this monitoring port to receive a real-time feed of the mirrored traffic.

Key benefits:

  • Flexible: You can decide which ports or VLANs to monitor.

  • No need to physically insert a tap in the cable.

Potential considerations:

  • The switch CPU handles the mirroring, so it can be resource-intensive on some devices.

  • In heavily loaded links, mirrored traffic may be dropped if the switch is oversubscribed.
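The oversubscription risk above is easy to quantify: a full-duplex port can be mirrored in both directions, so the mirrored stream can reach twice the port’s line rate. A minimal sketch of the check (link speeds are illustrative):

```python
def span_oversubscription(source_gbps: list[float], monitor_gbps: float) -> float:
    """Ratio of worst-case mirrored traffic to monitor-port capacity.

    Each source figure is the combined rx+tx rate of a mirrored port;
    a ratio above 1.0 means the switch may drop mirrored packets.
    """
    return sum(source_gbps) / monitor_gbps

# Two 10G full-duplex ports mirrored to a single 10G monitor port:
# worst case is 2 x 20 Gbps of traffic into 10 Gbps of capacity.
ratio = span_oversubscription([20.0, 20.0], 10.0)  # 4.0: heavily oversubscribed
```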

Beeks has seen that Arista and Cisco switches are the most reliable mainstream switches on the market for mirroring packets. On Cisco switches, mirroring in SPAN mode is recommended, as ERSPAN mode is inefficient and can introduce timing problems.
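For illustration, a local SPAN session on a Cisco IOS switch is configured along these lines (interface names are placeholders; consult the platform documentation for model-specific limits on concurrent sessions):

```
monitor session 1 source interface GigabitEthernet1/0/1 both
monitor session 1 destination interface GigabitEthernet1/0/24
```

The `both` keyword mirrors traffic received and transmitted on the source port; `rx` or `tx` can be used to mirror a single direction.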

Traffic Mirroring from Server Network Cards

In addition to taps and switch-based methods, some environments use traffic mirroring directly from the network interface card (NIC) inside the server. Specialized NICs, such as Ubernic cards and other FPGA-accelerated or “smart” NICs, can replicate packets at ingress with extremely fine timestamp precision.

Key benefits:

  • Proximity to the application: Since mirroring occurs inside the server hosting the trading or application workload, you capture packets at the earliest possible point after they hit the host.

  • Ingress timestamping: Many of these NICs support hardware timestamping at the PHY/FPGA layer, often synchronized to PTP or GPS, with sub-nanosecond accuracy.

  • Reduced dependency on switch features: No need to configure SPAN sessions or consume switch resources.

  • Programmability: Some cards (e.g., Ubernic) allow user-defined filtering, selective replication, and telemetry insertion before the packets are exported.

Potential considerations:

  • NIC load: Care must be taken to ensure the NIC has sufficient resources to mirror the packets. This extra processing can also increase the NIC’s power draw, a factor to consider when the NIC is deployed.

  • Scalability: Each server whose traffic needs to be mirrored must be equipped with a capable NIC, whereas switch- or tap-based methods centralize replication.

  • Topology: This approach only gives visibility into traffic entering/leaving that host, not the broader network fabric.
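Hardware ingress timestamps from such NICs are commonly delivered alongside the captured frame, for example as a trailer appended to it. A minimal sketch of decoding one, assuming a hypothetical 8-byte trailer of big-endian seconds and nanoseconds (real NICs each define their own layout):

```python
import struct

def read_trailer_timestamp(frame: bytes) -> int:
    """Extract a hardware timestamp appended to the end of a captured frame.

    Assumes a hypothetical 8-byte trailer: 4-byte seconds followed by
    4-byte nanoseconds, big-endian. Real NICs document their own formats.
    """
    seconds, nanoseconds = struct.unpack(">II", frame[-8:])
    return seconds * 1_000_000_000 + nanoseconds

# A 60-byte frame followed by the 8-byte timestamp trailer:
frame = b"\x00" * 60 + struct.pack(">II", 1_700_000_000, 250)
stamp_ns = read_trailer_timestamp(frame)
```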

Traffic Mirroring using a Layer 1 Switch

A Layer 1 switch (sometimes called a “physical layer switch” or a “matrix switch”) is a hardware device that allows traffic replication by physically connecting input ports to multiple output ports at the physical layer. It can replicate signals at wire speed with very low latency impact, typically measured in nanoseconds.

Key benefits:

  • Wire-speed replication with extremely low latency.

  • Centralized control over multiple links and configurations.

Using a Packet Aggregator with Traffic Mirroring

A packet aggregator (often referred to as a “network packet broker”) can take multiple monitoring inputs, whether from taps, mirrored ports, or Layer 1 switches, and combine these streams into one or more aggregated data feeds. By doing this, a single analytics or monitoring device can more easily process traffic from multiple network segments.

Key functions of a packet aggregator:

  • Traffic Aggregation: Consolidates multiple ingress streams into a single output. This is particularly useful if your monitoring tool has fewer physical interfaces than the number of network links being monitored.

  • Filtering and Load Balancing: Can filter traffic to reduce bandwidth requirements (e.g., drop broadcast or irrelevant packets) or balance traffic across multiple capture/analyzer ports.

  • De-duplication: If the same traffic is seen on multiple feeds, a packet aggregator can remove duplicates.

  • High-Speed Interconnects: Some aggregators can output traffic on 10G, 40G, 100G or higher interfaces for consolidation into fewer capture ports.
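Of these functions, de-duplication is the easiest to sketch: hash an invariant portion of each packet and discard repeats. A minimal illustration that hashes whole packets (real brokers typically mask mutable fields such as the IP TTL and checksum before hashing):

```python
import hashlib

def dedupe(packets: list[bytes]) -> list[bytes]:
    """Drop packets whose bytes have already been seen in this batch."""
    seen: set[bytes] = set()
    unique = []
    for pkt in packets:
        digest = hashlib.sha256(pkt).digest()
        if digest not in seen:
            seen.add(digest)
            unique.append(pkt)
    return unique

# The same packet arriving on two monitoring feeds is emitted once:
feeds = [b"pkt-A", b"pkt-B", b"pkt-A"]
assert dedupe(feeds) == [b"pkt-A", b"pkt-B"]
```

Production de-duplicators bound the lookup with a time window rather than keeping every hash forever.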

When packets pass through a packet aggregator, metadata can be appended to each packet. This metadata often appears in the form of a small header inserted before the original packet data.

Common types of metadata include:

  • Source Port Identification: Indicates which physical input port (or which mirrored session) the packet came from. This is vital in environments monitoring numerous ports.

  • Timestamp: A highly accurate timestamp, often derived from GPS or PTP (Precision Time Protocol). Nanosecond or microsecond precision is common in modern capture systems. This is critical for latency-sensitive applications (financial trading, for instance).

  • VLAN or MPLS Tagging: Sometimes monitoring devices add or modify VLAN tags for internal handling or path identification.

This metadata allows operators to correlate captured packets across multiple network segments and precisely reconstruct timelines for troubleshooting, compliance, or analysis.
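Decoding such a metadata header is a fixed-offset parse. A sketch assuming a hypothetical 12-byte layout of a 2-byte source port ID, 2 reserved bytes, and a 4-byte seconds plus 4-byte nanoseconds timestamp (vendors each document their own format):

```python
import struct

def parse_meta_header(record: bytes) -> tuple[int, int, bytes]:
    """Split a captured record into (source_port, timestamp_ns, original_packet).

    Assumes a hypothetical big-endian header: u16 source port, u16 reserved,
    u32 seconds, u32 nanoseconds, followed by the untouched packet bytes.
    """
    source_port, _reserved, seconds, nanoseconds = struct.unpack(">HHII", record[:12])
    timestamp_ns = seconds * 1_000_000_000 + nanoseconds
    return source_port, timestamp_ns, record[12:]

# Build a record as the aggregator would, then recover its fields:
record = struct.pack(">HHII", 7, 0, 1_700_000_000, 500) + b"original-frame"
port, ts, payload = parse_meta_header(record)
```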

An additional benefit of a packet aggregator is that these switches typically combine ingress timestamping with deep buffers. Buffering in a packet aggregator temporarily stores packets in memory, which provides several benefits:

  1. Handling Bursts: Network traffic can be “bursty.” A buffer ensures short surges in traffic do not lead to immediate packet drops if the egress interface or the monitoring tool is temporarily saturated.

  2. Smoothing Traffic Flows: The aggregator can smooth out traffic so the downstream tool receives a more consistent data rate, avoiding overflow in the capture device’s buffers.

  3. Time Gap Preservation: Some aggregators allow for advanced timestamping while buffering so that short congestion periods do not corrupt the accuracy of timestamps.

  4. Controlled Forwarding: If the aggregator is performing advanced processing (e.g., filtering, load balancing), buffering allows the aggregator time to manage those tasks without losing data.
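The buffer depth needed to ride out a burst follows directly from the rate mismatch between ingress and egress. A sketch of the arithmetic (the rates and burst duration are illustrative):

```python
def buffer_bytes_needed(ingress_gbps: float, egress_gbps: float, burst_ms: float) -> int:
    """Bytes that accumulate while ingress exceeds egress for `burst_ms` milliseconds."""
    excess_gbps = max(ingress_gbps - egress_gbps, 0.0)
    # Gbps -> bytes/s (divide bits by 8), then scale by the burst duration in seconds.
    return int(excess_gbps * 1e9 / 8 * burst_ms / 1e3)

# A 1 ms burst at 40 Gbps draining into a 10 Gbps capture port accumulates
# (40 - 10) Gbps * 1 ms = 3.75 MB of buffered packets.
needed = buffer_bytes_needed(40.0, 10.0, 1.0)  # 3_750_000 bytes
```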

Because of the quality of the Napatech capture cards, many of these packet aggregation features are not required in a Beeks Analytics deployment. Whether you need a packet aggregator depends on whether you need any of these extra functions, how many ports of traffic you want to feed to Beeks Analytics, how close to the source of the packets you wish to timestamp, and whether there are other monitoring tools that you wish to feed the same traffic to.

Traffic Mirroring Timestamping

As mentioned above, packet timestamping is critical for measuring:

  • Latency (how long packets take between points)

  • Jitter / microbursts (short spikes in traffic)

  • Market data sequencing (ordering of messages in trading feeds)

  • Causality (did Event A trigger Event B?)

For low-latency trading environments, ingress timestamping is the gold standard. Ingress timestamping means that the packet is timestamped the instant it enters the device. This captures “real world” arrival time as close as possible to the wire, and minimizes uncertainty introduced by buffering, forwarding, or replication.
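Given ingress timestamps for the same packets at two observation points, latency and jitter reduce to simple arithmetic. A minimal sketch with timestamps in nanoseconds, matched by list index (real pipelines match packets by content rather than position):

```python
import statistics

def latency_stats(t_in: list[int], t_out: list[int]) -> tuple[float, float]:
    """Mean latency and jitter (population stdev) in ns across matched packet pairs."""
    latencies = [b - a for a, b in zip(t_in, t_out)]
    return statistics.mean(latencies), statistics.pstdev(latencies)

# Three packets timestamped on ingress at point A and again at point B:
mean_ns, jitter_ns = latency_stats([100, 200, 300], [950, 1000, 1250])
# per-packet latencies are 850, 800 and 950 ns
```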

Beeks Analytics supports many different timestamping formats, including all common ingress timestamping formats. See the decoder list for more details.