🎨The Forgotten Art of Far Side Signaling in Ethernet🥅


The migration from Time-Division Multiplexing (TDM) to Ethernet was a revolutionary leap in networking. Ethernet brought scalability, cost-efficiency, and flexibility to the forefront. However, in this transition, an essential feature of TDM was lost: far side signaling. This loss has introduced challenges in modern networking, particularly in path determination and fault resolution.
What Is Far Side Signaling?
In traditional TDM networks, far side signaling allowed devices at both ends of a connection to exchange status information directly. For example:
A failed circuit could notify its counterpart to stop sending data, preventing unnecessary retransmissions.
Circuit status (e.g., up/down, congested, degraded) could be conveyed instantaneously, allowing for near real-time decision-making.
This capability was built into the physical layer and operated independently of higher-level protocols, enabling swift and reliable fault detection and response.
Ethernet’s Shortcoming | The Loss of Far Side Signaling
When networks transitioned to Ethernet, far side signaling became a casualty of the shift. Ethernet's design prioritised simplicity and scalability but lacked robust mechanisms for fault signaling at the physical layer. Instead:
Faults like link failures or degradation are often detected by higher-level protocols such as OSPF or BGP.
These routing protocols take time to converge and typically infer link status from secondary indicators such as missed hellos, expired keepalives, or observed packet loss.
This introduces delays that can significantly impact performance, especially in time-sensitive applications like financial transactions, VoIP, or industrial control systems.
Why Relying on Higher-Level Protocols Is Problematic
1. Latency in Fault Detection
Higher-level protocols are inherently slower because they require:
Detection of link failure (e.g., BGP session timeouts).
Recalculation of routes based on updated topology.
With default timers this delay can stretch from tens of seconds to minutes, and even aggressively tuned deployments measure it in hundreds of milliseconds, an eternity for critical applications.
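To put the gap in perspective, here is a rough comparison using textbook default timer values. Real deployments tune these, so treat the figures as illustrative orders of magnitude rather than guarantees:

```python
# Illustrative worst-case failure-detection times for common mechanisms,
# using widely cited default timer values. All three share the same
# pattern: a peer is declared down after N consecutive missed intervals.

def detection_time_ms(interval_ms: float, multiplier: int) -> float:
    """Time until a silent peer is declared down."""
    return interval_ms * multiplier

# BGP defaults: 60 s keepalive, hold time = 3 x keepalive = 180 s
bgp_ms = detection_time_ms(60_000, 3)
# OSPF defaults: 10 s hello, dead interval = 4 x hello = 40 s
ospf_ms = detection_time_ms(10_000, 4)
# BFD (an aggressive but common profile): 50 ms interval, multiplier 3
bfd_ms = detection_time_ms(50, 3)

print(f"BGP : {bgp_ms / 1000:>6.2f} s")   # 180.00 s
print(f"OSPF: {ospf_ms / 1000:>6.2f} s")  #  40.00 s
print(f"BFD : {bfd_ms / 1000:>6.2f} s")   #   0.15 s
```

Three orders of magnitude separate default BGP detection from a BFD profile, which is the core of the argument in the sections that follow.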
2. Packet Loss Before Detection
Without immediate signaling, packets continue to flow toward failed or degraded links until higher-layer protocols detect the problem. This results in retransmissions and degraded network performance.
3. Inefficient Path Selection
Higher-level protocols often lack real-time visibility into link quality metrics like jitter, packet loss, or latency, leading to suboptimal routing decisions.
Alternatives to Mitigate the Loss
1. Carrier Ethernet
Carrier Ethernet introduces mechanisms like Link Layer Discovery Protocol (LLDP) and Operations, Administration, and Maintenance (OAM), which partially restore far side signaling capabilities. For instance:
Ethernet Link OAM (IEEE 802.3ah) enables per-link fault detection and remote diagnostics at the data link layer, including remote failure indications such as Dying Gasp.
Y.1731 OAM provides performance monitoring of frame delay, delay variation (jitter), and frame loss, metrics that are critical for SLA adherence.
Carrier Ethernet is particularly effective for service providers, where maintaining high reliability and performance is essential.
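As an illustration of how Y.1731-style delay measurement works: the two-way frame delay is derived from four timestamps, and subtracting the responder's processing time removes any clock offset between the two ends. A minimal sketch, with made-up timestamp values:

```python
# Sketch of the two-way frame-delay calculation behind Y.1731 DMM/DMR
# messages. Because (t3 - t2) is measured entirely on the responder's
# clock, the two endpoints never need synchronised clocks.

def twoway_frame_delay(t1: float, t2: float, t3: float, t4: float) -> float:
    """t1: DMM sent, t2: DMM received, t3: DMR sent, t4: DMR received.
    Returns round-trip delay minus the responder's processing time."""
    return (t4 - t1) - (t3 - t2)

# Illustrative values: 10 ms round trip, 2 ms responder processing
delay = twoway_frame_delay(0.000, 0.004, 0.006, 0.010)
print(f"{delay * 1000:.1f} ms")  # 8.0 ms
```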
2. Bidirectional Forwarding Detection (BFD)
BFD (RFC 5880) operates independently of routing protocols and provides near real-time fault detection. It works by exchanging rapid hello packets between endpoints and declaring the session down after a configurable number of missed intervals. However, BFD requires additional configuration and is not natively part of Ethernet.
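The detection principle fits in a few lines. This sketch illustrates only the timeout logic, not the RFC 5880 protocol itself, which adds state machines, interval negotiation, and authentication:

```python
import time

# BFD-style liveness detection: a session is declared down after
# detect_mult consecutive intervals pass with no heartbeat received.

class HeartbeatDetector:
    def __init__(self, interval: float = 0.05, detect_mult: int = 3):
        self.detect_time = interval * detect_mult   # e.g. 150 ms
        self.last_seen = time.monotonic()

    def heartbeat(self) -> None:
        """Call whenever a control packet arrives from the peer."""
        self.last_seen = time.monotonic()

    def is_up(self) -> bool:
        return (time.monotonic() - self.last_seen) < self.detect_time

d = HeartbeatDetector(interval=0.05, detect_mult=3)
print(d.is_up())   # True: heartbeat just recorded
time.sleep(0.2)    # silence for longer than the 150 ms detect time
print(d.is_up())   # False: peer declared down
```

Compare the 150 ms detect time here with the multi-second (or multi-minute) defaults of routing-protocol timers discussed above.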
3. Software-Defined Networking (SDN)
SDN decouples the control plane from the data plane, allowing for centralised monitoring and dynamic reconfiguration. While SDN does not reintroduce far side signaling at the Ethernet layer, it provides a more holistic solution for rapid fault detection and recovery.
4. Segment Routing & Path Monitoring
Modern routing frameworks like Segment Routing (SR) enhance path visibility and allow for more granular control of packet flows, partially compensating for Ethernet's shortcomings.
The Case for SD-WAN in Mitigating These Challenges
SD-WAN, particularly solutions like Fusion’s SD-WAN, addresses many of the issues introduced by Ethernet’s lack of far side signaling.
Encrypted Overlays: Fusion creates secure, redundant tunnels that can dynamically reroute traffic in response to link degradation.
Proactive Monitoring: Fusion’s packet loss mitigation and WAN optimisation features ensure link health is continuously monitored at a granular level.
Instant Failover: Fusion doesn’t wait for higher-level protocols to detect faults. It uses built-in mechanisms to detect and mitigate issues in milliseconds.
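The idea behind this kind of metric-driven path selection can be sketched as follows. The `Link` shape and the weightings are my own illustrative assumptions, not Fusion's actual algorithm:

```python
from dataclasses import dataclass

# Hypothetical SD-WAN path selection: score each link from measured
# loss, latency, and jitter, and steer traffic to the best scorer.

@dataclass
class Link:
    name: str
    loss_pct: float     # measured packet loss (%)
    latency_ms: float   # measured latency
    jitter_ms: float    # latency variation

def score(link: Link) -> float:
    """Lower is better; loss is weighted hardest since it hurts most."""
    return link.loss_pct * 50 + link.latency_ms + link.jitter_ms * 2

def best_path(links: list[Link]) -> Link:
    return min(links, key=score)

links = [
    Link("fibre", loss_pct=0.0, latency_ms=8, jitter_ms=1),
    Link("lte", loss_pct=1.5, latency_ms=40, jitter_ms=12),
]
print(best_path(links).name)   # fibre

# The fibre link degrades: the next evaluation fails over immediately,
# without waiting for any routing protocol to notice.
links[0].loss_pct = 5.0
print(best_path(links).name)   # lte
```

Because the scores come from continuous active measurement rather than protocol timeouts, failover happens as soon as the next evaluation runs, which is the behaviour the bullet points above describe.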
For businesses dependent on Ethernet or broadband links, SD-WAN bridges the gap, restoring the reliability once provided by TDM’s far side signaling.
Wrap
The shift to Ethernet brought incredible advantages, but at the cost of losing far side signaling. While Ethernet was not designed with this capability, modern alternatives like Carrier Ethernet and SD-WAN have evolved to fill the void. For businesses seeking a robust, real-time solution, Fusion SD-WAN stands out as the ideal choice, offering the reliability of TDM with the flexibility of modern networking.
In networking, speed and reliability are everything. Don’t let the loss of far side signaling slow you down—choose a solution built for the demands of today’s interconnected world.
Written by

Ronald Bartels
Driving SD-WAN Adoption in South Africa