Internet-Draft              Framework for HP-WAN                  May 2025
Xiong, et al.             Expires 8 November 2025
This document defines a framework that enables host-network collaboration for high-speed, high-throughput data transmission within a requested completion time in High Performance Wide Area Networks (HP-WANs). In particular, it enhances congestion control and provides the functions the host needs to collaborate with the network on rate negotiation, including QoS policy, admission control, and traffic scheduling.
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on 8 November 2025.
Copyright (c) 2025 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.
Data-intensive applications, such as those in scientific research, academia, and education discussed in [I-D.kcrh-hpwan-state-of-art], as well as applications in public networks described in [I-D.yx-hpwan-uc-requirements-public-operator], demand high-speed data transmission over WANs. The specific requirements of HP-WAN applications mainly concern job-based massive data transfers over long-distance WANs that must complete within a requested time. High effective throughput is the fundamental requirement of HP-WAN, and it is crucial to achieve it while using the available capacity efficiently, as discussed in [I-D.xiong-hpwan-problem-statement]. Performance is impacted by issues in existing transport protocols and congestion control mechanisms, such as slow convergence, long feedback loops, and unscheduled traffic.
Multiple data transfer requests should be scheduled according to the available capacity and the requested completion time. From the routing perspective, the optimal path and resources should be scheduled based on the QoS policy so that high-speed flows traverse the network at the negotiated rate. From the transport perspective, reliable delivery should be ensured with traffic scheduling and admission control that effectively handle data flows during transmission, reducing congestion and ensuring timely delivery of data packets. The host should signal and collaborate with the network to negotiate the rates of differentiated traffic (especially when the traffic is encrypted) to avoid congestion and optimize the overall efficiency of data transfer.
This document defines a framework for a protocol or signaling that enables host-network collaboration for high-speed, high-throughput data transmission within a requested completion time in High Performance Wide Area Networks (HP-WANs). In particular, it enhances congestion control and provides the functions the host needs to collaborate with the network on rate negotiation, including QoS policy, admission control, and traffic scheduling.
This document uses the terms defined in [I-D.kcrh-hpwan-state-of-art] and [I-D.xiong-hpwan-problem-statement]:
The framework is formulated to enable host-network collaboration with more active network involvement. The client and server can adjust their rates efficiently and rapidly using negotiated rate-based congestion control in a fine-grained way. The network can better regulate traffic and schedule resources, providing predictable network behaviour and preemptively mitigating incast congestion.
The following diagram illustrates the functionalities between Client/Server and WAN including:
*Host-network collaboration signalling or protocol
*Active network-collaborated traffic scheduling and enforcement
*Negotiated rate-based congestion control algorithms
            +-----------------------------------------------------+
            |                         WAN                         |
+--------+  |                                                     |  +--------+
|        |  |    +----+----+   +-------------+   +----+----+      |  |        |
| Client |<------>|Edge Node|...|Transit Nodes|...|Edge Node|<------->| Server |
|        |  |    +----+----+   +-------------+   +----+----+      |  |        |
+--------+  |                                                     |  +--------+
            +-----------------------------------------------------+
*collaboration                                          *collaboration
signalling/protocols                              signalling/protocols
\_________/ \_____________________________________________________/
 *Negotiated rate-based        *Active network-collaborated
  congestion control            scheduling and enforcement
  algorithms
The following diagram illustrates the workflows among the client, server, and network nodes (e.g., edge nodes and transit nodes). A request for scheduled traffic is signaled from the client to the network to negotiate a rate; it carries the traffic pattern and requirements such as the completion time. An acknowledgement is signaled back from the network to the client, including the negotiated rate and the QoS policy under which the client may send traffic, as well as the fast and accurate quantitative feedback produced when the edge node performs admission control.
The functions are described in the sections below, including transport-related technologies such as rate negotiation, admission control, and traffic scheduling and enforcement, and routing-related technologies such as traffic engineering, resource scheduling, and load balancing.
+--------+                +-----------+    +------------+   +-----------+      +--------+
| Client |                | Edge Node |    |Transit Node|   | Edge Node |      | Server |
+----+---+                +-----+-----+    +-----+------+   +-----+-----+      +----+---+
     |                          |                |                |                 |
     |Requests(traffic pattern) |                |                |                 |
     |------------------------->| *Rate negotiation              |                 |
     |                          | *Traffic scheduling            |                 |
     | Acknowledgement          | *Admission control             |                 |
     | (negotiated rate)        | *Resource scheduling           |                 |
     |<-------------------------| *Negotiated rate-based         |                 |
     |                          |  traffic engineering           |                 |
     |                          |<###############################>|                 |
     |                          |                |                |     Traffic     |
     | Traffic(Negotiated-rate) |    Traffic(Negotiated-rate)    |(Negotiated-rate)|
     |------------------------->|********************************>|---------------->|
     |                          |                |                |                 |
     | Traffic(Wrong-rate)      |                |                |    Exceeding    |
     |------------------------->|                |                |    threshold    |
     |                          |                | *Flow control  |<----------------|
     |                          | *Flow control  |<---------------|                 |
     | Fast Feedback            |<---------------|                |                 |
     |<-------------------------|                |                |                 |
     V                          V                V                V                 V
In HP-WAN, the host can negotiate the sending rate with the network because jobs are predictable. The client communicates the traffic patterns of its high-speed flows to the network to negotiate a rate. The traffic pattern may include information such as job ID, start time, completion time, data volume, and traffic type. The network responds with the negotiated rate and the QoS policy under which the client may send traffic. There are three kinds of rate policy:
*Optimal rate or optimal rate range negotiation. The network reserves resources for high-speed data to guarantee transmission capacity and achieve optimal-rate transmission. The client can transmit flows at the negotiated optimal rate or within the negotiated rate range.
*Minimum rate negotiation. The network provides a minimum resource guarantee. The client can transmit at a rate not less than the negotiated rate.
*Maximum rate negotiation. The network provides an upper limit on the resource guarantee. The client can transmit at a rate not greater than the negotiated rate.
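As a non-normative illustration of the exchange above, the following Python sketch models a rate request carrying the traffic pattern and a response carrying the negotiated rate. All field names, the enum values, and the deadline-derived rate formula are assumptions for illustration, not part of the framework:

```python
from dataclasses import dataclass
from enum import Enum

class RatePolicy(Enum):
    OPTIMAL = "optimal"   # optimal rate or optimal rate range
    MINIMUM = "minimum"   # floor: client sends at >= negotiated rate
    MAXIMUM = "maximum"   # ceiling: client sends at <= negotiated rate

@dataclass
class RateRequest:
    """Traffic pattern signaled from client to network (illustrative fields)."""
    job_id: str
    start_time: float        # seconds
    completion_time: float   # requested deadline, seconds
    data_volume_bytes: int
    traffic_type: str
    policy: RatePolicy

@dataclass
class RateResponse:
    job_id: str
    negotiated_rate_bps: int
    policy: RatePolicy

def negotiate(req: RateRequest, available_bps: int) -> RateResponse:
    """Derive a rate that finishes the job by its deadline, capped by capacity."""
    duration = max(req.completion_time - req.start_time, 1e-9)
    needed_bps = int(req.data_volume_bytes * 8 / duration)
    return RateResponse(req.job_id, min(needed_bps, available_bps), req.policy)
```

For example, a 125 MB job with a 10-second deadline needs 100 Mbit/s, which is granted in full when the available capacity is 200 Mbit/s.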
The network node (e.g., an edge node) performs rate-based traffic scheduling and enforcement. For example, traffic classification may be needed based on the traffic type: critical traffic to be accelerated should have its QoS priority upgraded, and traffic that needs guaranteed QoS should be given guaranteed bandwidth for its flow. The node may also aggregate mouse flows or split an elephant flow if needed. Splitting data across multiple paths for load balancing can increase throughput and provide redundancy; if one path experiences congestion, alternate paths compensate, ensuring timely delivery. Traffic enforcement at the network edge can be used to regulate data flow, eliminate congestion, and minimize flow completion time; for example, it can enforce rate limits on ingress traffic based on the negotiated rate.
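One common way an edge node could enforce a negotiated rate limit is a token bucket; the sketch below is a minimal, non-normative example of that technique (the framework does not mandate any particular enforcement mechanism). The clock is passed in explicitly to keep the sketch deterministic:

```python
class TokenBucket:
    """Token-bucket rate limiter: admits traffic at the negotiated rate,
    with a small burst allowance (the bucket depth)."""

    def __init__(self, rate_bps: float, burst_bytes: float, now: float = 0.0):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = now

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False   # non-conforming: drop, queue, or trigger feedback
```

A bucket configured with `rate_bps=8000` refills 1000 bytes per second, so a 600-byte packet rejected at t=0 becomes admissible half a second later.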
The network node should perform admission and traffic control based on the negotiated QoS and rate. By combining admission control with congestion control, it can provide high throughput within the completion time while efficiently using the available network capacity. The admission control strategy differs according to the QoS policy. For example, one strategy is to immediately grant or reject a reservation request on its arrival, called on-demand admission control. If a reservation request cannot be granted or rejected at the time of its arrival, it is placed in a queue, called queue-based admission control. A time-slot based admission control is used for scheduling elastic flow requests.
The specific elements along the path should provide active and precise flow control to mitigate network congestion and sustain the negotiated rate for a flow. Flow control is a method for ensuring that data is transmitted efficiently and reliably, controlling the rate of data transmission to prevent a fast sender from overwhelming a slow receiver and to prevent packet loss in congested situations. For example, the receiver node can signal the sender node to turn the traffic on or off to avoid packet loss. When the data sent by the client exceeds a threshold, the network should provide fast and accurate quantitative feedback to turn the traffic on or off.
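The on/off feedback above resembles classic watermark-based (XON/XOFF-style) flow control. The sketch below is one possible model, with hypothetical signal names and high/low watermarks standing in for the document's unspecified threshold:

```python
from typing import Optional

class FlowController:
    """Receiver-side sketch: signal the sender off when buffered data crosses
    a high-water mark, and back on once it drains below a low-water mark."""

    def __init__(self, high_water: int, low_water: int):
        self.high = high_water
        self.low = low_water
        self.buffered = 0
        self.sender_on = True

    def enqueue(self, nbytes: int) -> Optional[str]:
        self.buffered += nbytes
        if self.sender_on and self.buffered > self.high:
            self.sender_on = False
            return "XOFF"   # fast feedback: pause the sender
        return None

    def dequeue(self, nbytes: int) -> Optional[str]:
        self.buffered = max(0, self.buffered - nbytes)
        if not self.sender_on and self.buffered < self.low:
            self.sender_on = True
            return "XON"    # resume sending at the negotiated rate
        return None
```

Using two watermarks instead of one threshold avoids rapid on/off oscillation when the buffer hovers near the limit.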
The client should improve its congestion control algorithm based on the rate negotiated with the network. The negotiated rate can be viewed as an initial congestion signal that helps the client select a suitable sending rate from the network's resource scheduling acknowledgement. The client also needs to turn its sending off or on, or adjust the rate, reasonably and rapidly upon receiving fast feedback from the node nearest to it.
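A minimal sketch of such a client-side controller is shown below: it starts at the negotiated rate rather than probing up from a small initial window, and reacts to fast feedback. The signal names and the 0.8 multiplicative-decrease factor are illustrative assumptions, not defined by this framework:

```python
class NegotiatedRateCC:
    """Client-side sketch: the negotiated rate seeds the sending rate,
    and fast feedback from the nearest node adjusts it."""

    def __init__(self, negotiated_rate_bps: float):
        self.rate = negotiated_rate_bps   # initial "congestion signal"
        self.paused = False

    def on_feedback(self, signal: str) -> None:
        if signal == "XOFF":
            self.paused = True            # turn sending off
        elif signal == "XON":
            self.paused = False           # turn sending back on
        elif signal == "SLOWDOWN":
            self.rate *= 0.8              # assumed multiplicative decrease
        elif signal == "OK":
            self.rate += 1_000_000        # assumed additive increase, 1 Mbit/s

    def sending_rate(self) -> float:
        return 0.0 if self.paused else self.rate
```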
Signaling from the client assists the network operator's traffic management and the corresponding resource planning and scheduling. The edge node may obtain information (topology, bottleneck link bandwidth, queue and buffer state) from a centralized controller, which can also exchange information with clients and servers. The network should provide resource scheduling and reservation at nodes along the path, differing according to the QoS policy. For example, the client and network can also negotiate a rate based on the quota of each job. A quota is expressed as a vector of resource quantities (bandwidth, buffer, queue, etc.) at a given priority for a time frame. The network can make dynamic bandwidth reservations over the different time frames defined by the quota.
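The quota vector and per-time-frame reservation described above can be sketched as follows; the structure names, the discrete-frame model, and the single-resource admission check are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Quota:
    """Vector of resource quantities at a given priority for one time frame."""
    bandwidth_bps: int
    buffer_bytes: int
    queue_slots: int
    priority: int

class TimeFrameScheduler:
    """Sketch of dynamic bandwidth reservation over discrete time frames:
    each frame tracks its spare capacity independently."""

    def __init__(self, frames: int, capacity_bps: int):
        self.free = [capacity_bps] * frames   # per-frame spare bandwidth

    def reserve(self, frame: int, quota: Quota) -> bool:
        # Only bandwidth is checked here; a fuller model would also check
        # buffer_bytes and queue_slots against per-frame limits.
        if self.free[frame] >= quota.bandwidth_bps:
            self.free[frame] -= quota.bandwidth_bps
            return True
        return False
```

A job rejected in a full frame can thus still be placed in a later frame, which is what makes the reservation "dynamic" across time.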
TBA.