Internet-Draft | Telcocloud Network Management | July 2025
Xie, et al. | Expires 8 January 2026
This document describes how the various data models that are produced in the IETF can be combined in the context of Telco Cloud service delivery.¶
Specifically, this document describes the communication between a Network Orchestrator and a Cloud Orchestrator for the realization of optimized Telco Cloud services that implement inter-DC reachability and connectivity services.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 8 January 2026.¶
Copyright (c) 2025 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The IETF has produced several YANG data models that are instrumental for automating the provisioning and delivery of connectivity services. An overview of these data models and a framework that describes how these various modules can be used together are described in [RFC8969].¶
This document adopts the rationale of [RFC8969], but with a focus on the network coordination with Telco Cloud services.¶
The document also identifies some gaps related to existing models.¶
The document outlines an architecture and the communication process between Network Orchestrators and Cloud Orchestrators.¶
This document assumes that the reader is familiar with the contents of [RFC6241], [RFC7950], and [RFC8309] as it uses terms from those RFCs.¶
This document uses the term "network model" as defined in Section 2.1 of [RFC8969].¶
In some implementations, the Cloud Orchestrator and the Network Orchestrator are only associated with their own respective services. That is, the Cloud Orchestrator is responsible for data center (DC) services, such as application, compute, and/or storage services within the DC, while the Network Orchestrator plans and deploys connections based on planned inter-DC traffic demands.¶
In certain scenarios, such as the cross-site connectivity service interfaces defined in [ETSI-GS-NFV-IFA-032], the NFV cloud platform can leverage network-exposed APIs to dynamically collect underlay WAN network status and establish or update inter-DC connections in support of cloud-based cross-data center (DC) scaling. This architecture option is shown in the figure below.¶
This NFV cloud architecture option implies that the Cloud Orchestrator operates with network awareness. The Network Orchestrator exposes interfaces that provide pre-planned bandwidth and dynamic connections. The Cloud Orchestrator can then dynamically control and manage the capacity and network connections of the inter-DC WANs. As suggested in [ETSI-GR-NFV-SOL-017], the candidate IETF interfaces between the Network Orchestrator and the Cloud Orchestrator are outlined in the table below, followed by an illustrative usage sketch:¶
Number | Network Function Requirements | IETF YANG Models
---|---|---
1 | Multi-site Connectivity | L3SM [RFC8299], L3NM [RFC9182], L2SM [RFC8466], L2NM [RFC9291], RFC 9543 Network Slice Service YANG Model [I-D.ietf-teas-ietf-network-slice-nbi-yang], AC Service YANG Model [I-D.ietf-opsawg-teas-attachment-circuit]
2 | Capacity Management | Service Attachment Point [RFC9408], TE Service Mapping [I-D.ietf-teas-te-service-mapping-yang], TE Topology [RFC8795], TE Tunnel [I-D.ietf-teas-yang-te], SR Policy [I-D.ietf-spring-sr-policy-yang]
3 | Fault Management | Alarm Management [RFC8632]
4 | Performance Management | Network and VPN Service PM [RFC9375]
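As a non-normative illustration of the Multi-site Connectivity entry, the following sketch shows the kind of request a Cloud Orchestrator could send to a Network Orchestrator over RESTCONF [RFC8040] to create an inter-DC L3VPN using the L3NM [RFC9182]. The host name, the "vpn-id", and the description are invented for this example and are not defined by this document:¶

   POST /restconf/data/ietf-l3vpn-ntw:l3vpn-ntw/vpn-services HTTP/1.1
   Host: network-orchestrator.example.com
   Content-Type: application/yang-data+json

   {
     "ietf-l3vpn-ntw:vpn-service": [
       {
         "vpn-id": "inter-dc-vpn-1",
         "vpn-description": "L3VPN interconnecting DC-1 and DC-2",
         "vpn-service-topology": "ietf-vpn-common:any-to-any"
       }
     ]
   }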
However, this NFV cloud architecture option cannot meet the emerging needs of telecom cloud applications, because it is difficult to plan bandwidth between large-scale edge DCs and enterprise sites. For example, AI-based video analysis and AI-driven knowledge reasoning will be deployed in edge data centers, which are characterized by a large number of deployments and significant variations in resource capabilities. Combined with the diversity of enterprise sites, the new telecom cloud needs fast, private, and reliable site-to-site, site-to-cloud, and cloud-to-cloud connections.¶
A potential alternative architecture option involves centralized scheduling of cloud and network resources to coordinate their integration, enabling rapid deployment of network services and applications such as SD-WAN, SASE, and other edge cloud services. This approach ensures agile allocation of cloud resources and optimization of WAN network resources to meet dynamic demand.¶
This proposed telco cloud reference architecture is an open framework that allows for Network Orchestrators and Cloud Orchestrators from multiple vendors. The goal is to enable standard data models or APIs to provide those services.¶
The diagram below illustrates a telco cloud network example connecting multiple data centers (DCs) and enterprise CEs. Multiple data centers are shown, each potentially hosting different services or applications. For instance, DC-1 is directly linked to gateways GW2A and GW2B, suggesting it may host critical applications or services that require high availability. Various DC gateways are deployed to manage traffic flow towards data centers or application instances. Two CE1 devices are connected to PE5A and PE5B, respectively, indicating the points where customer traffic enters the provider's network. Several Provider Edge devices (PE5A, PE5B, etc.) serve as entry and exit points for traffic moving between the customer and the provider's network. The Border Routers (BRs) facilitate the transfer of data across different parts of the network; they connect the access/aggregation layer to the core network.¶
To create services across DCs, such as optimized service placement, generic API calls are needed. A typical usage is for the Super Orchestrator to invoke the RESTful APIs of the Cloud Orchestrator and the Network Orchestrator to provision inter-DC service connections as well as applications.¶
When deploying cross-DC cloud services, it is assumed that the Super Orchestrator has access to the DC and network connectivity topology (e.g., TE Topology [RFC8795]), as well as centralized resource information for both the DCs and the network. Some standard network inventory interfaces are available. For example, the Service Attachment Points (SAPs) [RFC9408] or ACs [I-D.ietf-opsawg-teas-attachment-circuit] models can expose the AC/Bearer information of the PEs, which provides the private-line service provisioning resource information on the network side. However, there is no cloud DC service provisioning resource information, such as CPU, GPU, storage, and DC network information. The DC-aware TE topology model [I-D.llc-teas-dc-aware-topo-model] defines YANG models for this information. One API example is shown below.¶
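The following is a minimal, illustrative sketch of such an inventory query over RESTCONF [RFC8040], assuming the Network Orchestrator exposes SAP data through the network topology tree of [RFC8345] as augmented by [RFC9408]. The host name and identifiers are invented, and the SAP-specific per-node attributes are omitted here because their exact rendering depends on the server's implementation:¶

   GET /restconf/data/ietf-network:networks HTTP/1.1
   Host: network-orchestrator.example.com
   Accept: application/yang-data+json

   HTTP/1.1 200 OK
   Content-Type: application/yang-data+json

   {
     "ietf-network:networks": {
       "network": [
         {
           "network-id": "provider-wan",
           "node": [
             { "node-id": "PE5A" },
             { "node-id": "PE5B" }
           ]
         }
       ]
     }
   }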
The flowchart below gives an example in which hospital applications are deployed while ensuring efficient use of network connections for optimized data flow.¶
Here is a step-by-step breakdown of the flowchart with detailed explanations for each step:¶
Step 1: Requirement Analysis: In this first step, the Super Orchestrator gathers and analyzes the requirements for branch data transfer. The key requirements are that the bandwidth must be at least 100 Mbps, the data must be transmitted over a private connection, and AI analysis must be supported. The Super Orchestrator also determines the optimal allocation of resources for data transfer: data will flow from the branch office to the center data center (DC), an edge DC is selected as an intermediate hop, and a dedicated line with 100 Mbps of bandwidth and redundant links to the center and edge DCs is needed. Underlay Network Controllers may expose to Network Orchestrators a set of network data models, such as the AC, L3SM [RFC8299], L3NM [RFC9182], L2SM [RFC8466], L2NM [RFC9291], RFC 9543 NSS YANG Model [I-D.ietf-teas-ietf-network-slice-nbi-yang], Service Attachment Points (SAPs) [RFC9408], TE Service Mapping [I-D.ietf-teas-te-service-mapping-yang], TE Tunnel [I-D.ietf-teas-yang-te], or SR Policy [I-D.ietf-spring-sr-policy-yang] models. The Network Orchestrator can use these models to set up connections between the Provider Edge devices, as well as the customer-facing ACs between CEs and PEs, and between DC-GWs and PEs, as sketched below.¶
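As a non-normative illustration of how the 100 Mbps requirement of Step 1 could be encoded, the following L3SM [RFC8299] fragment sets the input and output bandwidth (expressed in bit/s) on a site network access; the access identifier is invented for this example:¶

   {
     "ietf-l3vpn-svc:site-network-access": [
       {
         "site-network-access-id": "branch-access-1",
         "service": {
           "svc-input-bandwidth": "100000000",
           "svc-output-bandwidth": "100000000"
         }
       }
     ]
   }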
Step 2: Application Instance Deployment and Parallel Network Connectivity: This step involves deploying the AI applications at the center DC and the edge DC. Upon request from the Super Orchestrator, the Cloud Orchestrator allocates resources, including GPU clusters and storage, while in parallel the Network Orchestrator configures the PEs.¶
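There is no standard IETF model for the cloud-side request in Step 2. The following hypothetical REST call, in which the endpoint and all field names are invented purely for illustration, sketches the kind of information the Super Orchestrator would need to convey to the Cloud Orchestrator:¶

   POST /api/v1/deployments HTTP/1.1
   Host: cloud-orchestrator.example.com
   Content-Type: application/json

   {
     "application": "ai-video-analysis",
     "placement": ["center-dc", "edge-dc-1"],
     "resources": {
       "gpu": 4,
       "vcpu": 32,
       "storage-gb": 1024
     }
   }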
Step 3: Validation and Monitoring: This step ensures that the service meets performance and reliability expectations. Through the open interfaces, the Super Orchestrator can monitor the status of cloud resources and WAN connections. The open interfaces could be the Network and VPN Service PM [RFC9375] or Alarm Management [RFC8632] models, or new service assurance models.¶
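For instance, assuming the Network Orchestrator implements the alarm module of [RFC8632] over RESTCONF [RFC8040], the Super Orchestrator could retrieve the current alarm list as sketched below (the host name is illustrative); in a production deployment it would more likely subscribe to alarm notifications rather than poll:¶

   GET /restconf/data/ietf-alarms:alarms/alarm-list HTTP/1.1
   Host: network-orchestrator.example.com
   Accept: application/yang-data+json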
The steps described above assume that the Super Orchestrator can access both network and cloud resources to perform resource analysis and allocation.¶
An example of service creation flow is as follows:¶
+------------------+        +---------------+        +----------------+
|Super Orchestrator|        | Cloud Orchestr|        |Network Orchestr|
+------------------+        +---------------+        +----------------+
         |                          |                         |
         | 1.1 Create a cloud service                         |
         |------------------------->|                         |
         |                          |                         |
         |           +--------------------------+             |
         |           |2.Deploy the cloud service|             |
         |           +--------------------------+             |
         |                          |                         |
         | 1.2 Create a network service                       |
         |--------------------------------------------------->|
         |                          |                         |
         |                          | +----------------------------+
         |                          | |2.Deploy the network service|
         |                          | +----------------------------+
         |                          |                         |
         | 3.Create cloud service response                    |
         |<-------------------------|                         |
         |                          |                         |
         | 3.Create network service response                  |
         |<---------------------------------------------------|
         |                          |                         |
         | 4.Subscribe for cloud performance metric           |
         |------------------------->|                         |
         |                          |                         |
         | 5.Subscribe for network performance metric         |
         |--------------------------------------------------->|
         |                          |                         |
+------------------------+          |                         |
|5.Continuous monitoring |          |                         |
+------------------------+          |                         |
         |                          |                         |
A scaling example follows the same flow; for instance, additional bandwidth can be added to an existing connection, as sketched below.¶
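A minimal sketch of such a bandwidth scaling request, reusing the L3SM [RFC8299] fragment from Step 1 and assuming the same hypothetical site and access identifiers, could look as follows:¶

   PATCH /restconf/data/ietf-l3vpn-svc:l3vpn-svc/sites/\
     site=branch-1/site-network-accesses/\
     site-network-access=branch-access-1/service HTTP/1.1
   Host: network-orchestrator.example.com
   Content-Type: application/yang-data+json

   {
     "ietf-l3vpn-svc:service": {
       "svc-input-bandwidth": "200000000",
       "svc-output-bandwidth": "200000000"
     }
   }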
None.¶
The authors wish to thank xxx and many others for their helpful comments and suggestions.¶