Prosecution Insights
Last updated: April 19, 2026
Application No. 18/891,876

MANAGEMENT OF REDUNDANT LINKS

Non-Final OA — §103, Double Patenting
Filed: Sep 20, 2024
Examiner: ALSHACK, OSMAN M
Art Unit: 2112
Tech Center: 2100 — Computer Architecture & Software
Assignee: BOOST SUBSCRIBERCO L.L.C.
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (445 granted / 517 resolved) — above average, +31.1% vs TC avg
Interview Lift: +14.4% (moderate) — among resolved cases with interview
Avg Prosecution: 2y 6m (typical timeline); 33 currently pending
Career History: 550 total applications across all art units
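The headline allow rate is simple division of the figures above; a quick arithmetic check:

```python
# Quick check of the career figures above (445 granted out of 517 resolved).
granted, resolved = 445, 517
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 86.1%, shown rounded as 86%
```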

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 7.3% (-32.7% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 517 resolved cases
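The "vs TC avg" figures are plain differences; as a consistency check (assuming that reading), subtracting each delta from the examiner's rate recovers the implied Tech Center average, which works out to 40.0% for every statute listed:

```python
# Recompute the implied Tech Center average from each statute's rate and
# its "vs TC avg" delta, using the figures shown above.
examiner_rate = {"§101": 13.0, "§103": 44.7, "§102": 7.3, "§112": 22.5}
delta_vs_tc = {"§101": -27.0, "§103": 4.7, "§102": -32.7, "§112": -17.5}
for statute in examiner_rate:
    implied_tc_avg = examiner_rate[statute] - delta_vs_tc[statute]
    print(f"{statute}: implied TC avg = {implied_tc_avg:.1f}%")
```

All four rows imply the same 40.0% baseline, consistent with a single Tech Center average line.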

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

2. Claims 1-20 are presented for examination.

Abstract

3. The abstract of the disclosure is acceptable for examination purposes.

Oath/Declaration

4. The Oath complies with all the requirements set forth in MPEP 602 and therefore is accepted.

Drawings

5. The drawings received on 09/20/2024 are acceptable for examination purposes.

Information Disclosure Statement

6. The references listed in the information disclosure statement (IDS) submitted on 09/20/2024 have been considered. The submission is in compliance with the provisions of 37 CFR 1.97. Form PTO-1449 is signed and attached hereto.

Specification

7. The specification is objected to because paragraph [0002] states that the application claims the benefit of and priority to U.S. Application No. 63/331,643, filed Apr. 15, 2022, and to U.S. application Ser. No. 17/974,977, filed Oct. 27, 2022, but does not provide the current status of application 17/974,977 (i.e., now U.S. Patent No. 12,126,455 B2).

Claim Objections

8. Claims 1, 9, and 17 are objected to because of the following informalities: the claims recite "in response to detecting that a failure rate of the first data center layer exceeds a threshold failure rate: remove the first redundant link---." The claims are silent regarding the feature of "detecting a failure rate." In other words, no failure rate is first detected from which to determine whether the failure rate of the first data center layer exceeds the threshold failure rate. Appropriate clarification is required.

Double Patenting

9.
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection.
A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

10. Claims 1, 2, and 9 are non-provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, and 13 of U.S. Patent No. 12,126,455 B2 (reference patent). Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1, 2, and 9 of the present application are substantially equivalent to claims 1, 3, and 13 of the reference patent, as shown in the chart and explanation below.

Instant Application No. 18/891,876 — U.S. Patent No. 12,126,455 B2

Claim 1 (instant application):
A system, comprising: at least one memory configured to store computer instructions; and one or more processors configured to execute the computer instructions to: establish a data center hierarchy of a cellular network, wherein the data center hierarchy includes a first data center layer, a second data center layer, and a third data center layer; establish a first redundant link between a first node in the first data center layer and a third node in the third data center layer; and in response to detecting that a failure rate of the first data center layer exceeds a threshold failure rate: remove the first redundant link; and add a second redundant link between the third node and a second node in the second data center layer.

Claim 1 (reference patent): A system, comprising: one or more processors configured to: establish a data center hierarchy of a cellular network, wherein the data center hierarchy includes a first data center layer arranged between a second data center layer and a third data center layer; acquire a first failure rate corresponding with a first set of machines residing within the first data center layer of the data center hierarchy, the first data center layer includes a first router having a first redundant link between the first router and a third router residing within the third data center layer of the data center hierarchy; detect that the first failure rate has exceeded a threshold failure rate; identify a second set of machines residing within the second data center layer of the data center hierarchy in response to detection that the first failure rate has exceeded the threshold failure rate, the second data center layer includes a second router; remove the first redundant link between the third router residing within the third data center layer and the first router in response to detection that the first failure rate has exceeded the threshold failure rate; and add a second redundant link between the third router residing within the third data center layer and the second router residing within the second data center layer.

Claim 2 (instant application): The system of claim 1, wherein the one or more processors establish the data center hierarchy of the cellular network by being configured to further execute the computer instructions to: arrange the first data center layer between the second data center layer and the third data center layer.

Claim 3 (reference patent): The system of claim 1, wherein: the first data center layer is arranged between the third data center layer and the second data center layer.

Claim 9 (instant application): A method, comprising: establishing a data center hierarchy of a cellular network, wherein the data center hierarchy includes a first data center layer, a second data center layer, and a third data center layer; establishing a first redundant link between a first node in the first data center layer and a third node in the third data center layer; and in response to detecting that a failure rate of the first data center layer exceeds a threshold failure rate: removing the first redundant link between the first node in the first data center layer and the third node in the third data center layer; and adding a second redundant link between the third node in the third data center layer and a second node in the second data center layer.

Claim 13 (reference patent): A method, comprising: determining a first failure rate corresponding with a first set of machines residing within a first data center layer that includes a first router, the first data center layer corresponds with a local data center of a data center hierarchy of a cellular network; detecting that the first failure rate has exceeded a threshold failure rate; identifying a second set of machines residing within a second data center layer based on the threshold failure rate in response to detection that the first failure rate has exceeded the threshold failure rate, the second data center layer corresponds with an edge data center and includes a second router; removing a first redundant link between a third router residing within a third data center layer and the first router in response to detection that the first failure rate has exceeded the threshold failure rate, the third data center layer corresponds with a cell site; and adding a second redundant link between the third router residing within the third data center layer and the second router.

From the chart above, claims 1, 3, and 13 of the reference patent contain every limitation of claims 1, 2, and 9 of the instant application except the feature of "at least one memory configured to store computer instructions" in claim 1. However, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the instant application with the teachings of the reference patent by including at least one memory to store computer instructions. This modification would have been obvious because one of ordinary skill in the art would have recognized that at least one memory to store computer instructions would have improved communication system performance.
Thus, claims 1, 2, and 9 of the instant application are not patentably distinct over the reference patent because both contain substantially the same limitations performing the same function. This is a non-provisional nonstatutory double patenting rejection because the patentably indistinct claims have in fact been patented.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

11. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Iglesias et al. (US 11,095,543 B1), hereinafter "Iglesias," in view of Latif, Ammar et al. ("Telco Meets AWS Cloud: Deploying DISH's 5G Network in AWS Cloud," AWS for Industries, 27 Feb. 2022), hereinafter "Latif."
As per claim 1: Iglesias substantially teaches or discloses a system, comprising: at least one memory configured to store computer instructions; and one or more processors configured to execute the computer instructions to (see column 21, lines 19-23, wherein memory 1230 may include any type of dynamic storage device that may store information and instructions for execution by processor 1220, and/or any type of non-volatile storage device that may store information for use by processor 1220): establish a data center hierarchy of a cellular network, wherein the data center hierarchy includes a first data center layer, a second data center layer, and a third data center layer (see column 2, lines 58-61, wherein classification models 103 may be used to classify VNFs or sets of VNFs at one or more data centers, in order to determine failover conditions associated with the VNFs, and Fig. 1); and in response to detecting that a failure rate of the first data center layer exceeds a threshold failure rate (see column 2, lines 51-56, wherein RAS 101 may identify failover conditions (e.g., based on KPIs associated with one or more VNFs) and effect a failover of a set of VNFs (e.g., including the one or more VNFs and/or other VNFs), in order to maintain high availability and performance of the VNFs; column 9, lines 22-29; and column 10, lines 49-64): remove the first redundant link; and add a second redundant link between the third node and a second node in the second data center layer (see column 9, lines 55-63, wherein RAS 101 may further deactivate (at 310) VNF_1 at data center 203-1, and propagate the failover to one or more network elements; for example, RAS 101 may instruct a controller, hypervisor, etc. of data center 203-1 to de-provision, deactivate, etc. the previously active instance 205-1 of VNF_1; RAS 101 may, for example, cause one or more routing tables, hostnames, or the like associated with data center 203-1 to be updated to reflect the failed over instance 305 of VNF_1, and Figs. 2-6 [Examiner notes: since the VNF moved to a second data center layer, the first redundant link at the router will be removed, obviously in the routing table, since the changes have been propagated to the router as well]).

Iglesias does not explicitly teach establish a first redundant link between a first node in the first data center layer and a third node in the third data center layer. However, Latif, in the same field of endeavor, teaches establish a first redundant link between a first node in the first data center layer and a third node in the third data center layer (see page 2, "In telco-grade networks, resiliency is at the heart of design. It's vital to maintain the targeted service-level agreements (SLAs), comply with regulatory requirements and support seamless failover of services," and page 5, "AWS Direct Connect is leveraged to provide connectivity between DISH's RAN network and the AWS Cloud. Each Local Zone is connected over 2*100G Direct Connect links for redundancy").

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the system of Iglesias with the teachings of Latif by establishing a first redundant link between a first node in the first data center layer and a third node in the third data center layer. This modification would have been obvious because one of ordinary skill in the art would have recognized that establishing such a redundant link would have improved system performance.
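For orientation only, the link-management behavior that independent claims 1, 9, and 17 recite — on detecting that the first layer's failure rate exceeds a threshold, remove the redundant link into the first layer and add one into the second layer — can be sketched as follows. This is a minimal illustration; the names, values, and set-of-edges representation are hypothetical and are not drawn from the application or either reference:

```python
# Minimal sketch of the claimed link management (hypothetical names/values).

def manage_redundant_link(links, failure_rate, threshold,
                          first_node, second_node, third_node):
    """Move the redundant link off the first layer when it degrades.

    `links` is a set of frozenset edges; the first redundant link runs
    between first_node (first layer) and third_node (third layer).
    """
    if failure_rate > threshold:
        # Remove the first redundant link (first layer <-> third layer)...
        links.discard(frozenset({first_node, third_node}))
        # ...and add a second redundant link (third layer <-> second layer).
        links.add(frozenset({third_node, second_node}))
    return links

# Layer roles follow the reference patent's claim 13: first layer = local
# data center, second layer = edge data center, third layer = cell site.
links = {frozenset({"ldc-rtr", "cell-rtr"})}
manage_redundant_link(links, failure_rate=0.12, threshold=0.05,
                      first_node="ldc-rtr", second_node="edge-rtr",
                      third_node="cell-rtr")
print(frozenset({"cell-rtr", "edge-rtr"}) in links)  # True after failover
```

Frozensets model the links as undirected edges, so removal matches regardless of which endpoint was listed first.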
As per claim 2: Iglesias teaches wherein the one or more processors establish the data center hierarchy of the cellular network by being configured to further execute the computer instructions to: arrange the first data center layer between the second data center layer and the third data center layer (see column 2, lines 58-61, wherein classification models 103 may be used to classify VNFs or sets of VNFs at one or more data centers, in order to determine failover conditions associated with the VNFs, and Figs. 2-6).

As per claim 3: Iglesias teaches wherein the one or more processors establish the data center hierarchy of the cellular network by being configured to further execute the computer instructions to: define a first plurality of nodes in the first data center layer to be logically positioned between a second plurality of nodes in the second data center layer and a third plurality of nodes in the third data center layer (see column 7, lines 6-11, wherein data centers 203-2 and 203-3 are illustrated in FIG. 2 as blank boxes, to signify that VNFs 205-1 through 205-4 are implemented by data center 203-1 in this scenario; in practice, data centers 203-2 and/or 203-3 may implement one or more other VNFs, and/or other instances of VNFs 205-1, 205-2, 205-3, and/or 205-4, and Figs. 2-6).
As per claim 4: Iglesias teaches wherein the one or more processors establish the data center hierarchy of the cellular network by being configured to further execute the computer instructions to: establish the third data center layer within a cell site of the cellular network; establish the first data center layer within a local data center of the cellular network; and establish the second data center layer within a passthrough edge data center of the cellular network (see column 4, lines 25-46, wherein data center configuration information 109 may indicate configurations of one or more data centers at which particular VNFs (e.g., VNFs associated with KPI source information 107) are implemented; the configuration information may include, for example, types and/or quantities of VNFs installed at particular data centers; for example, first data center configuration information 109 for a first data center may indicate that an Access and Mobility Management Function ("AMF"), User Plane Function ("UPF"), Session Management Function ("SMF"), and Unified Data Management function ("UDM") associated with a first network slice are implemented at the first data center; further, second data center configuration information 109 for a different second data center may indicate that an AMF, UPF, SMF, and UDM associated with a different second network slice are implemented at the second data center; further, third data center configuration information 109 for a different third data center may indicate that one or more elements associated with an Internet Protocol ("IP") Multimedia Subsystem ("IMS") core, such as a Proxy CSCF ("P-CSCF"), Serving CSCF ("S-CSCF"), I-CSCF, Home Subscriber Server ("HSS"), and TAS are implemented at the third data center).
As per claim 5: Iglesias teaches wherein the one or more processors remove the first redundant link between the first node in the first data center layer and the third node in the third data center layer prior to adding the second redundant link between the third node in the third data center layer and the second node in the second data center layer (see column 9, lines 55-63, wherein RAS 101 may further deactivate (at 310) VNF_1 at data center 203-1, and propagate the failover to one or more network elements; for example, RAS 101 may instruct a controller, hypervisor, etc. of data center 203-1 to de-provision, deactivate, etc. the previously active instance 205-1 of VNF_1; RAS 101 may, for example, cause one or more routing tables, hostnames, or the like associated with data center 203-1 to be updated to reflect the failed over instance 305 of VNF_1, and Figs. 2-6 [Examiner notes: since the VNF moved to a second data center layer, the first redundant link at the router will be removed, obviously in the routing table, since the changes have been propagated to the router as well]).

As per claim 6: Iglesias teaches wherein the one or more processors are configured to further execute the computer instructions to: identify the second node as an end point for the second redundant link in response to detecting that a second failure rate of the second data center layer does not exceed the threshold failure rate (see column 10, lines 65-67, and column 11, lines 1-5, wherein although not shown in this figure, RAS 101 may determine that such maximum threshold latency may be satisfied (e.g., not exceeded) if VNF_1 is implemented at one data center 203 and VNF_2 and VNF_3 are implemented at another data center 203; in such an occurrence, RAS 101 may select these two different data centers 203 to implement VNF_1, VNF_2, and VNF_3 in the manner outlined above).
As per claim 7: Iglesias teaches wherein the one or more processors are configured to further execute the computer instructions to: determine a number of nodes in the first data center layer that have failed over a period of time (see column 2, lines 49-56, wherein RAS 101 may receive, generate, and/or refine (at 102) one or more sets of models, and correlations between the models, based on which RAS 101 may identify failover conditions (e.g., based on KPIs associated with one or more VNFs) and effect a failover of a set of VNFs (e.g., including the one or more VNFs and/or other VNFs), in order to maintain high availability and performance of the VNFs); and detect that the failure rate of the first data center layer exceeds the threshold failure rate when the number of nodes in the first data center layer that have failed exceeds a threshold number (see column 10, lines 49-59, wherein failover dependencies/constraint information 115, associated with remediation model 113, may indicate that VNF_1, VNF_2, and VNF_3 should be implemented by the same data center 203; additionally, or alternatively, inter-function interfaces/SLA information 111 may indicate that SLAs associated with VNF_1, VNF_2, and VNF_3 indicate that these VNFs 205 should be implemented by the same data center 203; for example, inter-function interfaces/SLA information 111 may indicate a maximum threshold latency of communications between VNF_1, VNF_2, and VNF_3; RAS 101 may determine or receive performance metrics information that indicates that a latency of communications between different data centers 203 exceeds the maximum threshold latency, based on which RAS 101 may determine that VNF_1, VNF_2, and VNF_3 should be implemented by the same data center).
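The detection condition recited in claims 7 and 15 — count the nodes in the first layer that have failed over a period of time and trip the threshold when that count is exceeded — amounts to a windowed failure counter. A minimal sketch, with hypothetical names and values not taken from the application:

```python
# Minimal sketch of the failure detection recited in claims 7 and 15
# (hypothetical names and values).

def layer_failure_exceeded(failure_times, now, window, threshold_count):
    """True when the number of node failures within the last `window`
    seconds exceeds `threshold_count`.

    `failure_times` maps node id -> time of its most recent failure.
    """
    recent_failures = sum(1 for t in failure_times.values()
                          if now - t <= window)
    return recent_failures > threshold_count

# Three of four nodes failed within the last 60 s against a threshold of 2.
fails = {"node-1": 100.0, "node-2": 130.0, "node-3": 150.0, "node-4": 10.0}
print(layer_failure_exceeded(fails, now=155.0, window=60.0,
                             threshold_count=2))  # True
```

Claim 8's non-responsive-node variant is the same shape with "failed within the window" replaced by a responsiveness check.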
As per claim 8: Iglesias teaches wherein the one or more processors are configured to further execute the computer instructions to: determine a number of nodes in the first data center layer that are non-responsive (see column 2, lines 38-41, wherein the failover may also include de-instantiating, deprovisioning, deactivating, etc. the VNF from the data center that originally implemented, hosted, executed, etc. the VNF (e.g., immediately prior to the failover)); and detect that the failure rate of the first data center layer exceeds the threshold failure rate when the number of nodes in the first data center layer that are non-responsive exceeds a threshold number (see column 2, lines 58-67, and column 3, lines 1-3, wherein classification models 103 may be used to classify VNFs or sets of VNFs at one or more data centers, in order to determine failover conditions associated with the VNFs; as referred to herein, a "failover condition" may refer to a condition, set of conditions, criteria, or the like, that indicate that a VNF should be failed over from one data center to another; such failover conditions may include, for example, threshold values associated with one or more particular KPIs or metrics, such as a maximum latency threshold, a minimum throughput threshold, a maximum call failure rate threshold, a minimum call success rate threshold, and/or other suitable types of values, metrics, KPIs, etc.).

As per claim 9: Iglesias substantially teaches or discloses a method, comprising: establishing a data center hierarchy of a cellular network, wherein the data center hierarchy includes a first data center layer, a second data center layer, and a third data center layer (see column 2, lines 58-61, wherein classification models 103 may be used to classify VNFs or sets of VNFs at one or more data centers, in order to determine failover conditions associated with the VNFs, and Fig. 1); and in response to detecting that a failure rate of the first data center layer exceeds a threshold failure rate (see column 2, lines 51-56, wherein RAS 101 may identify failover conditions (e.g., based on KPIs associated with one or more VNFs) and effect a failover of a set of VNFs (e.g., including the one or more VNFs and/or other VNFs), in order to maintain high availability and performance of the VNFs; column 9, lines 22-29; and column 10, lines 49-64): removing the first redundant link between the first node in the first data center layer and the third node in the third data center layer; and adding a second redundant link between the third node in the third data center layer and a second node in the second data center layer (see column 9, lines 55-63, wherein RAS 101 may further deactivate (at 310) VNF_1 at data center 203-1, and propagate the failover to one or more network elements; for example, RAS 101 may instruct a controller, hypervisor, etc. of data center 203-1 to de-provision, deactivate, etc. the previously active instance 205-1 of VNF_1; RAS 101 may, for example, cause one or more routing tables, hostnames, or the like associated with data center 203-1 to be updated to reflect the failed over instance 305 of VNF_1, and Figs. 2-6 [Examiner notes: since the VNF moved to a second data center layer, the first redundant link at the router will be removed, obviously in the routing table, since the changes have been propagated to the router as well]).

Iglesias does not explicitly teach establishing a first redundant link between a first node in the first data center layer and a third node in the third data center layer. However, Latif, in the same field of endeavor, teaches establishing a first redundant link between a first node in the first data center layer and a third node in the third data center layer (see page 2, In telco-grade networks, resiliency is at the heart of design.
It’s vital to maintain the targeted service-level agreements (SLAs), comply with regulatory requirements and support seamless failover of services, and page 5, AWS Direct Connect is leveraged to provide connectivity between DISH’s RAN network and the AWS Cloud. Each Local Zone is connected over 2*100G Direct Connect links for redundancy).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the system of Iglesias with the teachings of Latif by establishing a first redundant link between a first node in the first data center layer and a third node in the third data center layer. This modification would have been obvious because one of ordinary skill in the art would have recognized that establishing such a redundant link would have improved system performance.

As per claim 10: Iglesias teaches wherein establishing the data center hierarchy of the cellular network comprises: assigning the first data center layer as being arranged between the second data center layer and the third data center layer (see column 2, lines 58-61, wherein classification models 103 may be used to classify VNFs or sets of VNFs at one or more data centers, in order to determine failover conditions associated with the VNFs, and Figs. 2-6).

As per claim 11: Iglesias teaches wherein establishing the data center hierarchy of the cellular network comprises: defining a first plurality of nodes in the first data center layer to be logically positioned between a second plurality of nodes in the second data center layer and a third plurality of nodes in the third data center layer (see column 7, lines 6-11, wherein data centers 203-2 and 203-3 are illustrated in FIG. 2 as blank boxes, to signify that VNFs 205-1 through 205-4 are implemented by data center 203-1 in this scenario; in practice, data centers 203-2 and/or 203-3 may implement one or more other VNFs, and/or other instances of VNFs 205-1, 205-2, 205-3, and/or 205-4, and Figs. 2-6).

As per claim 12: Iglesias teaches wherein establishing the data center hierarchy of the cellular network comprises: running the third data center layer within a cell site of the cellular network; running the first data center layer within a local data center of the cellular network; and running the second data center layer within a passthrough edge data center of the cellular network (see column 4, lines 25-46, wherein data center configuration information 109 may indicate configurations of one or more data centers at which particular VNFs (e.g., VNFs associated with KPI source information 107) are implemented; the configuration information may include, for example, types and/or quantities of VNFs installed at particular data centers; for example, first data center configuration information 109 for a first data center may indicate that an Access and Mobility Management Function ("AMF"), User Plane Function ("UPF"), Session Management Function ("SMF"), and Unified Data Management function ("UDM") associated with a first network slice are implemented at the first data center; further, second data center configuration information 109 for a different second data center may indicate that an AMF, UPF, SMF, and UDM associated with a different second network slice are implemented at the second data center; further, third data center configuration information 109 for a different third data center may indicate that one or more elements associated with an Internet Protocol ("IP") Multimedia Subsystem ("IMS") core, such as a Proxy CSCF ("P-CSCF"), Serving CSCF ("S-CSCF"), I-CSCF, Home Subscriber Server ("HSS"), and TAS are implemented at the third data center).
As per claim 13: Iglesias teaches wherein removing the first redundant link occurs prior to adding the second redundant link (see column 9, lines 56-64, wherein RAS 101 may instruct a controller, hypervisor, etc. of data center 203-1 to de-provision, deactivate, etc. the previously active instance 205-1 of VNF_1; RAS 101 may, for example, cause one or more routing tables, hostnames, or the like associated with data center 203-1 to be updated to reflect the failed over instance 305 of VNF_1).

As per claim 14: Iglesias teaches selecting the second node as an end point for the second redundant link in response to detecting that a second failure rate of the second data center layer does not exceed the threshold failure rate (see column 10, lines 65-67, and column 11, lines 1-5, wherein although not shown in this figure, RAS 101 may determine that such maximum threshold latency may be satisfied (e.g., not exceeded) if VNF_1 is implemented at one data center 203 and VNF_2 and VNF_3 are implemented at another data center 203; in such an occurrence, RAS 101 may select these two different data centers 203 to implement VNF_1, VNF_2, and VNF_3 in the manner outlined above).
As per claim 15: Iglesias teaches that determining a number of nodes in the first data center layer that have failed over a period of time (see column 2, lines 49-56, herein RAS 101 may receive, generate, and/or refine (at 102) one or more sets of models, and correlations between the models, based on which RAS 101 may identify failover conditions (e.g., based on KPIs associated with one or more VNFs) and effect a failover of a set of VNFs (e.g., including the one or more VNFs and/or other VNFs), in order to maintain high availability and performance of the VNFs); and detecting that the failure rate of the first data center layer exceeds the threshold failure rate when the number of nodes in the first data center layer that have failed exceeds a threshold number (see column 10, lines 49-59, herein failover dependencies/constraint information 115, associated with remediation model 113, may indicate that VNF_1, VNF_2, and VNF_3 should be implemented by the same data center 203. Additionally, or alternatively, inter-function interfaces/SLA information 111 may indicate that SLAs associated with VNF_1, VNF_2, and VNF_3 indicate that these VNFs 205 should be implemented by the same data center 203. For example, inter-function interfaces/SLA information 111 may indicate a maximum threshold latency of communications between VNF_1, VNF_2, and VNF_3. RAS 101 may determine or receive performance metrics information that indicates that a latency of communications between different data centers 203 exceeds the maximum threshold latency, based on which RAS 101 may determine that VNF_1, VNF_2, and VNF_3 should be implemented by the same data center). As per claim 16: Iglesias teaches that determining a number of nodes in the first data center layer that are non-responsive (see column 2, lines 38-41, herein the failover may also include de-instantiating, deprovisioning, deactivating, etc. the VNF from the data center that originally implemented, hosted, executed, etc.
the VNF (e.g., immediately prior to the failover)); and detecting that the failure rate of the first data center layer exceeds the threshold failure rate when the number of nodes in the first data center layer that are non-responsive exceeds a threshold number (see column 2, lines 58-67 & column 3, lines 1-3, herein classification models 103 may be used to classify VNFs or sets of VNFs at one or more data centers, in order to determine failover conditions associated with the VNFs. As referred to herein, a “failover condition” may refer to a condition, set of conditions, criteria, or the like, that indicate that a VNF should be failed over from one data center to another. Such failover conditions may include, for example, threshold values associated with one or more particular KPIs or metrics, such as a maximum latency threshold, a minimum throughput threshold, a maximum call failure rate threshold, a minimum call success rate threshold, and/or other suitable types of values, metrics, KPIs, etc). As per claim 17: Iglesias substantially teaches or discloses a cellular network, comprising: a data center hierarchy, including: a first data center layer having a first plurality of nodes; a second data center layer having a second plurality of nodes; and a third data center layer having a third plurality of nodes (see column 2, lines 58-61, herein classification models 103 may be used to classify VNFs or sets of VNFs at one or more data centers, in order to determine failover conditions associated with the VNFs, and Fig. 1), wherein the first data center layer is logically arranged between the second data center layer and the third data center layer (see Figs.
2-6); and a network core configured to, in response to detecting that a failure rate of the first data center layer exceeds a threshold failure rate (see column 2, lines 51-56, herein RAS 101 may identify failover conditions (e.g., based on KPIs associated with one or more VNFs) and effect a failover of a set of VNFs (e.g., including the one or more VNFs and/or other VNFs), in order to maintain high availability and performance of the VNFs; column 9, lines 22-29; and column 10, lines 49-64): remove the first redundant link; and add a second redundant link between the third node and a second node of the second plurality of nodes in the second data center layer (see column 9, lines 55-63, herein RAS 101 may further deactivate (at 310) VNF_1 at data center 203-1, and propagate the failover to one or more network elements. For example, RAS 101 may instruct a controller, hypervisor, etc. of data center 203-1 to de-provision, deactivate, etc. the previously active instance 205-1 of VNF_1. RAS 101 may, for example, cause one or more routing tables, hostnames, or the like associated with data center 203-1 to be updated to reflect the failed over instance 305 of VNF_1, and Figs. 2-6 [Examiner notes: since the VNF moved to the second data center layer, the first redundant link at the router will be removed, obviously in the routing table, since the changes have been propagated to the router as well]). Iglesias does not explicitly teach establishing a first redundant link between a first node of the first plurality of nodes in the first data center layer and a third node of the third plurality of nodes in the third data center layer. However, Latif, in the same field of endeavor, teaches establishing a first redundant link between a first node of the first plurality of nodes in the first data center layer and a third node of the third plurality of nodes in the third data center layer (see page 2, In telco-grade networks, resiliency is at the heart of design.
It’s vital to maintain the targeted service-level agreements (SLAs), comply with regulatory requirements and support seamless failover of services, and page 5, AWS Direct Connect is leveraged to provide connectivity between DISH’s RAN network and the AWS Cloud. Each Local Zone is connected over 2*100G Direct Connect links for redundancy). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the invention, to modify the system of Iglesias with the teachings of Latif by establishing a first redundant link between a first node of the first plurality of nodes in the first data center layer and a third node of the third plurality of nodes in the third data center layer. This modification would have been obvious because one of ordinary skill in the art would have recognized that establishing such a redundant link would have improved system performance. As per claim 18: Iglesias teaches that the third data center layer is within a cell site; the first data center layer is within a local data center; and the second data center layer is within a passthrough edge data center (see column 4, lines 25-46, herein Data center configuration information 109 may indicate configurations of one or more data centers at which particular VNFs (e.g., VNFs associated with KPI source information 107) are implemented. The configuration information may include, for example, types and/or quantities of VNFs installed at particular data centers.
For example, first data center configuration information 109 for a first data center may indicate that an Access and Mobility Management Function (“AMF”), User Plane Function (“UPF”), Session Management Function (“SMF”), and Unified Data Management function (“UDM”) associated with a first network slice are implemented at the first data center. Further, second data center configuration information 109 for a different second data center may indicate that an AMF, UPF, SMF, and UDM associated with a different second network slice are implemented at the second data center. Further, third data center configuration information 109 for a different third data center may indicate that one or more elements associated with an Internet Protocol (“IP”) Multimedia Subsystem (“IMS”) core, such as a Proxy CSCF (“P-CSCF”), Serving CSCF (“S-CSCF”), I-CSCF, Home Subscriber Server (“HSS”), and TAS are implemented at the third data center). As per claim 19: Iglesias teaches that wherein the network core is further configured to: determine a number of nodes of the first plurality of nodes in the first data center layer that have failed over a period of time (see column 2, lines 49-56, herein RAS 101 may receive, generate, and/or refine (at 102) one or more sets of models, and correlations between the models, based on which RAS 101 may identify failover conditions (e.g., based on KPIs associated with one or more VNFs) and effect a failover of a set of VNFs (e.g., including the one or more VNFs and/or other VNFs), in order to maintain high availability and performance of the VNFs); and detect that the failure rate of the first data center layer exceeds the threshold failure rate when the number of nodes in the first data center layer that have failed exceeds a threshold number (see column 10, lines 49-59, herein failover dependencies/constraint information 115, associated with remediation model 113, may indicate that VNF_1, VNF_2, and VNF_3 should be implemented by the same data center 203.
Additionally, or alternatively, inter-function interfaces/SLA information 111 may indicate that SLAs associated with VNF_1, VNF_2, and VNF_3 indicate that these VNFs 205 should be implemented by the same data center 203. For example, inter-function interfaces/SLA information 111 may indicate a maximum threshold latency of communications between VNF_1, VNF_2, and VNF_3. RAS 101 may determine or receive performance metrics information that indicates that a latency of communications between different data centers 203 exceeds the maximum threshold latency, based on which RAS 101 may determine that VNF_1, VNF_2, and VNF_3 should be implemented by the same data center). As per claim 20: Iglesias teaches that wherein the network core is further configured to: determine a number of the first plurality of nodes in the first data center layer that are non-responsive (see column 2, lines 38-41, herein the failover may also include de-instantiating, deprovisioning, deactivating, etc. the VNF from the data center that originally implemented, hosted, executed, etc. the VNF (e.g., immediately prior to the failover)); and detect that the failure rate of the first data center layer exceeds the threshold failure rate when the number of nodes in the first data center layer that are non-responsive exceeds a threshold number (see column 2, lines 58-67 & column 3, lines 1-3, herein classification models 103 may be used to classify VNFs or sets of VNFs at one or more data centers, in order to determine failover conditions associated with the VNFs. As referred to herein, a “failover condition” may refer to a condition, set of conditions, criteria, or the like, that indicate that a VNF should be failed over from one data center to another.
Such failover conditions may include, for example, threshold values associated with one or more particular KPIs or metrics, such as a maximum latency threshold, a minimum throughput threshold, a maximum call failure rate threshold, a minimum call success rate threshold, and/or other suitable types of values, metrics, KPIs, etc). Examiner Notes 12. When amending the claims, applicants are respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. Prior Art 13. The prior art of record, considered pertinent to the applicant’s disclosure, is listed in the attached PTO-892 form. Conclusion 14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to OSMAN ALSHACK whose telephone number is (571) 272-2069. The examiner can normally be reached on MON-FRI 8:30 AM-5:00 PM EST; please fax interview requests to (571) 273-2069. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALBERT DECADY, can be reached at (571) 272-3819. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OSMAN M ALSHACK/Examiner, Art Unit 2112

Prosecution Timeline

Sep 20, 2024
Application Filed
Feb 06, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591482
SECURITY CONTROL METHOD AND APPARATUS FOR INTEGRATED CIRCUIT, STORAGE MEDIUM, AND ELECTRONIC DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12591801
NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM STORING SIMULATION PROGRAM, SIMULATION METHOD, AND INFORMATION PROCESSING DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12580682
ROLLBACK FOR COMMUNICATION LINK ERROR RECOVERY IN EMULATION
2y 5m to grant Granted Mar 17, 2026
Patent 12572838
METHOD OF RECOVERING QUANTUM ERROR INDUCED BY NON-MARKOVIAN NOISE
2y 5m to grant Granted Mar 10, 2026
Patent 12554575
DATA PROCESSING METHOD AND APPARATUS
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
99%
With Interview (+14.4%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 517 resolved cases by this examiner. Grant probability derived from career allow rate.
