Prosecution Insights
Last updated: April 19, 2026
Application No. 18/301,885

SLICE-DRIVEN DEPLOYMENT OF NETWORK FUNCTIONS

Final Rejection §103

Filed: Apr 17, 2023
Examiner: SAMS, MATTHEW C
Art Unit: 2646
Tech Center: 2600 — Communications
Assignee: Microsoft Technology Licensing, LLC
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 67% (above average; 500 granted / 747 resolved; +4.9% vs TC avg)
Interview Lift: +11.9% (moderate; based on resolved cases with interview)
Typical Timeline: 3y 4m avg prosecution; 38 currently pending
Career History: 785 total applications across all art units

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§103: 57.1% (+17.1% vs TC avg)
§102: 21.8% (-18.2% vs TC avg)
§112: 8.9% (-31.1% vs TC avg)

Tech Center average is an estimate. Based on career data from 747 resolved cases.
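The per-statute deltas above are mutually consistent: subtracting each displayed delta from the examiner's rate recovers the same Tech Center baseline for every statute, which suggests the dashboard uses a single average estimate. A minimal check of that arithmetic (the `stats` structure is illustrative, not from any real API):

```python
# (examiner rate, delta vs TC avg) in percentage points, as displayed above.
stats = {
    "101": (5.1, -34.9),
    "103": (57.1, +17.1),
    "102": (21.8, -18.2),
    "112": (8.9, -31.1),
}

# Implied TC average per statute: examiner rate minus the displayed delta.
implied = {statute: round(rate - delta, 1) for statute, (rate, delta) in stats.items()}
print(implied)  # every statute implies the same 40.0% baseline
```
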

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This office action is responsive to the amendment filed on 10/3/2025. Claims 1, 9, 11, 12, 15-17 and 20 have been amended. Claims 8, 14 and 19 have been canceled. Claims 21-23 have been newly added.

Response to Arguments

Applicant's arguments with respect to the claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5-7, 9-11, 13, 15-16, 18 and 20-23 are rejected under 35 U.S.C. 103 as being unpatentable over Xing et al. (EP 4,134,841 A1, hereinafter Xing) in view of Kiess et al. (US 9,760,391, hereinafter Kiess).
Regarding claim 1, Xing teaches, in a telecommunications network, a method for managing deployment of network functions based on received network slice profile information (Pages 3-4 [0006-0009]: "The service profile describes a requirement that is related to a network slice and that needs to be supported by the NSI, and the slice profile describes a requirement that is related to a network slice subnet and that needs to be supported by the NSSI" and "this application provides a network resource management method. The method is performed by a resource information server, and the resource information server is deployed in a centralized manner or in a distributed manner"), the method comprising:

generating a plurality of resource management profiles including instructions associated with deploying network functions (Page 13 [0139]: "The performance requirement for an NSI is specifically a service profile (Service Profile), a key performance indicator (Key Performance Indicator, KPI) in the service profile, or the like. The performance requirement for an NSI is from a network slice user 208. The network slice user 208 is a user having a network slice requirement, and is specifically a user who subscribes to a network slice"), the instructions of the resource management profiles being determined based on tags that have been associated with network resources on a cloud computing system (Pages 19-20 [0160] and Pages 20-21 [0163-0164 & 0167]: "It should be understood that the NSI capability property may further include another parameter indicating an NSI capability. In addition, each parameter of the NSI capability property may alternatively be represented by using a parameter identifier." and Page 18 [0150]: "cloud platform");

receiving a deployment request, the deployment request including a network slice profile indicating one or more service requirements associated with fulfilling the deployment requests (Page 24 [0188-0190], i.e. "The performance requirement for a first network resource is used to request the first management server 101 to allocate a first network resource, and the allocation is specifically initial allocation or reallocation. The first network resource herein is specifically any one of a CSI, an NSI, an NSSI, or an NF." and "the performance requirement for a first network resource is from the user 107"), the one or more service requirements including a low latency requirement (Page 20 [0163]: "minimum latency", Page 21 [0167]: "uplink latency that can be further supported by the NF" and Page 17 [0135]);

matching the one or more service requirements of the network slice profile included within the deployment request to a resource management profile from the plurality of resource management profiles (Fig. 4A [S403] and Page 27 [0216-0217]); and

orchestrating a deployment of network functions on the cloud computing system responsive to the deployment request and based on the instructions included within the resource management profile matching the low latency requirement from the deployment request (Fig. 4A [S404] and Page 28 [0227-0228]).

Xing differs from the claimed invention by not explicitly reciting wherein the low latency requirement from the network slice profile is satisfied based on a match to instructions from a resource management profile to achieve low latency by using greater than or equal to a threshold number of network functions of a first network function size distributed across multiple areas, and orchestrating deployment of a plurality of network functions of the first network function size across multiple deployment areas of the cloud computing system to match the quality of service requirements.
In an analogous art, Kiess teaches a method and apparatus for virtualizing a network entity and implementing it on one or more servers (Abstract) that includes receiving request parameters for implementing a virtual network function, key performance indicators and any constraints (Fig. 4 [600]), comparing the request to capacity computations in datacenters (Fig. 4 [621]; see the description of how computing capacity is stored and referenced when determining a construction plan, Col. 12 line 5 through Col. 13 line 50) and orchestrating deployment of the virtual network function onto the cloud resources (Fig. 6 [S111-S120]), including: a low latency requirement from the network slice profile satisfied based on a match to instructions from a resource management profile to achieve low latency by using greater than or equal to a threshold number of network functions of a first network function size distributed across multiple areas (Col. 15 lines 1-9: "request (S101) contains at least KPI (e.g. amount of bearers and summarized load) and location (either geographic, or in terms of network latency) where VNF is to be located") and orchestrating deployment of a plurality of network functions of the first network function size across multiple deployment areas of the cloud computing system to match the quality of service requirements (Col. 16 lines 21-22: "a deployment where multiple less powerful servers are combined", Col. 16 line 39 through Col. 17 line 4, and Col. 18 Claim 1).

Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify the invention of Xing to incorporate Kiess's ability to determine the number of network functions needed to meet a low latency requirement while accounting for geographic dispersion, since doing so enables implementing devices with different capabilities in order to meet the appropriate requirements for the deployment request. (Kiess Col. 15 lines 41-46 and 57-65)

Regarding claim 2, Xing in view of Kiess teaches tagging resources of the cloud computing system. (Xing Page 18 [0143 & 0149]: "the NSI may be divided into a radio access network-network slice subnet instance (Radio Access Network-Network Slice Subnet Instance, RAN-NSSI), a core network-network slice subnet instance (Core Network-Network Slice Subnet Instance, CN-NSSI), and the like. Therefore, the NSMS 203 needs to request to allocate an NSSI to the new NSI. First, the performance requirement for an NSI is converted into the performance requirement for an NSSI, for example, converted into a performance requirement for a RAN-NSSI or a performance requirement for a CN-NSSI. Then, the NSMS 203 requests the NSSMS 204 to allocate the NSSI according to the performance requirement for an NSSI, to deploy the new NSI" and "the performance requirement for an NSSI is converted into the performance requirement for an NF. For example, a performance requirement for a CN-NSSI is decomposed into performance requirements for an SMF, a UPF, and a driving control function, and a performance requirement for a RAN-NSSI is decomposed into performance requirements for a plurality of gNBs. Then, the NSSMS 204 requests the NFMS 205 to allocate the NF according to the performance requirement for an NF, to deploy the new NSSI.")

Regarding claim 5, Xing in view of Kiess teaches wherein the network slice profile includes service requirement information in accordance with 3GPP standards. (Xing Page 3 [0006])

Regarding claim 6, Xing in view of Kiess teaches wherein the matching of the deployment request with the resource management profile is based on a tracking location indicated within the deployment request matching a corresponding one or more tracking locations indicated within the resource management profile. (Xing Page 16 [0132]: "requirements, such as a signal coverage area" and Page 20 [0163]: "coverage area that can be further supported by the NSI")

Regarding claim 7, Xing in view of Kiess teaches associating one or more preferences with a requesting entity, wherein matching the deployment request with the resource management profile is based at least in part on the one or more preferences of the requesting entity that provided the request. (Xing Page 25 [0192]: "management server 101 obtains a performance requirement for the initial first network resource is specifically obtaining the performance requirement from the user 107, the resource information server 103, or a local memory")

Regarding claim 9, Xing in view of Kiess teaches receiving a second deployment request (Kiess Col. 9 lines 21-24), the second deployment request including a network slice profile having a high-capacity service requirement (Xing Page 20 [0163]: "uplink traffic" and "downlink traffic" and Kiess Col. 13 line 53 through Col. 14 line 3: "This message includes information about, not exclusively, the type of network function to setup, the desired capacity of the network function, login credentials from the OSS and requirements including at least geolocation data"); matching the high-capacity service requirement of the second network slice profile included within the second deployment request to a resource management profile from the plurality of resource management profiles based on the high-capacity requirement (Xing Fig. 4A [S403] and Page 27 [0216-0217]: "The resource information server 103 determines, based on the available capabilities of the existing first network resources, the first network resources whose available capabilities each can meet the performance requirement, that is, the matched first network resources. Herein, 'meet' means that the available capability is greater than or equal to the performance requirement" and Kiess Col. 18 lines 58-67: "determining one or more construction plans that specify an allocation of the plurality of VNF modules to actual execution units according to one of the one or more possible VNF deployment plans, requirements for networking resources linking the VNF modules, and currently available datacenter resources", i.e. matching a requirement with available resources that can satisfy it); and wherein the high-capacity requirement is satisfied based on the second resource management profile including instructions to use a single network function of a second network function size (Kiess Col. 16 lines 2-4: "single big switch that matches all fields"), the second network function size having a higher capacity than the first network function size (Kiess Col. 16 lines 19-22, i.e. a single powerful server vs. multiple less powerful servers).

Regarding claim 10, Xing in view of Kiess teaches wherein the network resources on the cloud computing system comprise network functions on a core network of a fifth generation (5G) telecommunications network. (Xing Page 3 [0003-0006])

Regarding claim 11, the limitations of claim 11 are rejected for the same reasons set forth above for claim 1. (See additionally Xing Fig. 9 [901] processor, Fig. 9 [903] memory and Page 44 [0439])

Regarding claim 13, the limitations of claim 13 are rejected for the same reasons set forth above for claim 6.

Regarding claim 15, the limitations of claim 15 are rejected for the same reasons set forth above for claim 9.

Regarding claim 16, Xing teaches, in a core network implemented at least in part on an edge network of a cloud computing system, a method for managing deployment of network functions based on received network slice profile information, the method comprising: tagging resources of the core network with a plurality of tags indicating characteristics of network resources on the core network (Page 18 [0143 & 0149]), the core network being implemented across one or more edge networks of a cloud computing system (Page 18 [0150]); generating a plurality of resource management profiles including instructions associated with deploying network functions in the core network (Page 13 [0139], quoted above for claim 1), the instructions of the resource management profiles being determined based on the plurality of tags that have been associated with the network resources on the core network (Pages 19-20 [0160] and Pages 20-21 [0163-0164 & 0167], quoted above for claim 1); receiving a deployment request, the deployment request including a network slice profile indicating one or more service requirements associated with fulfilling the deployment requests (Page 24 [0188-0190]), the one or more service requirements including a low latency requirement (Page 20 [0163]: "minimum latency", Page 21 [0167]: "uplink latency that can be further supported by the NF" and Page 17 [0135]); matching the deployment request with a resource management profile from the plurality of resource management profiles (Fig. 4A [S403] and Page 27 [0216-0217]); and orchestrating a deployment of network functions on the core network responsive to the deployment request and based on the instructions included within the resource management profile matching the low latency requirement from the deployment request (Fig. 4A [S404] and Page 28 [0227-0228]).

As with claim 1, Xing differs from the claimed invention by not explicitly reciting wherein the low latency requirement from the network slice profile is satisfied based on a match to instructions from a resource management profile to achieve low latency by using greater than or equal to a threshold number of network functions of a first network function size distributed across multiple areas, and orchestrating deployment of a plurality of network functions of the first network function size across multiple deployment areas of the cloud computing system to match the quality of service requirements. In an analogous art, Kiess teaches these limitations for the reasons and on the citations set forth above for claim 1 (Abstract, Fig. 4 [600], Fig. 4 [621], Col. 12 line 5 through Col. 13 line 50, Fig. 6 [S111-S120], Col. 15 lines 1-9, Col. 16 line 39 through Col. 17 line 4, and Col. 18 Claim 1). Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Xing to incorporate Kiess's ability to determine the number of network functions needed to meet a low latency requirement while accounting for geographic dispersion, for the same motivation stated for claim 1. (Kiess Col. 15 lines 41-46 and 57-65)

Regarding claim 18, the limitations of claim 18 are rejected for the same reasons set forth above for claim 6.

Regarding claim 20, the limitations of claim 20 are rejected for the same reasons set forth above for claim 9.
Regarding claims 21-23, Xing in view of Kiess teaches wherein orchestrating the deployment of the plurality of network functions comprises: deploying a first one or more network functions on a first edge network of the cloud computing system (Kiess Fig. 11 [Datacenter 1], i.e. with 20 servers; Col. 16 lines 19-22, i.e. a single powerful server vs. multiple less powerful servers; Col. 17 lines 54-58, i.e. the location of the selected server is based on an appropriate geographic location to help meet QoS requirements; and Col. 9 lines 21-24, i.e. reserving the resources so that requests do not conflict with each other); and deploying a second one or more network functions on a second edge network of the cloud computing system (Kiess Fig. 11 [Datacenter 2], i.e. with 7 servers; same citations as above).

Claims 3, 4, 12 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Xing in view of Kiess as applied to claims 2, 11 and 16 above, and further in view of Priya et al. (US 2022/0141760, hereinafter Priya).

Regarding claim 3, Xing in view of Kiess teaches the limitations of claim 2 above, but differs from the claimed invention by not explicitly reciting wherein tagging resources includes tagging a new resource when it is onboarded to the cloud computing system. In an analogous art, Priya teaches a system and method for designing network slices and deploying them onto a communication network (Abstract) in which tagging resources includes tagging a new resource when it is onboarded to the cloud computing system. (Fig. 5, [0068-0071] and Page 3 [0043]) Before the effective filing date, it would have been obvious to one of ordinary skill in the art to modify Xing in view of Kiess to incorporate Priya's ability to tag new resources when they are onboarded to the cloud computing system, since doing so enables maintaining tags and profiles of the resources so they can be discovered and efficiently utilized. (Priya Page 3 [0048])

Regarding claim 4, Xing in view of Kiess and Priya teaches tagging resources including associating tags with currently deployed resources on the cloud computing system. (Priya Page 3 [0047] and Page 6 [0076-0077])

Regarding claims 12 and 17, the limitations of claims 12 and 17 are rejected for the same reasons set forth above for claims 3 and 4.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 12,381,778 to Parker et al. (same assignee and inventor) discusses reviewing deployment of network functions and verifying that they are meeting their goals, along with the ability to reconfigure to meet those goals. US 2023/0136061 to Hung et al. discusses optimizing network function management.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW C SAMS whose telephone number is (571) 272-8099. The examiner can normally be reached M-F 8:30-5 EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Anderson, can be reached at (571) 272-4177. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Matthew C Sams/
Primary Examiner, Art Unit 2646

Prosecution Timeline

Apr 17, 2023: Application Filed
May 30, 2025: Non-Final Rejection (§103)
Aug 20, 2025: Interview Requested
Aug 26, 2025: Examiner Interview Summary
Aug 26, 2025: Applicant Interview (Telephonic)
Oct 03, 2025: Response Filed
Dec 10, 2025: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603924: ELECTRONIC DEVICE, AND METHOD FOR PROCESSING IMS-BASED CALL IN ELECTRONIC DEVICE
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12587868: Systems and Methods for Proxying Real Traffic for Simulation
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12581455: REDUCED BEAM FOR PAGING
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12574762: MANAGING A NETWORK SLICE PARAMETER FOR ADMISSION CONTROL
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12568167: System and Method of Capturing, Tracking, Composing, Analyzing and Automating Analog and Digital Interactions
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 79% (+11.9%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate

Based on 747 resolved cases by this examiner. Grant probability derived from career allow rate.
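The headline figures follow from simple arithmetic on the career data, assuming (as the displayed numbers suggest) that the interview lift is applied additively to the base rate. A minimal sketch of that derivation, not the tool's actual model:

```python
granted, resolved = 500, 747          # examiner's career totals shown above
base = granted / resolved * 100       # career allow rate, in percent

interview_lift = 11.9                 # displayed lift from conducting an interview

print(round(base))                    # -> 67 (Grant Probability)
print(round(base + interview_lift))   # -> 79 (With Interview)
```
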
