Prosecution Insights
Last updated: April 19, 2026
Application No. 18/546,739

A NON-INTRUSIVE METHOD FOR RESOURCE AND ENERGY EFFICIENT USER PLANE IMPLEMENTATIONS

Non-Final OA · §102, §103
Filed
Aug 16, 2023
Examiner
CHEN, ZHI
Art Unit
2196
Tech Center
2100 — Computer Architecture & Software
Assignee
Telefonaktiebolaget LM Ericsson (publ)
OA Round
1 (Non-Final)
61%
Grant Probability
Moderate
1-2
OA Rounds
3y 3m
To Grant
99%
With Interview

Examiner Intelligence

Grants 61% of resolved cases
61%
Career Allow Rate
152 granted / 250 resolved
+5.8% vs TC avg
Strong +40% interview lift
+40.5%
Interview Lift
across resolved cases with an interview
Typical timeline
3y 3m
Avg Prosecution
27 currently pending
Career history
277
Total Applications
across all art units

Statute-Specific Performance

§101
12.7%
-27.3% vs TC avg
§103
49.1%
+9.1% vs TC avg
§102
6.9%
-33.1% vs TC avg
§112
25.2%
-14.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 250 resolved cases

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the communication filed 8/16/2023. Claims 1-20 are presented for examination.

Examiner Notes

The examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Priority

Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, or 365(c) is acknowledged.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 8/16/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2, 16-17 and 19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan).

Regarding claim 1, Murugesan discloses: A method by a network device to dynamically allocate cores to a user plane application based on a processing load of the user plane application (see [0011]-[0012] and [0024]; “dynamic binding of cores to work items within a user plane” and “The user plane can be a 5G user plane appliance”), wherein the network device includes a plurality of cores that are to be used as non-dedicated user plane cores and one or more additional cores that are to be used as non-user plane cores (see Fig. 2, [0019] and [0021]; “A set of 8 cores is shown as feature 202 in FIG. 2. Core 0 is shown as running the scheduler and cores 1-5 are processing work item 1. Cores 6 and 7 process work item 2” and “the binding 218 has changed at the second time. Work item 1 now is assigned or bound to cores 1-3 and has thus been scaled down. Work item 2 has been scaled up from only two cores at the first time 202 to cores 4-7 at the second time 218”. According to the core bindings 202 and 218, cores 1-7 can be considered the claimed non-dedicated user plane cores, since they are not dedicated to the same work items, and core 0 can be considered the claimed non-user plane core, since it is used only to execute the scheduler, i.e., a non-user plane application or process), the method comprising:

determining a processing load of the user plane application, wherein the user plane application has a plurality of worker threads that are configured to poll queues for traffic to process (see [0011]; “The binding between cores and work items is dynamic and changeable to improve performance. The at least one key performance indicator can include one or more of a CPU utilization, latency and packet drops. The workload allocations can include work items that are individual schedulable functions that operation on a queue of packets within the user plane”. Also see [0019]-[0020]; “key performance indicators can include one or more of CPU utilization 208, packet drops 206, latency information, 210 as well as other performance information which might be available” and “workload such as high CPU usage during a period of time or a scheduled increase in CPU usage”);

determining, based on the processing load of the user plane application, that the user plane application is to be allocated a number of cores in the plurality of cores that is different from a current number of cores allocated to the user plane application; allocating the different number of cores in the plurality of cores to the user plane application; and executing the plurality of worker threads of the user plane application using the different number of cores in the plurality of cores instead of the current number of cores (see Fig. 2, [0019] and [0021]; “A set of 8 cores is shown as feature 202 in FIG. 2. Core 0 is shown as running the scheduler and cores 1-5 are processing work item 1. Cores 6 and 7 process work item 2” and “the binding 218 has changed at the second time. Work item 1 now is assigned or bound to cores 1-3 and has thus been scaled down. Work item 2 has been scaled up from only two cores at the first time 202 to cores 4-7 at the second time 218”. According to the core bindings 202 and 218, cores 4-5 that were previously allocated to work item 1 are now allocated to work item 2, i.e., allocating a number of non-dedicated user plane cores that is different from the current cores, i.e., cores 6-7, allocated to work item 2. Also see [0020]; “The scheduler 204 periodically monitors the key performance indicators at fixed or dynamic intervals, such as every 1 second, and decides at a certain time whether to scale up 214 or to scale down 216 work items according to the data in the configuration file 212”. Given the described periodic monitoring of the key performance indicators to decide whether to scale work items up or down, it is reasonable to conclude that the system would execute work items 1 and 2 according to the core binding 218 discussed at [0021] after modifying the core allocation and would continue to perform such periodic monitoring and scaling operations).

Regarding claim 2, the rejection of claim 1 is incorporated, and Murugesan further discloses: wherein the different number of cores in the plurality of cores is allocated to the user plane application based on modifying core affinity settings of the plurality of worker threads (see Fig. 2, [0019]-[0021]; “Note that the binding 218 has changed at the second time. Work item 1 now is assigned or bound to cores 1-3 and has thus been scaled down. Work item 2 has been scaled up from only two cores at the first time 202 to cores 4-7 at the second time 218. FIG. 2 of course illustrates a non-limiting example of how the binding of workload to cores can be adjusted”. The binding of workload or work items to cores can be considered the claimed core affinity settings).
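The scale-up/scale-down behavior the examiner maps from Murugesan (a scheduler that periodically grows or shrinks a work item's core set based on measured load, keeping one core reserved for the scheduler itself) can be sketched as a small simulation. This is illustrative only, not code from the application or the reference; `cores_needed` and `rebind` are hypothetical helpers.

```python
# Illustrative sketch of dynamic core binding: a scheduler maps a load
# fraction to a target core count and grows/shrinks the allocated set.

def cores_needed(load: float, total_cores: int) -> int:
    """Map a load fraction (0.0-1.0) to a core count, always keeping >= 1."""
    return max(1, min(total_cores, round(load * total_cores)))

def rebind(current: set[int], pool: list[int], target: int) -> set[int]:
    """Grow or shrink the allocated core set toward `target` cores."""
    allocated = set(current)
    if len(allocated) < target:          # scale up: take free cores from the pool
        free = [c for c in pool if c not in allocated]
        allocated.update(free[: target - len(allocated)])
    elif len(allocated) > target:        # scale down: release the highest cores
        for c in sorted(allocated, reverse=True)[: len(allocated) - target]:
            allocated.discard(c)
    return allocated

pool = [1, 2, 3, 4, 5, 6, 7]             # core 0 reserved for the scheduler
alloc = {1, 2}
alloc = rebind(alloc, pool, cores_needed(0.8, len(pool)))  # load spike
print(sorted(alloc))
```

On a real Linux host the rebinding step would typically be applied by updating thread affinity masks rather than a plain set, but the decision logic is the same shape.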
Regarding claim 16, claim 16 is a product claim corresponding to method claim 1 and is rejected for the same reasons set forth in the rejection of claim 1 above (note: also see claim 13 and [0031] from Murugesan for the claimed “a non-transitory machine-readable medium having computer code stored therein … causes the network device to perform operations”).

Regarding claim 17, the rejection of claim 16 is incorporated; claim 17 is a product claim corresponding to method claim 2 and is rejected for the same reasons set forth in the rejection of claim 2 above.

Regarding claim 19, claim 19 is a system claim corresponding to method claim 1 and is rejected for the same reasons set forth in the rejection of claim 1 above (note: also see claim 13 and [0031] from Murugesan for the claimed “A network device … the network device comprising: a processor … a non-transitory machine-readable storage medium … causes the network device to”).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 3 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Yan (US 20150223252 A1).

Regarding claim 3, the rejection of claim 1 is incorporated. Murugesan does not disclose: executing a thread of a non-user plane application on one of the cores in the plurality of cores that is not currently allocated to the user plane application.

However, Yan discloses: executing a thread of a non-user plane workload on one of the resources in the plurality of resources that is not currently allocated to the user plane workload (see [0093], [0097]-[0099]; “if the current total CP load is a high load state compared to the total processing capability, the processing capability cannot support the current load, and some UP nodes in the system need to be switched to the CP attribute”, “judging whether there is a UP node which has been powered off in the resource pool, and if so, powering on the UP node which has been powered off, and switching the attribute of the node to CP”. A powered-off UP node can reasonably be considered a resource that is not currently allocated to the user plane (UP) application; in addition, such UP nodes can be switched to CP nodes, and thus such resources can reasonably be considered non-dedicated user plane resources. It is also reasonable to execute the CP processes/threads on such switched UP nodes/resources in order to solve the high CP load problem).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the core allocation for the non-user plane application from Murugesan by including the system of dynamically switching a resource between a user plane attribute and a control plane attribute from Yan; the combination of Murugesan and Yan would thus disclose the limitations missing from Murugesan, since it would achieve the technical effect of effectively utilizing resources and improving the service processing capability (see [0021] from Yan; “the CP node and the UP node are allocated by means of a shared resource pool, such that dynamic switching may be implemented between the CP node and the UP node … and achieves the technical effect of effectively utilizing resources and improving the service processing capability”).

Regarding claim 18, the rejection of claim 16 is incorporated; claim 18 is a product claim corresponding to method claim 3 and is rejected for the same reasons set forth in the rejection of claim 3 above.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Baynast et al. (US 20120278811 A1, hereafter Baynast).

Regarding claim 4, the rejection of claim 1 is incorporated. Murugesan does not disclose: wherein a total number of worker threads in the plurality of worker threads is equal to a total number of cores in the plurality of cores.

However, Baynast discloses: wherein a total number of worker threads in the plurality of worker threads is equal to a total number of cores in the plurality of cores (see [0041]; “one or more processing threads. In a multi-core device, the number of threads is typically equal to the number of CPU cores dedicated to stream processing. Thread affinity can be used to ensure that a thread is dedicated to a specific CPU core”).
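The one-thread-per-core arrangement cited from Baynast can be sketched in a few lines; this is an illustrative simulation, and `pin_workers` is a hypothetical helper (on Linux, the resulting map might be applied with `os.sched_setaffinity`, which is not invoked here).

```python
# Illustrative sketch: one worker thread pinned to each dedicated core,
# so thread count equals core count and no two workers share a core.

def pin_workers(worker_ids: list[int], cores: list[int]) -> dict[int, int]:
    """Build a one-to-one worker-to-core affinity map."""
    if len(worker_ids) != len(cores):
        raise ValueError("thread count must equal core count")
    return dict(zip(worker_ids, cores))

affinity = pin_workers([101, 102, 103], [1, 2, 3])
print(affinity[102])   # the core dedicated to worker 102
```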
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the number of threads or work items of the same user plane process/application from Murugesan by setting the number of threads equal to the number of CPU cores dedicated to a type of operation, as in Baynast; the combination of Murugesan and Baynast would thus disclose the limitations missing from Murugesan, since it would provide a mechanism for effectively utilizing core resources via a one-to-one allocation relationship between worker threads and CPU cores.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Pirvu (US 20120144377 A1).

Regarding claim 5, the rejection of claim 1 is incorporated. Murugesan does not disclose: wherein each of the plurality of worker threads is configured to yield a core that the worker thread is being executed on to another worker thread when the worker thread has occupied the core for a length of time that is longer than a threshold length of time.

However, Pirvu discloses: a thread is configured to yield a core that the thread is being executed on to another thread when the thread has occupied the core for a length of time that is longer than a threshold length of time (see [0074]; “determines the percentage of time spent compiling for the current time interval 400. If this percentage starts to reach a certain target threshold, the interleaving controller lowers the priority of the compilation thread. If the compilation budget is exceeded, the interleaving controller can direct the compilation thread to yield the processor to computation threads”).
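The yield-after-a-time-budget behavior at issue in claim 5 can be sketched as follows. This is an illustrative simulation, not code from either reference; the `Worker` class and injected clock are assumptions made so the logic is testable without real timing.

```python
# Illustrative sketch: a cooperative worker yields its core once it has
# occupied it longer than a configured time budget.

class Worker:
    def __init__(self, budget_s: float, clock):
        self.budget_s = budget_s      # threshold length of time on the core
        self.clock = clock            # injected time source (seconds)
        self.started = clock()        # when this worker took the core

    def should_yield(self) -> bool:
        """True once this worker has held the core past its budget."""
        return self.clock() - self.started > self.budget_s

t = 0.0
clock = lambda: t                     # fake clock driven by the test below
w = Worker(budget_s=0.005, clock=clock)   # 5 ms budget
t = 0.002
print(w.should_yield())               # still under budget
t = 0.006
print(w.should_yield())               # over budget: yield the core
```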
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the execution of user plane threads or work items on allocated cores from Murugesan by including an operation time budget threshold at which a thread yields its allocated processor resource, as in Pirvu; the combination of Murugesan and Pirvu would thus disclose the limitations missing from Murugesan, since it would provide a mechanism for avoiding spending too much time on one thread/task by interleaving the scheduling of threads on a processor resource (see [0103] from Pirvu; “these methods to better interleave compilation with computation by not allowing the virtual machine to spend more than a certain threshold amount of processor resources on compilation tasks”).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Shieh et al. (US 20170052827 A1, hereafter Shieh).

Regarding claim 6, the rejection of claim 5 is incorporated. Murugesan does not disclose: instructing the plurality of worker threads not to yield cores that the plurality of worker threads are being executed on in response to a determination that all of the plurality of cores are allocated to the user plane application.

However, Shieh discloses: instructing the plurality of worker threads not to yield cores that the plurality of worker threads are being executed on in response to a determination that all of the plurality of cores are allocated to the user plane application (see [0021]; “Most multi-core systems used in conventional physical environments have a packet-receiving queue for each processing core” and “the scheduling assumes all process cores are fully allocated to the driver regarding processing loading. In this case, the resources of the processing core are utilized to pull packets from queue and process packet forwarding (which requires a real-time response), without yielding the processing cores to other tasks or virtual machines”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the execution of user plane threads or work items on allocated cores from Murugesan by including the particular embodiment from Shieh of utilizing all cores to pull packets without yielding the cores to other tasks; the combination of Murugesan and Shieh would thus disclose the limitations missing from Murugesan, since it would provide a specific embodiment of allocating all of the cores to the user plane process/function of polling packets from queues, so as to complete that user plane process/function as soon as possible without allocating cores to other tasks (see [0021] from Shieh).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Liu et al. (CN 111580949 A, English translation provided by Google Patents, hereafter Liu).

Regarding claim 7, the rejection of claim 1 is incorporated. Murugesan does not disclose: wherein each of the plurality of worker threads is configured to go to sleep when the worker thread determines that there is no processing work to be performed by the worker thread.

However, Liu discloses: wherein each of the plurality of worker threads is configured to go to sleep when the worker thread determines that there is no processing work to be performed by the worker thread (see [0010]; “an intermittent polling condition parameter T is set, where the intermittent polling condition parameter T is a time interval at which the packet receiving processing thread needs to actively wait for sleep after processing all the messages in the network card receiving queue each time”.
Also see [0003]-[0004]; the packet receiving processing threads discussed at [0010] can reasonably be considered the claimed worker threads of the user plane application).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the execution of user plane threads or work items from Murugesan by setting a sleep time for the user plane threads, as in Liu; the combination of Murugesan and Liu would thus disclose the limitations missing from Murugesan, since it would provide an automatic adjustment method for the network packet receiving mode (see [0006] from Liu).

Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Liu et al. (CN 111580949 A, English translation provided by Google Patents, hereafter Liu) and further in view of Ajmera et al. (US 20220012108 A1, hereafter Ajmera).

Regarding claim 8, the rejection of claim 7 is incorporated. The combination of Murugesan and Liu does not disclose: wherein the processing load of the user plane application is determined based on processor usage times of the plurality of worker threads.

However, Ajmera discloses: wherein the processing load of the user plane application is determined based on processor usage times of the plurality of worker threads (see [0037]; “polling threads 145 having more active workloads (e.g., more active processing cycles) are placed on higher processing capacity processing cores 125 while polling threads 145 having lower active workloads (e.g., less active processing cycles and more idle processing cycles) are placed on lower processing capacity processing cores 125. That is, polling threads 145 are dynamically assigned to processing cores 125 during execution in a manner that takes into account their active workloads (e.g., the number active processing cycles used by the polling threads 145)”. See [0002]; the polling threads discussed at [0037] can be considered worker threads of the user plane application).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the key performance indicators that cause dynamic core allocation for user plane threads/work items from the combination of Murugesan and Liu by using active processing cycles to drive dynamic core allocation for user plane threads, as in Ajmera; the combination of Murugesan, Liu and Ajmera would thus disclose the limitations missing from the combination of Murugesan and Liu, since it would provide a more reasonable key performance indicator for performing dynamic resource allocation for a user plane type of workload (see [0032] from Ajmera; “The processing cycles used by a polling thread 145 of the software-based switching program 140 can be divided into two parts: (1) active processing cycles and (2) idle processing cycles … the total number of processing cycles used by a polling thread 145 can be seen as the sum of the number of active processing cycles and the number of idle processing cycles. It should be noted that while the polling thread is not performing any “useful” work during the idle processing cycles, it is still using up processing cycles of the processing core 125 (to perform polling)”).

Regarding claim 20, the rejection of claim 19 is incorporated; claim 20 is a system claim corresponding to method claims 7 and 8 and is rejected for the same reasons set forth in the rejections of claims 7-8 above.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Liu et al. (CN 111580949 A, English translation provided by Google Patents, hereafter Liu) and further in view of Yang et al. (CN 109213577 A, English translation provided by Google Patents, hereafter Yang).

Regarding claim 9, the rejection of claim 7 is incorporated. The combination of Murugesan and Liu does not disclose: wherein each of the plurality of worker threads is configured to determine a length of time that the worker thread is to go to sleep based on a sleep history of the worker thread.

However, Yang discloses: wherein each of the plurality of worker threads is configured to determine a length of time that the worker thread is to go to sleep based on a sleep history of the worker thread (see [0006]-[0011]; “storing the acquired microsecond time of the historical sleep time of each thread into each time item of an array”, “acquiring the first microsecond time of the historical sleep time corresponding to the ID; determining the total microsecond time of the current thread needing to sleep by utilizing the preset second microsecond time of the sleep time of the current thread at the current time and the first microsecond time; … calling a sleep function to enable the current thread to sleep for M millisecond if the millisecond time M is the positive number”).
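The sleep-on-idle and history-based sleep-duration behaviors at issue in claims 7 and 9 can be sketched together. This is an illustrative simulation only; the `SleepPolicy` class and its blending rule (averaging a base interval with the recent sleep history) are assumptions, not the method of Liu or Yang.

```python
# Illustrative sketch: a polling worker sleeps when its queue is empty,
# choosing the sleep length from its own recent sleep history.
from collections import deque

class SleepPolicy:
    def __init__(self, base_us: int, history_len: int = 8):
        self.base_us = base_us                 # configured base interval (µs)
        self.history = deque(maxlen=history_len)

    def next_sleep_us(self) -> int:
        """Blend the base interval with the historical average sleep time."""
        if not self.history:
            return self.base_us
        avg = sum(self.history) // len(self.history)
        return (self.base_us + avg) // 2

    def record(self, slept_us: int) -> None:
        """Remember how long the worker actually slept."""
        self.history.append(slept_us)

p = SleepPolicy(base_us=100)
print(p.next_sleep_us())      # no history yet: base interval
p.record(300)
p.record(100)
print(p.next_sleep_us())      # (100 + (300 + 100) // 2) // 2
```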
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the operation of the user plane threads/work items from the combination of Murugesan and Liu by putting a thread to sleep for a period of time based on the thread’s historical sleep information, as in Yang; the combination of Murugesan, Liu and Yang would thus disclose the limitations missing from the combination of Murugesan and Liu, since it would provide a precise sleeping duration for an executing thread (see [0001]-[0004]; “The invention belongs to the technical field of communication … the precision of the function is not enough, and in some cases, when the function is called to make the program sleep for one millisecond … the invention provides a method, an apparatus and a computer device for thread sleep, so as to solve the above problems in the prior art”).

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Ajmera et al. (US 20220012108 A1, hereafter Ajmera).

Regarding claim 10, the rejection of claim 1 is incorporated. Murugesan does not disclose: wherein each of the plurality of worker threads is configured to keep track of a length of time during which the worker thread performs processing work and to report the length of time.

However, Ajmera discloses: wherein each of the plurality of worker threads is configured to keep track of a length of time during which the worker thread performs processing work and to report the length of time (see [0032]-[0033]; “active processing cycles are processing cycles during which the polling thread 145 is processing (or otherwise working with) a packet” and “measures/collects the number of active processing cycles used by a given polling thread 145”. Also see [0037]; “polling threads 145 are dynamically assigned to processing cores 125 during execution in a manner that takes into account their active workloads (e.g., the number active processing cycles used by the polling threads 145)”. See [0002]; the polling threads discussed at [0032]-[0033] and [0037] can be considered worker threads of the user plane application).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the key performance indicators that cause dynamic core allocation for user plane threads/work items from Murugesan by using active processing cycles to drive dynamic core allocation for user plane threads, as in Ajmera; the combination of Murugesan and Ajmera would thus disclose the limitations missing from Murugesan, since it would provide a more reasonable key performance indicator for performing dynamic resource allocation for a user plane type of workload (see [0032] from Ajmera; “The processing cycles used by a polling thread 145 of the software-based switching program 140 can be divided into two parts: (1) active processing cycles and (2) idle processing cycles … the total number of processing cycles used by a polling thread 145 can be seen as the sum of the number of active processing cycles and the number of idle processing cycles. It should be noted that while the polling thread is not performing any “useful” work during the idle processing cycles, it is still using up processing cycles of the processing core 125 (to perform polling)”).
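The active-versus-idle accounting cited from Ajmera, and the aggregation of per-thread reports into an application-level load figure (claims 8, 10 and 11), can be sketched as follows. This is an illustrative computation under assumed names (`busy_fraction`, `app_load`), not code from the reference.

```python
# Illustrative sketch: each worker reports (active, idle) cycle counts for
# the last polling interval; the application load is the mean busy fraction.

def busy_fraction(active_cycles: int, idle_cycles: int) -> float:
    """Fraction of this worker's cycles spent doing useful packet work."""
    total = active_cycles + idle_cycles
    return active_cycles / total if total else 0.0

def app_load(per_thread: list[tuple[int, int]]) -> float:
    """Aggregate per-worker (active, idle) reports into one load figure."""
    fracs = [busy_fraction(a, i) for a, i in per_thread]
    return sum(fracs) / len(fracs)

# three workers report (active, idle) cycles for the last interval
print(app_load([(80, 20), (50, 50), (10, 90)]))
```

A scheduler like the one described in Murugesan could feed this aggregate figure into its scale-up/scale-down decision in place of raw CPU utilization.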
Regarding claim 11, the rejection of claim 10 is incorporated, and the combination of Murugesan and Ajmera further discloses: wherein the processing load of the user plane application is determined based on the lengths of times reported by the plurality of worker threads (see [0037] from Ajmera; “polling threads 145 having more active workloads (e.g., more active processing cycles) are placed on higher processing capacity processing cores 125 while polling threads 145 having lower active workloads (e.g., less active processing cycles and more idle processing cycles) are placed on lower processing capacity processing cores 125. That is, polling threads 145 are dynamically assigned to processing cores 125 during execution in a manner that takes into account their active workloads (e.g., the number active processing cycles used by the polling threads 145)”).

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Bedekar et al. (US 20150366000 A1, hereafter Bedekar).

Regarding claim 12, the rejection of claim 1 is incorporated. Murugesan does not disclose: wherein the processing load of the user plane application is determined based on queue depth measurements of queues used by the user plane application.

However, Bedekar discloses: wherein the processing load of the user plane process is determined based on queue depth measurements of queues used by the user plane process (see [0045]; “the number of instances can also be changed dynamically over time so as to ensure appropriate processing capacity. Such a dynamic change in the number of instances can be referred to as elastic scaling, and may help to dynamically adapt the processing capacity to the needs of the workload. Elastic scaling can be applied for functional entities performing per-user operations (either control plane or user plane)” and “a measure of the load experienced by the functional entity may be monitored. For example, the measure of load may be the packets processed per unit time, or the bit-rate of the incoming traffic, or the number of users, or the number of packets waiting in queues, or the average processor utilization, or the like”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the key performance indicators that cause dynamic core allocation for user plane threads/work items from Murugesan by using the number of packets waiting in queues to drive dynamic resource allocation for the user plane function, as in Bedekar; the combination of Murugesan and Bedekar would thus disclose the limitations missing from Murugesan, since the packets in the queues are the actual objects or work to be processed by the user plane function (see [0038] from Bedekar; “An example of a per-UE user plane function may be the Packet Data Convergence Protocol (PDCP) operation. This function can operate on one packet at a time to perform ciphering and header compression”).

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Stawitz et al. (US 20120124518 A1, hereafter Stawitz).
Regarding to Claim 13, the rejection of Claim 1 is incorporated, Murugesan does not disclose: wherein all of the plurality of worker threads are assigned a same scheduling priority, and wherein the plurality of worker threads are scheduled for execution using a first-in-first-out scheduling policy However, Stawitz discloses: wherein all of the plurality of operations are assigned a same scheduling priority, and wherein the plurality of operation are scheduled for execution using a first-in-first-out scheduling policy (see [0035]-[0037]; “an additional operation that is of the same type or is associated with the same file or application as the first operation may be automatically assigned the same priority level” and placed in the queue 304 in a position that corresponds to that particular priority level” and “The operations 108 that are scheduled for performance … operations may be added to the queue 304 based on a first in first out (FIFO) method”). It would have been obvious to one with ordinary skill, in the art before the effective filing date of the claim invention, to modify the executions of the user plane threads/work items from Murugesan by including process of assigning priority to tasks/operations based on task/operation type or associated with same application from Stawitz, and thus the combination of Murugesan and Stawitz would disclose the missing limitations from Murugesan, since it would provide a mechanism of executing same type of tasks or tasks from same application in a close time slots via assigning such tasks a same priority level (see [0035]-[0037] from Stawitz). Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1-IDS recorded, hereafter Murugesan) in view of Attalla et al. (US 20140089917 A1, hereafter Attalla). 
Regarding Claim 14, the rejection of Claim 1 is incorporated. Murugesan does not disclose: wherein the number of cores in the plurality of cores that are allocated to the user plane application is increased more quickly than it is decreased. However, Attalla discloses: wherein the number of cores in the plurality of cores that are allocated to the [user plane] application is increased more quickly than it is decreased (see [0020]; “increment and decrement parameters, which specify the amount by which resource 212 can be scaled when a threshold is satisfied. For example, the increment parameter can be two processors, and the decrement parameter can be one processor, such that virtual machine 208 can be scaled up quickly and scaled down slowly”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the dynamic core allocation from Murugesan by setting increment and decrement parameters for the resources to be dynamically allocated from Attalla. The combination of Murugesan and Attalla would thus disclose the limitations missing from Murugesan, since it would provide a mechanism for specifying, and thereby limiting, the number of processors or cores to be modified each time during the dynamic allocation process (see [0020] from Attalla).

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Murugesan et al. (US 20190114206 A1, IDS recorded, hereafter Murugesan) in view of Zou et al. (WO 2011020264 A1, English translation provided by Google Patents) and Beveridge (US 20140173213 A1).
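The asymmetric scaling for which Attalla is cited (scale up quickly, scale down slowly) can be sketched as below. The default increment of two processors and decrement of one follow the example in Attalla's [0020]; the load flags and the core bounds are illustrative assumptions.

```python
def rescale(current: int, overloaded: bool, underloaded: bool,
            increment: int = 2, decrement: int = 1,
            min_cores: int = 1, max_cores: int = 8) -> int:
    """Grow the allocation by `increment` under load and shrink it by
    the smaller `decrement` otherwise, so the core count rises faster
    than it falls."""
    if overloaded:
        return min(max_cores, current + increment)
    if underloaded:
        return max(min_cores, current - decrement)
    return current  # load within bounds: leave allocation unchanged
```

With these defaults, an overloaded allocation of 4 cores jumps to 6, while an underloaded one drops only to 3, which is the "scaled up quickly and scaled down slowly" behavior quoted above.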
Regarding Claim 15, the rejection of Claim 1 is incorporated. Murugesan does not disclose: executing one or more threads of an upgraded version of the user plane application using one or more cores in the plurality of cores that are not currently allocated to the user plane application; redirecting network traffic from the user plane application to the upgraded version of the user plane application; terminating the one or more threads of the user plane application after the network traffic is redirected; and allowing all of the plurality of cores to be allocated to the upgraded version of the user plane application after the user plane application is terminated. However, Zou discloses a method of providing in-service upgrade of an application comprising: executing one or more threads of an upgraded version of the [user plane] application using one or more cores in the plurality of cores that are not currently allocated to the [user plane] application (see lines 20-6 of pages 2-3; “In the process that the existing network version software continues to run to provide external business services, The image is upgraded; After the mirroring is successfully upgraded, the image of the upgraded version is activated … Before the image of the upgrade is successfully activated, the method further includes the following steps: The upgraded image is run concurrently with the software of the current network version, but the image does not provide service services externally”.
Running the upgraded version and the current version concurrently would require executing threads of the upgraded version using cores that are not currently allocated to the current version); redirecting network traffic from the [user plane] application to the upgraded version of the [user plane] application (see lines 2-6 of page 3; “the image of the upgraded version is activated, and the service is provided to the external network” and “but the image does not provide service services externally, and the system is verified to be correct and reliable after the image is upgraded. When the image verification is passed, the upgraded image is activated”. Using the upgraded version, instead of the current/existing version, to provide service to the external system would require redirecting the network traffic from the current/existing version to the upgraded version); and terminating the one or more threads of the [user plane] application after the network traffic is redirected (see lines 2-6, 17-21 of page 3; “the service is replaced by the software of the current network version to provide external service services. At the same time, stop the running of the current version of the software”). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the execution of the user plane function from Murugesan by including the in-service upgrade of a software application from Zou, since it would provide a method of reducing “the cost of the upgrade process to increase the satisfaction of the user” during the upgrade process (see lines 17-20 of page 2 from Zou).
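The four-step upgrade sequence recited in Claim 15 can be sketched end to end as follows. The core IDs and the event log are illustrative assumptions, not details from Zou or Beveridge; the sketch only shows the claimed ordering: start the upgraded version on spare cores, redirect traffic, terminate the old version, then make all cores allocatable to the upgraded version.

```python
def in_service_upgrade(old_cores: set, spare_cores: set) -> list:
    """Sketch of the four claimed steps of an in-service upgrade,
    returning an event log of what happens at each step."""
    log = []
    # 1. Run upgraded-version threads on cores not allocated to the old app.
    log.append(("start_upgraded_on", sorted(spare_cores)))
    # 2. Redirect network traffic to the upgraded version.
    log.append(("redirect_traffic", "old -> upgraded"))
    # 3. Terminate the old version's threads after redirection.
    log.append(("terminate_old_on", sorted(old_cores)))
    # 4. The freed cores join the upgraded version's allocatable pool.
    log.append(("allocatable_to_upgraded", sorted(old_cores | spare_cores)))
    return log
```

For example, with the old application on cores {0, 1} and cores {2, 3} spare, the final step leaves all four cores allocatable to the upgraded version.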
In addition, Beveridge discloses: re-allocating cores associated with a terminated application to the remaining application after the application is terminated (see [0018]; “suspend the VMs that may still be running to support the inactive remote desktops so that the hardware resources of the VMs, upon suspension, can be freed for reallocation”. Also see [0004]: the hardware resources discussed at [0018] can include CPU/core resources). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the in-service upgrade feature for the user-plane application/function from the combination of Murugesan and Zou by including the freeing of a suspended application's resources for reallocation from Beveridge. The combination of Murugesan, Zou and Beveridge would thus disclose the limitations missing from Murugesan (note: after combining the feature of Beveridge into the combination of Murugesan and Zou, the combined system would free the cores allocated to the outdated user plane application for reallocation after that application is terminated, thereby allowing all of the cores, i.e., the cores already allocated to the upgraded version of the user plane application and the cores previously allocated to the outdated user plane application, to be allocated to the upgraded version of the user plane application in certain reasonable embodiments), since it would provide a mechanism for recycling resources that were allocated to a suspended application, avoiding resource waste when the resources are no longer used by that application (see [0018] from Beveridge).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Murugesan et al.
(US 20190097939 A1) discloses: a number of threads (packet processing cores) for user plane functions are assigned to a corresponding number of transmit queues for transmission of packets on a network interface (see [0031]), and key performance indicators (e.g., load indicators) are used to dynamically assign cores (threads) to transmit queues (see [0029]). Finlayson et al. (US 20090043631 A1) discloses: determining, by the dynamic router, a worker availability status of all workers on the assembly line according to an availability of each worker and a depth of a wait queue of work packets destined for said each worker (see [0320]). Pusukuri et al. (US 20140208330 A1) discloses: each thread in a thread group is assigned the same priority in order to neutralize priority among threads of the same group. Specifically, assigning each thread in the thread group the same priority ensures that threads in a group are executed in first-in-first-out fashion with respect to the order in which they become runnable (see [0035]-[0036]). Papakipos et al. (US 20080005547 A1) discloses: there is a possibility that a first thread may produce work or allocate resources at a rate faster than a second thread that consumes the work or de-allocates the resources (see [0085]). Bayoumi et al. (US 20180131979 A1) discloses: resource provisioning policies of the EM follow the “scale up early and scale down slowly” principle (see [0060]). Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHI CHEN whose telephone number is (571)272-0805. The examiner can normally be reached on M-F from 9:30AM to 5:30PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Y Blair, can be reached on 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form. /Zhi Chen/ Patent Examiner, AU2196 /APRIL Y BLAIR/ Supervisory Patent Examiner, Art Unit 2196

Prosecution Timeline

Aug 16, 2023
Application Filed
Jan 08, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596561
SYSTEM AND METHOD OF DYNAMICALLY ASSIGNING DEVICE TIERS BASED ON APPLICATION
2y 5m to grant Granted Apr 07, 2026
Patent 12596584
APPLICATION PROGRAMING INTERFACE TO INDICATE CONCURRENT WIRELESS CELL CAPABILITY
2y 5m to grant Granted Apr 07, 2026
Patent 12591461
ADAPTIVE SCHEDULING WITH DYNAMIC PARTITION-LOAD BALANCING FOR FAST PARTITION COMPILATION
2y 5m to grant Granted Mar 31, 2026
Patent 12585495
DISTRIBUTED COMPUTING PIPELINE PROCESSING
2y 5m to grant Granted Mar 24, 2026
Patent 12579012
FORWARD PROGRESS GUARANTEE USING SINGLE-LEVEL SYNCHRONIZATION AT INDIVIDUAL THREAD GRANULARITY
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
61%
Grant Probability
99%
With Interview (+40.5%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 250 resolved cases by this examiner. Grant probability derived from career allow rate.
