Prosecution Insights
Last updated: April 19, 2026
Application No. 18/695,232

DEVICE AND METHOD FOR SCALING MICROSERVICES

Status: Non-Final OA (§103)
Filed: Mar 25, 2024
Examiner: SHITAYEWOLDETSADIK, BERHANU
Art Unit: 2455
Tech Center: 2400 — Computer Networks
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 3 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84%, above average (318 granted / 377 resolved; +26.4% vs TC avg)
Interview Lift: +24.5% in resolved cases with an interview vs. without (a strong lift)
Typical Timeline: 2y 11m average prosecution; 16 applications currently pending
Career History: 393 total applications across all art units

Statute-Specific Performance

§101: 10.1% (-29.9% vs TC avg)
§103: 61.8% (+21.8% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 8.2% (-31.8% vs TC avg)
Deltas are vs. a Tech Center average estimate • Based on career data from 377 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/04/2026 has been entered.

Claim Status

Claims 1, 9 and 17 have been amended. Claim 18 has been canceled. Claims 1-17 are presented for examination and remain pending in the application.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3, 9, 11, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over "Intelligent Autoscaling of Microservices in the Cloud for Real-Time Applications" by ABEER ABDEL KHALEQ, (Member, IEEE) (i.e., Applicant provided NPL resource) (hereinafter ABEER) (accepted February 13, 2021; date of publication February 24, 2021; date of current version March 5, 2021; note that the Examiner cited pages are the actual pages 35464-35476 of this NPL), in view of Doshi et al., U.S. Pub. No. 2021/0117249 A1 (hereinafter Doshi), and further in view of Huizenga, U.S. Pub. No. 2008/0034370 A1 (hereinafter Huizenga).

Regarding claim 1. ABEER teaches a method for scaling microservices in a service mesh (note that here the terms "workload" and "job (i.e., traffic)" have been interpreted, in light of the specification at Page 5, lines 17-22, as having equal meanings, and the term "service mesh" also refers to "traffic". ABEER teaches in the [Abstract] that developing a cloud application based on a microservice architecture imposes different challenges, including scalability at the container level. What adds to the challenge is that cloud applications impose quality of service (QoS) requirements and have various resource demands requiring a customized scaling approach...), the method comprising:

obtaining information representing a workload of a microservice chain, wherein the workload comprises at least one job (ABEER teaches [on Page 35465, right column, under the title "A. Microservices Autoscaling and QoS", lines 24-29] work that investigated the effect of absolute versus relative metrics in microservices autoscaling. They suggested that for CPU intensive workloads (i.e., note that the workload indicates traffic, which is the claimed at least one job), absolute metrics such as CPU utilization enable more accurate decisions than the relative metrics used by the default Kubernetes autoscaling algorithm. ABEER further teaches [on Page 35470, left column, below Fig. 5 "Machine learning autoscaling modules", in the description of loop steps 1-7, lines 3-26] that the microservice resource requirement is determined and the system runs a vertical scaling on the microservice or pod to get the initial recommended settings for the microservice or pod CPU and memory request…, determines the resource usage consumption based on the information, and a new pod will be identified for resource type scaling. Also, this step will help in vertical autoscaling to adjust the resource request if needed… (note that "microservice chain" is not defined in the claims; under the broadest reasonable interpretation, ABEER's microservices form a chain (see page 35468, Figure 1)));

obtaining information representing current and historical resource allocations of the service mesh; determining a reward, wherein the reward is indicative of completed jobs and allocated resources of the service mesh (ABEER teaches [on Page 35470, left column, below Fig. 5 "Machine learning autoscaling modules", loop step 4, lines 9-16] running a query to identify the microservice or pod trend consumption of the ratio of consumed versus requested resources. This step can benefit from historical machine learning; for an incoming pod, based on its trend in resource usage, a new pod will be identified for resource type scaling (i.e., note that here the term "new" indicates the "current" resource allocation, and the historical machine learning for the trend in resource usage indicates the historical resource allocation). Also, this step will help in vertical autoscaling to adjust the resource request if needed. ABEER further teaches [on Page 35470, right column, below "Input: Current Environment class (Env), current Agent Action (Act)", after the end of number 28, lines 5-11] that the scaled pods are deployed… The RL agent will receive a reward based on the current pods' response time…);

running a reinforcement learning (RL) model on the information representing the workload, current and historical resource allocations, reward, and (ABEER teaches [on Page 35470, left column, below Fig. 5 "Machine learning autoscaling modules", step 5, lines 19-37] running the HPA (i.e., Kubernetes auto-scaler) based on the resource identified in step 4; 6. collecting pods and HPA logs from Stackdriver and running the horizontal RL (i.e., Reinforcement Learning) agent to identify threshold values for the maximum number of pods and resource utilization that will minimize response time; and 7. continuing from step 1 as long as the pods are deployed. Here is the algorithm used in the step function of the RL agent. Environment: the algorithm represents the autoscaling environment where the RL agent (i.e., model) will get the observation to perform the needed action… (i.e., note that here the RL agent (i.e., model) gets the observation, which indicates the model information representing the workload). ABEER further teaches [on Page 35470, right column, below "Input: Current Environment class (Env), current Agent Action (Act)", after the end of number 28, lines 1-13] where new pods is the new number of pods to deploy for the autoscaling… The RL (i.e., Reinforcement Learning) agent will receive a reward based on the current pods' response time, aiming at maximizing the reward as long as the response time is less than or equal to the QoS value…); and

obtaining a further resource allocation for the workload as an output of the RL model (ABEER teaches [on Page 35470, left column, below Fig. 5 "Machine learning autoscaling modules", step 7, lines 25-35] that the algorithm represents the autoscaling environment where the RL agent (i.e., RL model) will get the observation to perform the needed action…; ABEER further teaches [on Page 35470, right column, below "Input: Current Environment class (Env), current Agent Action (Act)", after the end of number 28, lines 1-13] where new pods is the new number of pods to deploy for the autoscaling… The RL agent will receive a reward based on the current pods' response time, aiming at maximizing the reward as long as the response time is less than or equal to the QoS value…).
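The passages above describe ABEER's reward rule: the RL agent chooses a new number of pods, and it is rewarded as long as the observed response time stays at or below the QoS value. The following is a minimal sketch of that logic with hypothetical names (AutoscaleEnv, Observation, qos_ms); it illustrates the cited rule and is not ABEER's actual implementation.

# Minimal sketch of the RL autoscaling step described for ABEER (hypothetical names).
# Observation: current workload and resource usage; action: new number of pods;
# reward: positive while response time <= QoS target, negative otherwise.

from dataclasses import dataclass

@dataclass
class Observation:
    requests_per_sec: float   # workload of the microservice chain
    cpu_utilization: float    # consumed / requested CPU ratio
    response_time_ms: float   # current pods' response time

class AutoscaleEnv:
    def __init__(self, qos_ms: float, max_pods: int):
        self.qos_ms = qos_ms          # QoS response-time target
        self.max_pods = max_pods      # threshold identified by the horizontal RL agent
        self.pods = 1

    def step(self, action_new_pods: int, obs: Observation):
        # Deploy the number of pods chosen by the agent (bounded by the threshold).
        self.pods = max(1, min(action_new_pods, self.max_pods))
        # Reward is granted while response time meets the QoS value, penalized otherwise.
        reward = 1.0 if obs.response_time_ms <= self.qos_ms else -1.0
        return self.pods, reward

# Example: scaling to 3 pods while the observed response time meets a 200 ms QoS target.
env = AutoscaleEnv(qos_ms=200.0, max_pods=10)
pods, reward = env.step(3, Observation(120.0, 0.8, 150.0))
print(pods, reward)  # 3 1.0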
While ABEER teaches the obtaining of information representing a workload of the microservices above, ABEER does not explicitly teach obtaining the microservice chain comprising a path of two or more microservices through the service mesh for performing the workload; producing a feedback signal, wherein the feedback signal is indicative of a delay for increasing the resource allocation of the service mesh; and running feedback signal.

However, Doshi teaches obtaining the microservice chain comprising a path of two or more microservices through the service mesh for performing the workload (Doshi teaches in Para. [0225] that microservices elements or devices in a data path with proper keys can be avoided, but can be utilized; an IPU can negotiate security with a device in a chain or path of devices. After security through a path of devices is accomplished, a data path to devices in the path can be trusted. Doshi further teaches in Para. [0177] that, for example, secure resource manager 1902 can run a service mesh to decide what resource is to execute the workload); producing a feedback signal, wherein the feedback signal is indicative of a delay for increasing the resource allocation of the service mesh (Doshi teaches in Para. [0316] that telemetry, tracing, monitoring and logging and secure resource management can provide completion, status, logs, and events to its IPU (i.e., infrastructure processing unit) local control plane. At (14), local events, logs, and status changes can cause a feedback loop (i.e., producing a feedback signal) for dynamic adjustments to maintain policies…; Doshi teaches in Para. [0338] an example for RPC offload, where the IPU can timestamp each individual RPC session for latency/jitter (i.e., indicative of a delay) and associate errors and associated events. These can feed back into the IPU control plane for dynamic adjustments. Further, Doshi teaches in Para. [0188] managing availability and scalability; various embodiments of the IPU's SDN 2006 can implement service management such as Service Mesh load balancing policies,... The IPU can also manage local resource allocations (i.e., the IPU here maintains and adjusts the mesh load balancing to manage (i.e., increase) the local resource allocation)); and running feedback signal (Doshi teaches in Para. [0316] that local events, logs, and status changes can cause a feedback loop (i.e., producing a feedback signal) for dynamic adjustments to maintain policies).

Therefore, ABEER and Doshi are analogous arts and are in the same field of endeavor, as both are directed to the process of resource allocation and providing a feedback signal in a cloud system to provide better services. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ABEER with Doshi's teachings of causing a feedback loop for dynamic adjustment to maintain policies and indicating latency/jitter (i.e., delay) ([0316] and [0338]). One would have been motivated to do so because the system improves total cost of ownership, reduces application and network latency, reduces network backhaul traffic and associated energy consumption, improves service capabilities, and improves compliance with security or data privacy requirements.

Further, ABEER in view of Doshi does not explicitly teach wherein the delay is based at least in part on a boot time for allocating and configuring the increased resources. However, Huizenga teaches wherein the delay is based at least in part on a boot time for allocating and configuring the increased resources (Huizenga teaches in Para. [0006] that delays in resource allocation may impact the performance and response times of the application making the request; further, Huizenga teaches in Para. [0038] that the initial policy might give the application a very large reserve resource allocation that allows the application to grow rapidly during initialization to expedite system start up, and the workload management system 4 would then revert to a steady-state policy that limits the reserve resource allocation to a much smaller quantity). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of ABEER in view of Doshi with Huizenga's teachings of resource allocation delay and response time that allow the application to grow rapidly based on the allocation of resources ([0006] and [0038]). One would have been motivated to do so in order to improve application workload management by avoiding resource sharing, thus sharing the data processing resource without penalizing applications, providing an incremental increase in the guaranteed minimum resource allocation (and hence increasing the share of the resource), and effectively managing application workloads in the data processing system using the notion of the dynamically maintainable reserve resource allocation of the data processing resource.
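The amended limitation at the center of this rejection ties the feedback signal's delay to the boot time of the increased resources. As a rough illustration of that concept (the names and values below are assumptions, not taken from Doshi or Huizenga), a scale-up feedback record might report a delay composed of allocation, boot, and configuration time.

# Hypothetical sketch: a feedback signal whose delay estimate includes the boot time
# needed to allocate and configure additional instances (illustrating the amended
# limitation, not the cited references' implementations).

from dataclasses import dataclass

@dataclass
class ScaleUpFeedback:
    requested_instances: int
    allocation_delay_s: float   # time to reserve CPU/memory for the new instances
    boot_time_s: float          # time for a new instance to start and become ready
    config_time_s: float        # time to configure the instance (mesh routing, sidecars)

    @property
    def total_delay_s(self) -> float:
        # The delay reported back to the scaling model is based at least in part on boot time.
        return self.allocation_delay_s + self.boot_time_s + self.config_time_s

feedback = ScaleUpFeedback(requested_instances=2,
                           allocation_delay_s=1.5,
                           boot_time_s=8.0,
                           config_time_s=2.5)
print(feedback.total_delay_s)  # 12.0 seconds until the increased resources are usable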
Regarding claim 3. ABEER teaches wherein the current resource allocation of the service mesh and the workload of the microservice chain are represented by a state of the RL model (ABEER teaches [on Page 35470, left column, below Fig. 5 "Machine learning autoscaling modules", step 7, lines 25-35] that the algorithm represents the autoscaling environment where the RL agent (i.e., RL model) will get the observation to perform the needed action…; see also page 35465, column 1, which autoscales microservices in the cloud based on their demands and QoS… and autoscales the workload with minimum user interaction based on the workload log data (i.e., service mesh). Note that "microservice chain" is not defined in the claims; under the broadest reasonable interpretation, ABEER's microservices form a chain (see page 35468, Figure 1)).

Regarding claims 9 and 17. Claims 9 and 17 incorporate substantively all the limitations of claim 1 in a device, a computer program product and a computer readable storage form and are rejected under the same rationale. Furthermore, regarding the limitations of device and readable storage, the prior art of record Bahl teaches these in Para. [0028], [0030] and [0126].

Regarding claim 11. Claim 11 incorporates substantively all the limitations of claim 3 in a device form and is rejected under the same rationale. Furthermore, regarding the limitations of device and readable storage, the prior art of record Bahl teaches these in Para. [0028] and [0030].

Claims 2 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over ABEER in view of Doshi, further in view of Huizenga, and further in view of "HANSEL: Adaptive horizontal scaling of microservices using Bi-LSTM", Ming et al. (i.e., Applicant provided NPL resource; received in revised form 27 January 2021; accepted 14 February 2021; available online 5 March 2021) (hereinafter Ming).

Regarding claim 2. ABEER in view of Doshi, further in view of Huizenga, teaches the method according to claim 1. ABEER in view of Doshi, further in view of Huizenga, does not explicitly teach wherein a resource allocation of the service mesh comprises a queue of jobs for at least one microservice and a number of instances for running the at least one microservice. However, Ming teaches wherein a resource allocation of the service mesh comprises a queue of jobs for at least one microservice and a number of instances for running the at least one microservice (Ming teaches [on Page 5, under the title "5. Design of microservice elastic scaling" and section 5.1, lines 1-14] the elastic scaling…; the operation of elastic scaling consists of two parts: 1) executing…, 2) determining the number of microservices…; there are two approaches: one is to periodically monitor events (such as CPU, workloads, queues, and so on) and perform elastic operations on resources based on thresholds, while the active approach is based on predicting workload ahead of time from past workloads to scale horizontally (i.e., note that workloads and queues make up the service mesh)).
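Claim 2's resource allocation, a queue of jobs per microservice plus a number of instances running it, maps naturally onto a small data structure. The sketch below uses hypothetical names and a simple threshold rule in the spirit of the reactive scaling Ming describes; it is not Ming's algorithm.

# Hypothetical sketch of a per-microservice resource allocation: a queue of jobs
# plus an instance (replica) count, with a simple threshold-based scaling check.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class MicroserviceAllocation:
    name: str
    instances: int = 1                                # number of instances running the microservice
    job_queue: deque = field(default_factory=deque)   # pending jobs for the microservice

    def enqueue(self, job_id: str) -> None:
        self.job_queue.append(job_id)

    def desired_instances(self, jobs_per_instance: int = 5) -> int:
        # Reactive rule: scale out when the queue exceeds a per-instance threshold.
        needed = -(-len(self.job_queue) // jobs_per_instance)  # ceiling division
        return max(self.instances, needed, 1)

alloc = MicroserviceAllocation("checkout")
for i in range(12):
    alloc.enqueue(f"job-{i}")
print(alloc.desired_instances())  # 3 instances for 12 queued jobs at 5 jobs/instance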
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of ABEER in view of Doshi, further in view of Huizenga, with Ming's teachings of microservice elastic scaling that monitors resources, workloads and queues ([on Page 5, under the title "5. Design of microservice elastic scaling" and section 5.1, lines 1-14]). One would have been motivated to do so in order to monitor and determine the resource allocation and the workload in an efficient manner.

Regarding claim 10. Claim 10 incorporates substantively all the limitations of claim 2 in a device form and is rejected under the same rationale. Furthermore, regarding the limitations of device and readable storage, the prior art of record Bahl teaches these in Para. [0028] and [0030].

Claims 4, 5, 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over ABEER in view of Doshi, further in view of Huizenga, and further in view of Di Balsamo et al., U.S. Pub. No. 2015/0277987 A1 (hereinafter Di).

Regarding claim 4. ABEER in view of Doshi, further in view of Huizenga, teaches the method according to claim 1. ABEER in view of Doshi, further in view of Huizenga, does not explicitly teach wherein a job is processed by at least one microservice and has an associated deadline. However, Di teaches wherein a job is processed by at least one microservice and has an associated deadline (Di teaches in Para. [0057] that resource provisioning provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment per SLA (i.e., note that here providing a resource in a cloud computing environment is the claimed "at least one microservice")…; further, Di teaches in Para. [0060] that the workload (i.e., job) plan 403 may include job information including job start times and job deadlines). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of ABEER in view of Doshi, further in view of Huizenga, with Di's teachings of providing computing resources and a workload plan with job start times and job completion ([0057] and [0060]). One would have been motivated to do so because the resource pool is modified to bring a resource pool parameter within a resource range in response to determining that the job forecast exceeds a job deadline, which effectively increases the efficiency of a job scheduling system.

Regarding claim 5. Di further teaches wherein the reward is assigned if the job is completed before the associated deadline (Di teaches in Para. [0087] that the job deadline may be a time limit for one or more jobs to be completed (i.e., a reward which indicates a function of a completion time). As described above, the job deadline may be information contained in the workload plan...; the one or more jobs represented by the job forecast may not be completed before the job deadline, but with additional resources the one or more jobs may be completed earlier and may result in a job completion date prior to the job deadline (i.e., before the associated deadline)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of ABEER in view of Doshi, further in view of Huizenga, with Di's teachings of equally dividing the allocation schedule into time slots of equal size and of completing one or more jobs earlier, resulting in a job completion date prior to the job deadline ([0060] and [0087]). One would have been motivated to do so because, to prevent the job forecast from exceeding the job deadline, more resources may be allocated to the resource pool in an efficient manner (Di, [0087]).

Regarding claim 12. Claim 12 incorporates substantively all the limitations of claim 4 in a device form and is rejected under the same rationale. Furthermore, regarding the limitations of device and readable storage, the prior art of record Bahl teaches these in Para. [0028] and [0030].

Regarding claim 13. Claim 13 incorporates substantively all the limitations of claim 5 in a device form and is rejected under the same rationale. Furthermore, regarding the limitations of device and readable storage, the prior art of record Bahl teaches these in Para. [0028] and [0030].

Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over ABEER in view of Doshi, further in view of Huizenga, further in view of Di, and further in view of Nagpal et al., U.S. Pat. No. 10,089,144 B1 (hereinafter Nagpal the first).

Regarding claim 6. ABEER in view of Doshi, further in view of Huizenga, further in view of Di teaches the method of claim 4. ABEER in view of Doshi, further in view of Huizenga, further in view of Di does not explicitly teach wherein the reward is a function of a completion time of the job and the associated deadline. However, Nagpal the first teaches wherein the reward is a function of a completion time of the job and the associated deadline (Nagpal the first teaches in [Col. 11, lines 51-67 and Col. 12, lines 1-9], as shown in FIG. 4, a technique to perform likelihood calculations 406 over an arbitrary number of resource dimensions (e.g., CPU units, network I/O, storage space, etc.)... A set of static scheduling operations (step 408) uses a job specification database 412 to retrieve a time-to-finish value for each job (finish deadline time 410) to perform a static schedule of the particular job at hand for each resource dimension. Jobs that have a calculated relatively lower probability to complete by their respective assigned finish deadline times are assigned a respective weighting factor that is given in relationship to the calculated relatively lower probability. Such a weighting factor assigned to a job serves to influence the reward value that is calculated (i.e., reward function) in subsequent steps. Specifically, and as shown, when all of the then-current jobs have been statically scheduled (step 408), and all of the then-current jobs have been assigned a weight that is commensurate to the probability to finish by respective finish deadline times (step 414), then the set of jobs can be rescheduled such that the highest-reward jobs (reward calculation step 418 that calculates reward values based on the increased weights) are scheduled first (e.g., in a greedy fashion) over the set of then-current resources (step 420)).
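Claims 5 and 6 make the reward depend on whether, and by how much, a job finishes before its deadline, and Nagpal the first additionally weights jobs by their likelihood of finishing on time. One hypothetical way to write such a reward function follows; the exact form and weighting are assumptions, not the references' formulas.

# Hypothetical deadline-based reward: a bonus for finishing before the deadline,
# scaled by how early the job finishes, and weighted more heavily for jobs that
# were unlikely to finish on time (loosely echoing the cited weighting factor).

def deadline_reward(completion_time: float,
                    deadline: float,
                    prob_finish_on_time: float = 1.0) -> float:
    weight = 1.0 + (1.0 - prob_finish_on_time)    # at-risk jobs influence the reward more
    if completion_time <= deadline:
        slack = deadline - completion_time        # reward grows with earliness
        return weight * (1.0 + slack / max(deadline, 1e-9))
    return -weight                                # penalty for missing the deadline

print(deadline_reward(completion_time=40.0, deadline=60.0, prob_finish_on_time=0.6))
# 1.4 * (1 + 20/60), roughly 1.87
print(deadline_reward(completion_time=70.0, deadline=60.0))  # -1.0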
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify ABEER in view of Doshi, further in view of Huizenga, further in view of Di, with Nagpal the first's teachings of a set of static scheduling operations (step 408) that uses a job specification database 412 to retrieve a time-to-finish value for each job (finish deadline time 410), performs a static schedule of the particular job at hand for each resource dimension, and assigns a weighting factor to a job that serves to influence the reward value calculated (i.e., reward function) in subsequent steps ([Col. 11, lines 51-67 and Col. 12, lines 1-9]). One would have been motivated to do so since this method enables avoiding overloading a computing system with administrative tasks, thus avoiding situations where users might experience the effects of an overloaded system, and enables prioritizing jobs such that performance objectives are optimized to achieve service level agreement (SLA) parameters.

Regarding claim 14. Claim 14 incorporates substantively all the limitations of claim 6 in a device form and is rejected under the same rationale. Furthermore, regarding the limitations of device and readable storage, the prior art of record Bahl teaches these in Para. [0028] and [0030].

Claims 7, 8, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over ABEER in view of Doshi, further in view of Huizenga, and further in view of Nagpal et al., U.S. Pub. No. 2018/0136958 A1 (hereinafter Nagpal).

Regarding claim 7. ABEER in view of Doshi, further in view of Huizenga, teaches the method according to claim 1. ABEER in view of Doshi, further in view of Huizenga, does not explicitly teach wherein information representing historical resource allocation is collected for a period of time. However, Nagpal teaches wherein information representing historical resource allocation is collected for a period of time (Nagpal teaches in Para. [0025] that, based on historical resource usage data collected over time and/or a prediction of resources to be available at some time in the future, the assessment of past resource usage may be based on a designated period of time, e.g., the past day, week, month, or year). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of ABEER in view of Doshi with Nagpal's teachings of collecting historical and predicted resources over a period of time ([0025]). One would have been motivated to do so because the storage resources of the various physical host machines are virtualized into one global logically-combined storage pool that is high in reliability, availability, and performance; significant performance advantages can be gained by allowing the virtualization system to access and utilize local storage; and degradation of performance can be minimized.

Regarding claim 8. ABEER in view of Doshi teaches the method according to claim 1. ABEER in view of Doshi does not explicitly teach wherein the period of time is a function of an estimated value of the delay for allocating resources. However, Nagpal teaches wherein the period of time is a function of an estimated value of the delay for allocating resources (Nagpal teaches in Para. [0032] that the host machine's actual or predicted (i.e., estimated) response time may also include the time delay in sending or receiving data over network 140 if the storage device utilized by the host machine is connected to the host machine over network 140 (e.g., cloud storage 126 or networked storage 128); Nagpal further teaches in Para. [0046] that this is based on historical resource usage data collected over time and/or a prediction of resources to be available at some time in the future, and the available resources may represent resources available at a point in time or a prediction of available resources at some time in the future). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of ABEER in view of Doshi, further in view of Huizenga, with Nagpal's teachings of predicting response time to include a delay based on historical resource usage ([0032] and [0046]). One would have been motivated to do so in order to contemplate any suitable manner of determining available resources of host machines.
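Claims 7 and 8 concern collecting historical resource-allocation data over a period of time whose length is a function of the estimated allocation delay. A hypothetical sketch of that relationship follows; the window-scaling factor and names are assumptions, not Nagpal's method.

# Hypothetical sketch: size the historical-collection window as a function of the
# estimated resource-allocation delay, then keep only samples inside that window.

from typing import List, Tuple

def collection_window_s(estimated_allocation_delay_s: float, factor: float = 100.0) -> float:
    # Longer allocation delays call for a longer look-back window of usage history.
    return factor * estimated_allocation_delay_s

def recent_usage(samples: List[Tuple[float, float]], now_s: float, window_s: float):
    # samples: (timestamp_s, cpu_usage) pairs; keep those inside the window.
    return [(t, u) for (t, u) in samples if now_s - t <= window_s]

history = [(0.0, 0.4), (500.0, 0.7), (1100.0, 0.9)]
window = collection_window_s(estimated_allocation_delay_s=8.0)   # 800 s look-back window
print(recent_usage(history, now_s=1200.0, window_s=window))       # [(500.0, 0.7), (1100.0, 0.9)]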
[0032] the host machine's predicted actual or predicted (i.e., estimated) response time may also include the time delay in sending or receiving data over network 140 if the storage device utilized by the host machine is connected to the host machine over network 140 (e.g., cloud storage 126 or networked storage 128) and further, teaches in Para. [0046] based on historical resource usage data collected over time and/or a prediction of resources to be available at some time in the future. The available resources may represent resources available at a point in time or a prediction of available resources at some time in the future). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of predicting response time to include a delay based on historical resource usage ([0032] and [0046]) as taught, by Nagpal into the teachings of ABEER in view of Doshi further in view of Huizenga invention. One would have been motivated to do so in order to contemplate any suitable manner of determining available resources of host machines. Regarding claim 15. Claim 15 incorporates substantively all the limitation of claim 7 in a device form and is rejected under the same rationale. Furthermore, regarding the limitations of device and readable storage, the prior art of record Bahl teaches in Para. [0028] and [0030]. Regarding claim 16. Claim 16 incorporates substantively all the limitation of claim 8 in a device form and is rejected under the same rationale. Furthermore, regarding the limitations of device and readable storage, the prior art of record Bahl teaches in Para. [0028] and [0030]. Response to Arguments Section 103 rejection Applicant argues that the proposed ABEER-Doshi combination fails to disclose, teach or suggest each element of claims and discusses amended independent claim 1 as an example, they do not teach “wherein the delay is based at least in part on a boot time for allocating and configuring the increased resources”. (Remarks. Pages. 8-9). In response to the above Applicant’s argument, the Examiner respectfully disagrees because the Examiner relied prior art of record expressly teaches each element of the claimed limitations except for the newly amended portion “wherein the delay is based at least in part on a boot time for allocating and configuring the increased resources”. However, the Examiner has introduced a new prior art of record in view of Huizenga as indicated above to teach the amended portion of the limitation in the independent claim 1. Therefore, the above argument does not apply to the combination of the references being used in the current rejection and the argument is not persuasive in view of the new prior art of record. Similarly the above response to the argument for independent claim 1 applies to the independent claims 9 and 17. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to BERHANU SHITAYEWOLDETSADIK whose telephone number is (571)270-7142. The examiner can normally be reached M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emmanuel Moise can be reached at 5712723865. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BERHANU SHITAYEWOLDETSADIK/
Examiner, Art Unit 2455

Prosecution Timeline

Mar 25, 2024: Application Filed
May 17, 2025: Non-Final Rejection — §103
Aug 21, 2025: Response Filed
Oct 30, 2025: Final Rejection — §103
Jan 05, 2026: Response after Non-Final Action
Feb 04, 2026: Request for Continued Examination
Feb 09, 2026: Response after Non-Final Action
Mar 02, 2026: Examiner Interview (Telephonic)
Mar 07, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602246
MANAGEMENT AND ORCHESTRATION OF MICROSERVICES
2y 5m to grant; granted Apr 14, 2026

Patent 12591446
CONFIGURING VIRTUALIZATION SYSTEM IMAGES FOR A COMPUTING CLUSTER
2y 5m to grant; granted Mar 31, 2026

Patent 12585489
USING PNICS TO PERFORM FIREWALL OPERATIONS
2y 5m to grant; granted Mar 24, 2026

Patent 12574443
SYSTEM AND METHOD FOR USE OF REMOTE PROCEDURE CALL WITH A MICROSERVICES ENVIRONMENT
2y 5m to grant; granted Mar 10, 2026

Patent 12556921
GATEWAY FUNCTION REAUTHENTICATION
2y 5m to grant; granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 84%
With Interview: 99% (+24.5%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 377 resolved cases by this examiner. Grant probability derived from career allow rate.
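Since the grant probability is stated to be derived from the career allow rate, the headline figure can be reproduced directly from the counts shown above; the interview adjustment is the dashboard's own model and is not recomputed here. A quick check, assuming simple division:

# Reproducing the career allow rate from the counts shown on this page.
granted, resolved = 318, 377
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")   # 84.4%, displayed as 84%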
