Prosecution Insights
Last updated: April 19, 2026
Application No. 17/660,145

SYSTEM AND METHOD OF ADAPTATIVE SCALABLE MICROSERVICE

Status: Final Rejection (§103)
Filed: Apr 21, 2022
Examiner: LI, HARRISON
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 4 (Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 82% (9 granted / 11 resolved; +26.8% vs TC avg) — above average
Interview Lift: +38.9% (strong), measured across resolved cases with interview
Typical Timeline: 2y 9m average prosecution
Currently Pending: 37 applications
Career History: 48 total applications across all art units

Statute-Specific Performance

§101: 20.5% (-19.5% vs TC avg)
§102: 6.9% (-33.1% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 11 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending.

Response to Arguments

Regarding the prior art rejections: Applicant's amendments and arguments have been considered and are found to be persuasive; however, they are moot due to a new ground of rejection necessitated by amendment.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-5, 8-9, 11-12, 14-15, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lepcha et al. (US 20180181390 A1) in view of Botelho (US 20190243682 A1), in view of Aronovich et al. (US 20200167204 A1), and in further view of Vijayarangan et al. (US 20150248313 A1). Lepcha, Botelho, and Aronovich were cited in a prior office action.
Regarding claim 1, Lepcha teaches the invention substantially as claimed, including:

A method, comprising: receiving, by a root actor, a new message to be processed ([0012] the scheduling platform may receive information that identifies a set of tasks, associated with a microservices application, to be executed; [0013] a scheduling platform (e.g., a server device); Examiner notes: the scheduling platform operating on the server device acts as the root actor performing the steps for task resource allocation);

analyzing, by the root actor, a load factor regarding the new message (Fig. 1A, 110, 120; Fig. 1B, 100; [0046] receiving information identifying a set of tasks (i.e., workload), associated with a microservices application (i.e., actors), to be executed (block 410); [0053] In this way, scheduling platform 220 may receive information that identifies tasks to be scheduled for execution, and may determine an execution time of the tasks based on the model and parameters associated with the tasks (i.e., analyzing a load factor)) and dispatching, by the root actor, a workload of the new message to one or more actors in a data storage platform (Fig. 4, 450: Provision a network device to execute the set of tasks… performed by scheduling platform; [0027] Cloud computing environment 222 may provide … storage … services);

applying one or more criteria to an output of the load factor analyzing (Fig. 1C, 140; [0049] scheduling platform 220 may determine a threshold (e.g., a threshold amount of time) based on the SLA; [0068] scheduling platform 220 may determine whether the execution time satisfies a threshold);

based on the applying of a criterion from the one or more criteria, determining whether or not the one or more actors are able to perform the workload within a threshold (Fig. 1D, 150; [0069] if the execution time does not satisfy the threshold (block 430—NO), then process 400 may include adjusting a number of instances, of a microservice, of the microservices application (block 440));

when it is determined that none of the one or more actors are able to perform the workload within the threshold wait time, spawning an additional actor (Fig. 1D; [0018] As shown in FIG. 1D, and by reference number 150, the scheduling platform may selectively adjust a number of instances, of a microservice, based on the execution time. For example, as shown, the scheduling platform may adjust a number of instances of microservice 1 (e.g., increase from 15 to 30); Examiner notes: adjusting a number of instances to increase the resources performing the work involves adding at minimum one additional instance);

load balancing the workload across a group that includes both the one or more actors and the additional actor that has been spawned ([0019] more subtasks, associated with microservice 1, may execute in parallel (i.e., load balanced across 30 instances vs. 15 instances), thereby decreasing an execution time associated with the set of tasks and thereby conserving processor and/or memory resources of network devices and/or scheduling platform);

and executing the workload by the group that includes both the one or more actors and the additional actor based on the load balancing (Fig. 4: Adjust a number of instances -> Determine an execution time of the set of tasks; Examiner notes: there is an execution time following adjustment, which indicates execution of tasks by the adjusted number of instances).

Lepcha does not explicitly teach wherein each actor of the one or more actors includes a queue for messages to be serially performed by each actor.
However, Botelho teaches wherein each actor of the one or more actors includes a queue for messages to be serially performed by each actor (Fig. 4C: Job Queues associated with different data storage nodes; [0109] The job queue may comprise a FIFO and be implemented using a memory, such as an SRAM or a DRAM; [0150] The job queue may comprise a FIFO in which the oldest job added to the job queue is processed first by the first node. The oldest job in the job queue may correspond with the head of the job queue).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Botelho's node-associated job queues with the system of Lepcha. A person of ordinary skill in the art would have been motivated to make this combination to provide Lepcha's system with the advantage of monitoring task load for each microservice and improving load balancing between microservices (see Botelho [0023]: Technology is described for improving the real-time performance of a distributed job scheduler by reducing polling delay via job self-scheduling and improving load balancing).

Lepcha and Botelho do not explicitly teach determining whether or not the one or more actors are able to perform the workload within a threshold wait time that the workload has to wait before the workload starts being processed; and when it is determined that none of the one or more actors are able to perform the workload within the threshold wait time, spawning an additional actor.

However, Aronovich teaches determining whether or not the one or more actors are able to perform the workload within a threshold wait time that the workload has to wait before the workload starts being processed, and when it is determined that none of the one or more actors are able to perform the workload within the threshold wait time, spawning an additional actor ([0049] waiting state for more than requested wait duration threshold, etc. In certain embodiments, a workload 400.sub.i may be eligible to offload to allocatable host system 104 resources if the workload has been waiting for more than the requested wait threshold duration 506 for the workload class 404; Examiner notes: when a task is waiting too long (i.e., the current resources are unable to process it in a threshold time), additional resources are used to process the overwaiting task).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Aronovich's wait-threshold offloading with the system of Lepcha and Botelho. A person of ordinary skill in the art would have been motivated to make this combination to provide Lepcha and Botelho's system with the advantage of scaling resource provisioning to meet workload demands and reduce task wait times (see Aronovich [0002]: Cloud bursting is the operation of offloading workloads from local hosts to remote cloud hosts. When workload resource demand exceeds a capacity of resources in local host systems in a cluster, additional cloud hosts are requested from a service provider providing cloud computing resources to provision and add to the cluster to meet the resource demand. When there is excess capacity in allocated cloud hosts, this excess capacity is returned to the cloud providers).

Lepcha, Botelho, and Aronovich do not explicitly teach measuring queue performance based on metrics: performing a load analysis by measuring individual actor queue performance based on a queue depth, a latency of message processing per queue, and an average processing time for each queue.
However, Vijayarangan teaches performing a load analysis by measuring individual actor queue performance ([0004] In order to comply with the SLA, the service provider may have to effectively analyze supply and demand of the computing resources, such that the jobs are executed within the pre-defined time limit set as per the SLA; [0006] a system for determining a total processing time (T) for executing a plurality of jobs (n) is disclosed … The receiving module may be configured to receive the plurality of jobs (n), a mean processing time (μ), and a queue length (k). In one aspect, the mean processing time (μ) may be indicative of average time required for executing a job of the plurality of jobs (n). Further, the queue length (k) may be indicative of a maximum number of jobs capable of being executed by a single computing resource in a predefined time period; job time analysis in [0006] and [0007]) based on a queue depth ([0006] the queue length (k) may be indicative of a maximum number of jobs capable of being executed by a single computing resource in a predefined time period), a latency of message processing per queue ([0006] total processing time (T)), and an average processing time for each queue ([0006] the mean processing time (μ) may be indicative of average time required for executing a job of the plurality of jobs (n)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Vijayarangan's analysis of job queue load with the existing system. A person of ordinary skill in the art would have been motivated to make this combination to provide the resulting system with the advantage of correcting resource allocations for adherence to customer service level agreements (see Vijayarangan [0004]: In certain conditions, processing time required for execution of the first few set of jobs may vary from a predefined threshold time period as per the SLA. Since there is variance in the execution of the first few set of jobs, it becomes a technical challenge to effectively plan and allocate the computing resources for the execution of the remaining set of jobs, such that the overall jobs including the first few set of jobs and the remaining set of jobs are executed in the total processing time as per the SLA).

Regarding claim 2, Lepcha, Botelho, Aronovich, and Vijayarangan teach the method of claim 1. Lepcha further teaches wherein the criterion comprises an acceptable wait time ([0017] an overall execution time of the set of tasks may be 2 hours and 10 minutes (e.g., an execution time that does not satisfy the threshold of 2 hours associated with the SLA)).

Regarding claim 4, Lepcha, Botelho, Aronovich, and Vijayarangan teach the method of claim 1. Lepcha further teaches wherein one or more of the actors comprises a microservice, or an instance of a microservice ([0018] the scheduling platform may selectively adjust a number of instances, of a microservice).

Regarding claim 5, Lepcha, Botelho, Aronovich, and Vijayarangan teach the method as recited in claim 1, wherein the spawning and load balancing operations are performed automatically based on the applying of the criterion (Fig. 4, steps 430-440; [0008] FIG. 4 is a flow chart of an example process for automatically adjusting a number of instances of a microservice based on an execution time of a set of tasks).

Regarding claim 8, Lepcha, Botelho, Aronovich, and Vijayarangan teach the method of claim 1.
Lepcha further teaches wherein the method is performed automatically in response to a detected increase in the workload ([0018] the scheduling platform may selectively adjust a number of instances, of a microservice, based on the execution time; [0053] scheduling platform 220 may determine an execution time based on a regression model [image not reproduced]; Examiner notes: increasing the number of tasks causes the execution time to increase. The system responds by increasing microservice instances to better handle the increase in workload).

Regarding claim 9, Lepcha, Botelho, Aronovich, and Vijayarangan teach the method of claim 1. Lepcha further teaches wherein the number of actors automatically varies as a function of a size of the workload ([0018] the scheduling platform may selectively adjust a number of instances, of a microservice, based on the execution time; [0053] scheduling platform 220 may determine an execution time based on a regression model [image not reproduced]; Examiner notes: increasing the number of tasks causes the execution time to increase. The system responds by increasing microservice instances to better handle the increase in workload).

Regarding claim 11, it is the non-transitory storage medium of claim 1 and is therefore rejected for the same reasons as claim 1. Lepcha further teaches a non-transitory storage medium having stored therein instructions that are executable by one or more hardware processors to perform operations ([0003] a non-transitory computer-readable medium may store one or more instructions that, when executed by one or more processors of a device, cause the one or more processors to …).

Regarding claims 12, 14-15, and 18-19, they are the non-transitory storage media of claims 2, 4-5, and 8-9 respectively, and are rejected for the same reasons as claims 2, 4-5, and 8-9 respectively.
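The method the examiner maps across Lepcha, Botelho, and Aronovich above — a root actor that analyzes load, checks a wait-time threshold, spawns an additional actor when no existing actor can meet it, and load-balances across the resulting pool — can be sketched roughly as follows. This is an editorial illustration only: every name (Actor, RootActor, WAIT_THRESHOLD) is invented for the sketch and is not drawn from the application or any cited reference.

```python
from dataclasses import dataclass, field
from collections import deque

WAIT_THRESHOLD = 5.0  # assumed: max seconds a workload may wait before processing


@dataclass
class Actor:
    """An actor with a FIFO queue of messages processed serially (cf. Botelho)."""
    queue: deque = field(default_factory=deque)
    avg_processing_time: float = 1.0  # seconds per message, assumed known

    def expected_wait(self) -> float:
        # Queue depth times average per-message time approximates how long
        # a newly dispatched workload would wait in this actor's queue.
        return len(self.queue) * self.avg_processing_time


@dataclass
class RootActor:
    actors: list  # pool of worker actors

    def dispatch(self, message) -> None:
        # Analyze the load factor: find the least-loaded actor.
        best = min(self.actors, key=Actor.expected_wait)
        # Apply the criterion: can any actor start within the wait threshold?
        if best.expected_wait() > WAIT_THRESHOLD:
            # No actor meets the threshold wait time: spawn an additional actor.
            best = Actor()
            self.actors.append(best)
        # Load-balance by enqueuing on the chosen (possibly newly spawned) actor.
        best.queue.append(message)


root = RootActor(actors=[Actor()])
for i in range(10):
    root.dispatch(f"msg-{i}")
print(len(root.actors))  # prints 2: the pool grew once the wait exceeded the threshold
```

The spawn-on-threshold branch corresponds to the examiner's reading of Lepcha [0018] (increasing instances from 15 to 30) combined with Aronovich's wait-duration threshold.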
Claims 3, 6, 7, 10, 13, 16, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lepcha et al. (US 20180181390 A1) in view of Botelho (US 20190243682 A1), in view of Aronovich et al. (US 20200167204 A1), in view of Vijayarangan et al. (US 20150248313 A1), and in view of Muehge et al. (US 20200174892 A1).

Regarding claim 3, Lepcha, Botelho, Aronovich, and Vijayarangan teach the method of claim 1. Lepcha does not explicitly teach wherein the workload comprises servicing copy discovery notifications received from one or more hosts. However, Muehge teaches wherein the workload comprises servicing copy discovery notifications received from one or more hosts ([0087] backup requests received). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Muehge's backup requesting system with the dynamic microservice adjustment system of Lepcha. A person of ordinary skill in the art would have been motivated to make this combination to improve the resource allocation in a data backup storage system by introducing awareness of the number of backups that the system is assigned to perform (Muehge [0001] The present invention relates to resource allocation in a backup environment, and more specifically, this invention relates to resource allocation for backup operations performed on a data storage system; [0087] determining that the number of backup requests received is more than the data storage system can process at one time).

Regarding claim 6, Lepcha, Botelho, Aronovich, and Vijayarangan teach the method of claim 1. Lepcha does not explicitly teach wherein determining whether or not any additional actors are needed comprises measuring a queue performance of one or more of the actors. However, Muehge further teaches measuring a queue performance of one or more of the actors ([0057] the measured actual wait time information may indicate how long the backup request has been in the backup queue waiting to be fulfilled). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined Muehge's measuring of queue wait time with the dynamic microservice adjustment system of Lepcha. A person of ordinary skill in the art would have been motivated to make this combination to improve the resource allocation in a data backup storage system by introducing awareness of the timeliness of the backups that the system is assigned to perform, and to adjust resources according to how long the backup requests are in the queue (Muehge [0001] The present invention relates to resource allocation in a backup environment, and more specifically, this invention relates to resource allocation for backup operations performed on a data storage system; [0057] Measured actual wait time information may specify how long an associated backup request has been waiting to be fulfilled, e.g., as determined by a server of the data storage system).

Regarding claim 7, Lepcha, Botelho, Aronovich, Vijayarangan, and Muehge teach the method of claim 6. Muehge further teaches wherein measuring the queue performance of one of the actors comprises determining, for that actor, a queue depth ([0017] the ordering of the backup requests within the backup queue), and a latency of copy discovery notification processing for the queue whose depth has been determined ([0057] the measured actual wait time information may indicate how long the backup request has been in the backup queue waiting to be fulfilled).

Regarding claim 10, Lepcha, Botelho, Aronovich, and Vijayarangan teach the method of claim 1.
Lepcha does not explicitly teach wherein the load factor comprises a number of copy discovery notifications incoming to the data storage platform. However, Muehge teaches wherein the load factor comprises a number of copy discovery notifications incoming to the data storage platform ([0087] determining that the number of backup requests received is more than the data storage system can process at one time …).

Regarding claims 13, 16-17, and 20, they are the non-transitory storage media of claims 3, 6-7, and 10 respectively, and are rejected for the same reasons as claims 3, 6-7, and 10 respectively.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRISON LI, whose telephone number is (703) 756-1469. The examiner can normally be reached Monday-Friday, 9:30am-6pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee Li, can be reached at 571-272-4169.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/H.L./
Examiner, Art Unit 2195

/Aimee Li/
Supervisory Patent Examiner, Art Unit 2195
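Both rejection grounds turn on the same queue-performance analysis: the claim 1 limitation mapped to Vijayarangan (queue depth, per-queue latency, average processing time) and the claim 6-7 wait-time measurement mapped to Muehge. That analysis can be sketched as follows; the metric names and the simple "no queue can meet the threshold" test are illustrative assumptions for this sketch, not the application's actual algorithm or any reference's disclosure.

```python
from dataclasses import dataclass


@dataclass
class QueueStats:
    """Per-actor queue measurements (hypothetical field names)."""
    depth: int                    # messages currently waiting (queue depth)
    total_processing_time: float  # summed processing time of completed messages, s
    processed: int                # messages completed so far

    @property
    def avg_processing_time(self) -> float:
        # Mean processing time per message, akin to Vijayarangan's mu.
        return self.total_processing_time / self.processed if self.processed else 0.0

    def expected_wait(self) -> float:
        # Estimated latency for a new message: depth times mean time per message.
        return self.depth * self.avg_processing_time


def needs_additional_actor(stats: list[QueueStats], wait_threshold: float) -> bool:
    # Spawn only when no existing queue can start the work within the threshold
    # (the condition the examiner reads onto Aronovich's wait-duration check).
    return all(s.expected_wait() > wait_threshold for s in stats)


stats = [QueueStats(depth=8, total_processing_time=40.0, processed=20),
         QueueStats(depth=6, total_processing_time=30.0, processed=10)]
print(needs_additional_actor(stats, wait_threshold=10.0))  # prints True
```

Here both queues would make a new message wait 16 s and 18 s respectively, so a 10-second threshold triggers scaling, while a looser 20-second threshold would not.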

Prosecution Timeline

Apr 21, 2022: Application Filed
Nov 25, 2024: Non-Final Rejection (§103)
Mar 05, 2025: Response Filed
Apr 21, 2025: Final Rejection (§103)
Jun 18, 2025: Interview Requested
Jul 23, 2025: Request for Continued Examination
Jul 27, 2025: Response after Non-Final Action
Aug 22, 2025: Non-Final Rejection (§103)
Nov 17, 2025: Examiner Interview Summary
Nov 17, 2025: Applicant Interview (Telephonic)
Nov 24, 2025: Response Filed
Feb 23, 2026: Final Rejection (§103) — current

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547428: PAGE TRANSITION DETECTION USING SCREEN OPERATION HISTORY (granted Feb 10, 2026; 2y 5m to grant)
Patent 12517737: METHODS FOR DYNAMICALLY GENERATING GENERATIVE OPERATING SYSTEMS BASED ON HARDWARE AND SOFTWARE ENVIRONMENT FEATURE (granted Jan 06, 2026; 2y 5m to grant)
Patent 12379971: RELIABILITY-AWARE RESOURCE ALLOCATION METHOD AND APPARATUS IN DISAGGREGATED DATA CENTERS (granted Aug 05, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 82%
With Interview: 99% (+38.9%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 11 resolved cases by this examiner. Grant probability derived from career allow rate.
