Prosecution Insights
Last updated: April 19, 2026
Application No. 17/693,002

DYNAMIC MANAGEMENT OF VERSION DEPENDENCIES FOR EXECUTING PARALLEL WORKLOADS

Status: Final Rejection (§101, §103, §112)

Filed: Mar 11, 2022
Examiner: YUN, CARINA
Art Unit: 2194
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Rubrik Inc.
OA Round: 4 (Final)

Grant Probability: 50% (Moderate)
OA Rounds: 5-6
To Grant: 4y 7m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 160 granted / 322 resolved; -5.3% vs TC avg)
Interview Lift: +33.5% (strong), measured across resolved cases with vs. without an interview
Typical Timeline: 4y 7m avg prosecution; 25 applications currently pending
Career History: 347 total applications across all art units

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 8.6% (-31.4% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 322 resolved cases.
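The headline figures above can be checked against the stated counts. A quick illustrative calculation (the per-interview case counts are not shown in the report, so the with-interview rate of 83% is taken as given and the precise underlying value of 83.2% is an assumption made here so the arithmetic matches the stated +33.5% lift):

```python
# Career allow rate from the stated counts: 160 granted out of 322 resolved.
granted, resolved = 160, 322
allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 49.7%, displayed as 50%

# Interview lift: the report states an ~83% grant probability with an
# interview; the lift is the gap above the career baseline rate.
with_interview = 83.2  # assumed underlying value; the report rounds to 83%
lift = with_interview - allow_rate
print(f"Interview lift: +{lift:.1f} points")  # +33.5 points
```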

Office Action

Rejections: §101, §103, §112
DETAILED ACTION

Authorization for Internet Communications

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): “Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file.” Please note that the above statement can only be submitted via Central Fax, regular postal mail, or EFS-Web (PTO/SB/439).

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner Notes

The examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. Appropriate correction is required.
Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-19 and 21 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 10, and 19 recite “maintaining state for each of the plurality of computing nodes during sequentially updating” and “maintaining version consistency between the parent job and child jobs,” which are not disclosed in applicant’s specification. The examiner notified applicant that these terms were not in the specification in an interview dated 11/6/2025, and applicant acknowledged that these terms were not in the specification.
The examiner suggested that the claims be aligned with the terms in the specification. Claims 2-9, 11-18, and 21 are rejected based on their dependency from claims 1, 10, and 19 above.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-19 and 21 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 10, and 19 recite “based on the version state information,” and it is unclear how “based on” applies to the scheduling: on what version state does the workload scheduler base its job allocations? The examiner notified applicant that these terms were unclear in an interview dated 11/6/2025, and applicant acknowledged that these terms were unclear. The examiner suggested that the claims be clarified using language from applicant’s specification. Claims 2-9, 11-18, and 21 are rejected based on their dependency from claims 1, 10, and 19 above.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-19 and 21 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Regarding claim 1, this part of the eligibility analysis evaluates whether the claim falls within any statutory category. MPEP §2106.03. The claim recites a method; thus, the claim is directed to a method, which is one of the statutory categories of invention.

Step 2A, Prong 1: This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04(II) and the October 2019 Update, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. The limitations “sequentially updating...from a first code version to a second code version,” “maintaining version state information for each of the plurality of computing nodes during the sequential updating,” “schedules, according to a first job allocation configuration based on the version state information, a first job allocation for a first computing node of the plurality of computing nodes, wherein the first computing node is scheduled to perform the parent job, and wherein the first job allocation configuration prevents scheduling the parent job on a computing node running a newer code version than a set of computing nodes running the set of child jobs until the set of child jobs terminates,” and “schedules, according to a second job allocation configuration, a second job allocation for a second computing node of the plurality of computing nodes, wherein the second computing node is scheduled to perform a child job of the set of child jobs, and wherein the second job allocation configuration prevents scheduling the child job on computing nodes having a newer code version than the first computing node performing the parent job,” as drafted, recite functions that, under their broadest reasonable interpretation, could reasonably be performed in the mind, including with the aid of pen and paper, but for the recitation of generic computer components.
That is, the limitations as drafted are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas. See MPEP §2106.04(a)(2). Accordingly, claim 1 recites a judicial exception (i.e., an abstract idea).

Step 2A, Prong 2: This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. 2019 PEG Section III(A)(2), 84 Fed. Reg. at 54-55. The claim recites the following additional elements: “the plurality of computing nodes of the computing cluster,” “executing, by one or more processors and while the plurality of computing nodes are being sequentially updated, a workload scheduler,” and “database instances running on a plurality of computing nodes.” Each is recited at a high level of generality (i.e., generic nodes, processor, database) such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claim is therefore directed to the judicial exception. See MPEP 2106.05(f).
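As an aside, the two job allocation configurations recited in claim 1 describe a concrete ordering constraint during a rolling upgrade. The following is a minimal illustrative sketch of that constraint only; all names are hypothetical and this is not taken from the application:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    code_version: int  # bumped as the rolling upgrade reaches this node

def can_run_parent(parent_node: Node, child_nodes: list[Node],
                   children_active: bool) -> bool:
    # First job allocation configuration: the parent job may not be scheduled
    # on a node running a newer code version than the nodes running its child
    # jobs until those child jobs terminate.
    if not children_active:
        return True
    return all(parent_node.code_version <= c.code_version for c in child_nodes)

def can_run_child(child_node: Node, parent_node: Node) -> bool:
    # Second job allocation configuration: a child job may not be scheduled on
    # a node with a newer code version than the node running the parent job.
    return child_node.code_version <= parent_node.code_version

# During a rolling upgrade from v1 to v2, node-a has been upgraded first.
upgraded = Node("node-a", 2)
legacy1 = Node("node-b", 1)
legacy2 = Node("node-c", 1)

print(can_run_parent(upgraded, [legacy1, legacy2], children_active=True))  # False
print(can_run_child(upgraded, legacy1))  # False: child node newer than parent's
print(can_run_child(legacy2, legacy1))   # True: same code version
```

The §101 dispute is whether constraints like these amount to more than a mental process when performed by generic computing components; the sketch simply makes the recited version comparisons explicit.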
The additional element “receiving a workload related to performing a backup of data from a data source to one or more database instances running on a plurality of computing nodes of a computing cluster, the workload comprising a plurality of parallelized jobs to be executed on the plurality of computing nodes, the plurality of parallelized jobs comprising a parent job and a plurality of child jobs, and wherein the parent job is configured to control a set of execution parameters for a set of child jobs of the plurality of parallelized jobs” is merely data gathering, which the courts have identified as well-understood, routine, and conventional. See MPEP 2106.05(d). Accordingly, this additional element does not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception.

The additional element “performing the backup of data from the data source to the one or more database instances by executing the parent job and the plurality of child jobs during the sequentially updating while maintaining version consistency between the parent job and child jobs” does not amount to a practical application because it fails to meaningfully limit the claim: it does not require any particular application of the recited “backup of data” and “executing parent job and the plurality of child jobs,” and it is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).

Step 2B: This part of the eligibility analysis evaluates whether the claim as a whole amounts to significantly more than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim. MPEP 2106.05.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements “the plurality of computing nodes of the computing cluster,” “one or more processors,” and “database instances running on a plurality of computing nodes” are merely generic computer components used to apply the judicial exception, which cannot provide an inventive concept. The claims include the additional element “receiving a workload related to performing a backup of data from a data source to one or more database instances running on a plurality of computing nodes of a computing cluster, the workload comprising a plurality of parallelized jobs to be executed on the plurality of computing nodes, the plurality of parallelized jobs comprising a parent job and a plurality of child jobs, and wherein the parent job is configured to control a set of execution parameters for a set of child jobs of the plurality of parallelized jobs,” which is not sufficient to amount to significantly more than the judicial exception because it is essentially data gathering and applying a method for execution. Under Step 2B, the courts have identified data gathering as well-understood, routine, and conventional. See MPEP 2106.05(d). The additional element “performing the backup of data from the data source to the one or more database instances by executing the parent job and the plurality of child jobs during the sequentially updating while maintaining version consistency between the parent job and child jobs” does not amount to an inventive concept because it fails to meaningfully limit the claim: it does not require any particular application of the recited “backup of data” and “executing parent job and the plurality of child jobs,” and it is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).

Claim 2 is a dependent claim rejected for the same reasons as claim 1.
Furthermore, the claim includes the additional element “further comprising: executing, by the one or more processors, the workload scheduler that schedules, according to the second job allocation configuration, the child job on a third computing node of the plurality of computing nodes based at least in part on the second computing node that is scheduled to perform the child job being upgraded to the newer code version than the first computing node scheduled to perform the parent job, wherein the third computing node has a same code version as the first computing node,” which does not integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea, because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to a mental process of scheduling the child job and is an abstract idea. The additional elements of a processor and computing nodes are generic computing components used to perform the judicial exception and are neither a practical application nor an inventive concept.

Claim 3 is a dependent claim rejected for the same reasons as claim 1.
Furthermore, the claim includes the additional element “further comprising: migrating, according to the second job allocation configuration, the child job from the second computing node to a third computing node that has the newer code version as the first computing node based at least in part on the first computing node that is scheduled to perform the parent job being upgraded to the newer code version than the second computing node scheduled to perform the child job of the set of child jobs, wherein the second job allocation configuration prevents scheduling the child job on computing nodes having the newer code version than the first computing node,” which does not amount to a practical application or an inventive concept because it fails to meaningfully limit the claim: it does not require any particular application of the recited “migrating,” and it is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).

Claim 4 is a dependent claim rejected for the same reasons as claim 1. Furthermore, the claim includes the additional element “further comprising: terminating, according to the second job allocation configuration, the child job on the second computing node based at least in part on the first computing node scheduled to perform the parent job being upgraded,” which does not amount to a practical application or an inventive concept because it fails to meaningfully limit the claim: it does not require any particular application of the recited “terminating,” and it is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).

Claim 5 is a dependent claim rejected for the same reasons as claim 1.
Furthermore, the claim includes the additional element “further comprising: executing, by the one or more processors, the workload scheduler that schedules, according to the first job allocation configuration, a second job allocation for a third computing node of the plurality of computing nodes based at least in part on interruption to the parent job during sequential updating of the plurality of computing nodes, wherein the third computing node is scheduled to perform the parent job, and wherein the first job allocation configuration prevents scheduling additional child jobs until active child jobs of the set of child jobs are terminated,” which does not integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea, because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to a mental process of scheduling a job allocation and is an abstract idea. The additional elements of a processor and computing nodes are generic computing components used to perform the judicial exception and are neither a practical application nor an inventive concept.

Claim 6 is a dependent claim rejected for the same reasons as claim 1.
Furthermore, the claim includes the additional element “further comprising: executing, by the one or more processors, the workload scheduler that schedules, according to the first job allocation configuration, the second job allocation for a third computing node of the plurality of computing nodes based at least in part on an interruption to the parent job during sequential updating of the plurality of computing nodes, wherein the third computing node is scheduled to perform the parent job, and wherein the first job allocation configuration prevents scheduling a child job callback until active child jobs of the set of child jobs are terminated,” which does not integrate the abstract idea into a practical application, nor amount to significantly more than the abstract idea, because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to a mental process of scheduling a job allocation and is an abstract idea. The additional elements of a processor and computing nodes are generic computing components used to perform the judicial exception and are neither a practical application nor an inventive concept.

Claim 7 is a dependent claim rejected for the same reasons as claim 1. Furthermore, the claim includes the additional element “wherein the first job allocation configuration and the second job allocation configuration configure the parent job and the set of child jobs to run on a same code version,” which does not amount to a practical application or an inventive concept because it fails to meaningfully limit the claim: it does not require any particular application of the recited “configuration,” and it is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).

Claim 8 is a dependent claim rejected for the same reasons as claim 1.
Furthermore, the claim includes the additional element “wherein the second job allocation configuration prevents scheduling one or more queued child jobs on a computing node running a different code version than a computing node running the parent job,” which does not amount to a practical application or an inventive concept because it fails to meaningfully limit the claim: it does not require any particular application of the recited “configuration,” and it is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).

Claim 9 is a dependent claim rejected for the same reasons as claim 1. Furthermore, the claim includes the additional element “wherein the plurality of parallelized jobs comprises a plurality of subtasks of a workload,” which does not amount to a practical application or an inventive concept because the additional element recites computer instructions; these are merely instructions to implement an abstract idea on a computer. MPEP 2106.04(d).

Claim 21 is a dependent claim rejected for the same reasons as claim 1. Furthermore, the claim includes the additional element “wherein executing the workload scheduler results in the parent job and the plurality of child jobs of the parent job being executed on computing nodes having a same code version, further comprising: storing data from the data source to the one or more database instances running on the plurality of computing nodes of the computing cluster in accordance with the parent job and the plurality of child jobs being executed on computing nodes having the same code version,” which does not amount to a practical application or an inventive concept because it fails to meaningfully limit the claim: it does not require any particular application of the recited “execution,” and it is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f).
Claim 10 is rejected for the same reasons as claim 1. In particular, the claim recites two additional elements, a processor and memory. The processor and memory are recited at a high level of generality (i.e., as generic components) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Claims 11-18 are dependent claims rejected for the same reasons as claims 2-9. Claim 19 is an independent medium claim rejected for the same reasons as claim 1. In particular, the claim recites the additional elements of a storage medium and a processor. The medium and processor are recited at a high level of generality (i.e., as generic components) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea. Claim 20 is a dependent claim rejected for the same reasons as claim 2.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 4, 8, 9, 10, 13, 17, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Miyazawa (US PG PUB 2007/0106695) in view of Mattheis (US PG PUB 2014/0317636) and Vaidya et al. (US PG PUB 2013/0238768).

Regarding claim 1, Miyazawa teaches a method for managing code version dependencies, comprising: the plurality of parallelized jobs (see ¶[0103] “FIG. 11 is a flowchart depicting processes executing tasks in parallel, according to a new version of job flow definition information according to the embodiment. If a given task is made capable of executing in parallel on a plurality of task processing apparatuses, i.e., personal computer, multi function peripheral, or the like, it becomes possible to reduce time taken to execute the task, even for tasks that demand large amounts of processing. For example, processing time for processes such as compressing a large graphics file or the like can be reduced by processing the graphics file with a plurality of tasks (task processing apparatuses) in parallel.”) comprising a parent job and a plurality of child jobs, and wherein the parent job is configured to control a set of execution parameters for a set of child jobs of the plurality of parallelized jobs (see ¶[0112] “In Step S1602, a parent node task is sequentially extracted from job flow, and the display position of the task's icon is decided in Step S1603. In Step S1604, a determination is made whether the parent node task has a child node task or not. In the event that the parent node task does not have a child node task, processing proceeds to Step S1605, displaying a task icon for the relevant parent node in a first format. 
If, on the other hand, the parent node task does have a child node task, processing proceeds to Step S1606, displaying a task icon for the relevant parent node in a second format.”); sequentially updating the plurality of computing nodes of computing cluster from a first code version to a second code version while the computing cluster remains active (see ¶ [0063] “It is permissible for different tasks to possess multiple property information, depending on the task in question. "Order" dictates the sequence in which partitioned tasks are processed. For example, if Task2 were subdivided in a manner such as that shown in FIG. 5, order values would be assigned to describe an order that would made the order of processing proceed from Task2-1 to Task2-2 (see FIG. 13B)” see ¶[0082] “Beginning in Step S803, management server 11 compares tasks within the job flow definition information with tasks defined by updated task interface information (the task interface information received in Step S711). In this comparison, a determination is made as to whether the task being examined corresponds to a new version task, and whether there are changes in child node composition between the task being examined and the new version task. For example, "taskA" of task interface information in FIG. 13B and "task1" in the job flow definition information in FIG. 14A are corresponding tasks, because they both have id=0002 and application id=0001. Post-update taskA in FIG. 13B has two child nodes, task_A_1 and task_A_2, whereas job flow definition information in FIG. 14A has no child nodes. In other words, there are changes in child nodes. In the event that child node compositions have changed, processing proceed to Step S804.” See ¶[0083] “] In Step S804, management server 11 updates relevant tasks within the job flow definition information, according to updated task child nodes. In the foregoing example of FIGS. 14A and 13B, the child nodes in FIG. 
13B, task_A_1 and task_A_2, have been added to task1 in FIG. 14A. In Step S805, management server 11 migrates property information to child nodes added in Step S804. That is, property information appended to each of the subdivided tasks is described in the updated task interface information. Then, in Step S806, an update flag is set for the task that was updated in Steps S804 and S805. To be more specific, an update flag (update="true") is set for the parent node task that has had updates to child nodes, as indicated by description 1411 in FIG. 14B.”), maintaining version state information for each of the plurality of computing nodes during the sequentially updating (see ¶[0082] “In this comparison, a determination is made as to whether the task being examined corresponds to a new version task, and whether there are changes in child node composition between the task being examined and the new version task.” See ¶[0094] “Descriptions 1324 and 1326 describe property information of tasks in descriptions 1323 and 1325. It is permissible for different tasks to possess multiple property information, depending on the task in question.” Note: Version information is a property of the task and is maintained in the descriptions). Miyazawa teaches while the plurality of computing nodes are being sequentially updated (see ¶ [0063] “It is permissible for different tasks to possess multiple property information, depending on the task in question. "Order" dictates the sequence in which partitioned tasks are processed. For example, if Task2 were subdivided in a manner such as that shown in FIG. 5, order values would be assigned to describe an order that would made the order of processing proceed from Task2-1 to Task2-2 (see FIG. 
13B)”) and based on the version state information (see ¶[0082] “In this comparison, a determination is made as to whether the task being examined corresponds to a new version task, and whether there are changes in child node composition between the task being examined and the new version task.” See ¶[0094] “Descriptions 1324 and 1326 describe property information of tasks in descriptions 1323 and 1325. It is permissible for different tasks to possess multiple property information, depending on the task in question.”) but does not expressly disclose the remaining limitations; however, Mattheis teaches the workload comprising a plurality of parallelized jobs to be executed on the plurality of computing nodes (see ¶[0046] “A runtime environment is provided to allocate resources such as processors and data structures and to provide a task interface. A parallel task runtime environment can be provided for parallel execution of dynamic multi-tasking computations.” See ¶[0047] “For that purpose, the task runtime environment TRE can create as many worker threads as processors can be used and pins each worker thread to exactly one processor.”), executing, by one or more processors, a workload scheduler that schedules, according to a first job allocation configuration, a first job allocation for a first computing node of the plurality of computing nodes (see ¶[0046] “The scheduling of dynamic multi-tasking computations requires scheduling steps including processor mapping and execution ordering. Besides the mechanism to map the tasks to processors and to determine the execution order, a scheduler implementation requires a mechanism for resource allocation. A runtime environment is provided to allocate resources such as processors and data structures and to provide a task interface. 
A parallel task runtime environment can be provided for parallel execution of dynamic multi-tasking computations.”), wherein the first computing node is scheduled to perform the parent job, and wherein the first job allocation configuration prevents scheduling the parent job on a computing node running a newer code version than a set of computing nodes running the set of child jobs until the set of child jobs terminates (see ¶[0051] “The library function spawn takes the lambda function as an argument and generates a new task. After having generated the task, the library function executes the generated task parallel to the current task. The function sync waits until all generated child tasks have been finished. By means of the scheduler, the tasks are distributed during runtime to the different processor cores.” Note: This means the parent job is prevented from running until the child jobs have finished; a new task is considered a new version); and executing, by the one or more processors, a workload scheduler that schedules, according to a second job allocation configuration, a second job allocation for a second computing node of the plurality of computing nodes (see ¶[0046] “The scheduling of dynamic multi-tasking computations requires scheduling steps including processor mapping and execution ordering. Besides the mechanism to map the tasks to processors and to determine the execution order, a scheduler implementation requires a mechanism for resource allocation. A runtime environment is provided to allocate resources such as processors and data structures and to provide a task interface. 
A parallel task runtime environment can be provided for parallel execution of dynamic multi-tasking computations.”), wherein the second computing node is scheduled to perform a child job of the set of child jobs, and wherein the second job allocation configuration prevents scheduling the child job on computing nodes having a newer code version than the first computing node performing the parent job (see ¶[0051] “The library function spawn takes the lambda function as an argument and generates a new task. After having generated the task, the library function executes the generated task parallel to the current task. The function sync waits until all generated child tasks have been finished. By means of the scheduler, the tasks are distributed during runtime to the different processor cores.” Note: This means the parent job is prevented from being run until the child job has finished; a new task is considered a new version). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa by adapting Mattheis for scheduling of tasks of a parallel computing system with several processor cores to increase the performance or throughput of the computing system (see ¶[0004] of Mattheis). Miyazawa and Mattheis do not expressly disclose, however, Vaidya teaches receiving a workload related to performing a backup of data from a data source to one or more database instances running on a plurality of computing nodes of a computing cluster (see ¶[0148] “In another embodiment, multiple nodes may be a master node. 
In some cases, one node may be a backup master node to another node.”), performing the backup of data from the data source to the one or more database instances by executing the parent job and the plurality of child jobs during the sequentially updating while maintaining version consistency between the parent job and child jobs (see ¶[0141] “Any of the sites and appliances of the environment may be arranged, configured or deployed in any type and form of hierarchical or parent, child and/or peer relationship. Any one appliance or site may be a peer to another appliance or site. For example, appliance 200A may be a peer to appliance 200B for providing GSLB domain resolution services. Any one appliance or site may be a parent node of another appliance or site. For example, appliance 200A at Site A may be a parent site or appliance to appliance 200D of Site D. Any one appliance or site may be a child node of another appliance or site. For example appliance 200F at Site F may be a child node to Site B and appliance B.” see ¶ [0121] “The parser 530 may be designed, configured or adapted to translate a configuration of one format or version (e.g., compatible with one appliance) to a configuration of another format or version (e.g., compatible with another appliance).” See ¶[0149] “In some embodiments, the distributor may download, upload or file transfer a configuration file to an appliance. In other embodiments, the distributor may email a configuration to a computing device or appliance. In some embodiments, the distributor makes remote procedure calls, such as remote shell calls from one appliance to another appliance to distribute the configuration. In another embodiments, the distributor may write configuration to any type and form of computer readable medium. In another embodiments, the configuration is distributed via a connection and a protocol supported by the appliances, such as the Metric Exchange Protocol (MEP) described below. 
The distributor may distribute configuration via a secure call, command or connection, such as for example, a secure SSH, a secure copy SCP or a secure file transfer protocol (SFTP).”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa and Mattheis by adapting Vaidya to update configurations, or synchronize them with another configuration (see ¶ [0004] of Vaidya). Regarding claim 4, Miyazawa does not expressly disclose, however, Mattheis teaches terminating, according to the second job allocation configuration, the child job on the second computing node based at least in part on the first computing node scheduled to perform the parent job being upgraded (see ¶[0051] “The library function spawn takes the lambda function as an argument and generates a new task. After having generated the task, the library function executes the generated task parallel to the current task. The function sync waits until all generated child tasks have been finished. By means of the scheduler, the tasks are distributed during runtime to the different processor cores.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa by adapting Mattheis for scheduling of tasks of a parallel computing system with several processor cores to increase the performance or throughput of the computing system (see ¶[0004] of Mattheis). Regarding claim 8, Miyazawa does not expressly disclose, however, Mattheis teaches wherein the second job allocation configuration prevents scheduling one or more queued child jobs on a computing node running a different code version than a computing node running the parent job (see ¶[0051] “The library function spawn takes the lambda function as an argument and generates a new task. 
After having generated the task, the library function executes the generated task parallel to the current task. The function sync waits until all generated child tasks have been finished. By means of the scheduler, the tasks are distributed during runtime to the different processor cores.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa by adapting Mattheis for scheduling of tasks of a parallel computing system with several processor cores to increase the performance or throughput of the computing system (see ¶[0004] of Mattheis). Regarding claim 9, Miyazawa does not expressly disclose, however, Mattheis teaches wherein the plurality of parallelized jobs comprises a plurality of subtasks of a workload (see ¶[0051] “The library function spawn takes the lambda function as an argument and generates a new task. After having generated the task, the library function executes the generated task parallel to the current task. The function sync waits until all generated child tasks have been finished. By means of the scheduler, the tasks are distributed during runtime to the different processor cores.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa by adapting Mattheis for scheduling of tasks of a parallel computing system with several processor cores to increase the performance or throughput of the computing system (see ¶[0004] of Mattheis). Claim 10 is an independent apparatus claim corresponding to method claim 1 above and is rejected for the same reasons. In addition, Miyazawa teaches a processor (see ¶[0130] CPU); and memory coupled with the processor (see ¶[0133] memory); and instructions stored in memory and executable by the processor (see ¶[0132]). 
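The spawn/sync mechanism cited from Mattheis ¶[0051] for claims 1, 4, 8, and 9 can be sketched in a few lines. This is an illustrative sketch only; the class and method names are hypothetical and are not drawn from Mattheis, Miyazawa, or the claims.

```python
from concurrent.futures import ThreadPoolExecutor

class TaskRuntime:
    """Hypothetical sketch of the spawn/sync pattern described in Mattheis ¶[0051]."""

    def __init__(self, workers=4):
        # The runtime environment allocates resources (here, worker threads).
        self._pool = ThreadPoolExecutor(max_workers=workers)
        self._children = []

    def spawn(self, fn, *args):
        # Generate a new child task and execute it parallel to the current task.
        future = self._pool.submit(fn, *args)
        self._children.append(future)
        return future

    def sync(self):
        # Wait until all generated child tasks have finished; the parent
        # cannot proceed until every child terminates.
        results = [f.result() for f in self._children]
        self._children.clear()
        return results

runtime = TaskRuntime()
for i in range(3):
    runtime.spawn(lambda x: x * x, i)
assert sorted(runtime.sync()) == [0, 1, 4]  # parent resumes only after children finish
```

The sync barrier is the feature the rejection maps onto the claimed constraint that the parent job not proceed until the set of child jobs terminates.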
Claims 13, 17, and 18 correspond to claims 3, 8, and 9 above and are rejected for the same reasons. Claim 19 is an independent medium claim corresponding to method claim 1 above and is rejected for the same reasons. In addition, Miyazawa teaches a non-transitory medium storing code (see ¶[0131]). Claim(s) 2, 3, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Miyazawa (US PG PUB 2007/0106695) in view of Mattheis (U.S. PG PUB 2014/0317636) and Vaidya et al. (U.S. PG PUB 2013/0238768), as applied in claims 1, 10 above, further in view of Joshi et al. (U.S. PG PUB 2005/0267951). Regarding claim 2, Miyazawa teaches the child job, the parent job (see ¶ [0083] “In Step S804, management server 11 updates relevant tasks within the job flow definition information, according to updated task child nodes. In the foregoing example of FIGS. 14A and 13B, the child nodes in FIG. 13B, task_A_1 and task_A_2, have been added to task1 in FIG. 14A. In Step S805, management server 11 migrates property information to child nodes added in Step S804. That is, property information appended to each of the subdivided tasks is described in the updated task interface information. Then, in Step S806, an update flag is set for the task that was updated in Steps S804 and S805. To be more specific, an update flag (update="true") is set for the parent node task that has had updates to child nodes, as indicated by description 1411 in FIG. 14B.”). Miyazawa does not expressly disclose, however, Mattheis teaches executing, by the one or more processors, the workload scheduler that schedules, according to the second job allocation configuration (see ¶[0046] “The scheduling of dynamic multi-tasking computations requires scheduling steps including processor mapping and execution ordering. Besides the mechanism to map the tasks to processors and to determine the execution order, a scheduler implementation requires a mechanism for resource allocation. 
A runtime environment is provided to allocate resources such as processors and data structures and to provide a task interface. A parallel task runtime environment can be provided for parallel execution of dynamic multi-tasking computations.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa by adapting Mattheis for scheduling of tasks of a parallel computing system with several processor cores to increase the performance or throughput of the computing system (see ¶[0004] of Mattheis). Miyazawa, Mattheis, and Vaidya do not expressly disclose, however, Joshi teaches the job on a third computing node of the plurality of computing nodes based at least in part on the second computing node that is scheduled to perform the job being upgraded to the newer code version than the first computing node scheduled to perform the job (see ¶ [0005] “In one embodiment of the invention, a system and methods are provided for facilitating a rolling upgrade of distributed software from a relatively older version to a relatively newer version. In this embodiment, multiple versions of the software can operate on different nodes, and the rolling upgrade may take any amount of time to complete (e.g., hours, days, months, years).”), wherein the third computing node has a same code version as the first computing node (see ¶[0021] “However, all group nodes operate the software at a common level, termed the Acting Version (AV). The AV of the software is a version that can be supported by each node in the cluster.” See ¶ [0022] “Any node whose SV.gtoreq.AV (i.e., the node's software version is higher or newer than the acting version) will operate the software according to the AV, not its SV. Thus, it may continue to support functionality, data formats and other characteristics of the AV, and disable or suppress functionality provided in the SV that is not supported in the AV.”). 
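The Acting Version (AV) rule quoted from Joshi ¶[0021]-[0022] reduces to a simple invariant, sketched below. The function names are illustrative assumptions, not Joshi's.

```python
def acting_version(software_versions):
    # The AV is a version that can be supported by each node in the cluster:
    # the lowest software version (SV) present during the rolling upgrade.
    return min(software_versions)

def effective_version(sv, av):
    # Any node whose SV >= AV operates the software according to the AV,
    # suppressing functionality of its newer SV that the AV does not support.
    return av if sv >= av else sv

# Cluster mid-upgrade from version 2 to version 3: every node acts at version 2.
cluster = [2, 3, 3, 2]
av = acting_version(cluster)
assert av == 2
assert all(effective_version(sv, av) == 2 for sv in cluster)
```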
Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa, Mattheis, and Vaidya by adapting Joshi to avoid conflicts when performing rolling updates (see ¶ [0022] of Joshi). Regarding claim 3, Miyazawa teaches migrating, according to the second job allocation configuration, the child job from the second computing node to a third computing node (see ¶ [0083] “In Step S804, management server 11 updates relevant tasks within the job flow definition information, according to updated task child nodes. In the foregoing example of FIGS. 14A and 13B, the child nodes in FIG. 13B, task_A_1 and task_A_2, have been added to task1 in FIG. 14A. In Step S805, management server 11 migrates property information to child nodes added in Step S804. That is, property information appended to each of the subdivided tasks is described in the updated task interface information. Then, in Step S806, an update flag is set for the task that was updated in Steps S804 and S805. To be more specific, an update flag (update="true") is set for the parent node task that has had updates to child nodes, as indicated by description 1411 in FIG. 14B.”); the child job, the parent job (see ¶ [0083] “In Step S804, management server 11 updates relevant tasks within the job flow definition information, according to updated task child nodes. In the foregoing example of FIGS. 14A and 13B, the child nodes in FIG. 13B, task_A_1 and task_A_2, have been added to task1 in FIG. 14A. In Step S805, management server 11 migrates property information to child nodes added in Step S804. That is, property information appended to each of the subdivided tasks is described in the updated task interface information. Then, in Step S806, an update flag is set for the task that was updated in Steps S804 and S805. 
To be more specific, an update flag (update="true") is set for the parent node task that has had updates to child nodes, as indicated by description 1411 in FIG. 14B.”). Miyazawa does not expressly disclose, however, Mattheis teaches executing, by the one or more processors, the workload scheduler that schedules, according to the second job allocation configuration (see ¶[0046] “The scheduling of dynamic multi-tasking computations requires scheduling steps including processor mapping and execution ordering. Besides the mechanism to map the tasks to processors and to determine the execution order, a scheduler implementation requires a mechanism for resource allocation. A runtime environment is provided to allocate resources such as processors and data structures and to provide a task interface. A parallel task runtime environment can be provided for parallel execution of dynamic multi-tasking computations.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa by adapting Mattheis for scheduling of tasks of a parallel computing system with several processor cores to increase the performance or throughput of the computing system (see ¶[0004] of Mattheis). Miyazawa, Mattheis, and Vaidya do not expressly disclose, however, Joshi teaches a third computing node that has the newer code version as the upgraded first computing node based at least in part on the first computing node that is scheduled to perform the job (see ¶ [0005] “In one embodiment of the invention, a system and methods are provided for facilitating a rolling upgrade of distributed software from a relatively older version to a relatively newer version. 
In this embodiment, multiple versions of the software can operate on different nodes, and the rolling upgrade may take any amount of time to complete (e.g., hours, days, months, years).”) being upgraded to the newer code version than the second computing node scheduled to perform the job of the set of jobs, wherein the second job allocation configuration prevents scheduling the job on computing nodes having the newer code version than the first computing node (see ¶[0021] “However, all group nodes operate the software at a common level, termed the Acting Version (AV). The AV of the software is a version that can be supported by each node in the cluster.” See ¶ [0022] “Any node whose SV.gtoreq.AV (i.e., the node's software version is higher or newer than the acting version) will operate the software according to the AV, not its SV. Thus, it may continue to support functionality, data formats and other characteristics of the AV, and disable or suppress functionality provided in the SV that is not supported in the AV.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa, Mattheis, and Vaidya by adapting Joshi to avoid conflicts when performing rolling updates (see ¶ [0022] of Joshi). Claim 11 corresponds to claim 2 above and is rejected for the same reasons. Claim 12 corresponds to claim 3 above and is rejected for the same reasons. Claim(s) 7, 16, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Miyazawa (US PG PUB 2007/0106695) in view of Mattheis (U.S. PG PUB 2014/0317636) and Vaidya et al. (U.S. PG PUB 2013/0238768), as applied in claims 1, 10 above, further in view of Deblaquiere et al. (U.S. PG PUB 2006/0184927). 
Regarding claim 7, Miyazawa, Mattheis, and Vaidya do not expressly disclose, however, Deblaquiere teaches wherein the first job allocation configuration and the second job allocation configuration configure the parent job and the set of child jobs to run on a same code version (see ¶[0080] “In stage 504, update service 104 may identify one or more of the software updates as a parent update. For example, update module 206 may consider the update that was requested as the parent update. Update module 206 may also analyze the configuration of the installed software products and determine that one or more parent updates are required. For example, update module 206 may query update database 216 to determine if there are updates available for the installed software products. Update module 206 may then query dependency data 212 and determine the respective dependencies between any of the available updates. Based on these dependencies, update module 206 may then determine that one or more parent updates should be selected. Processing then flows to stage 506.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa, Mattheis, and Vaidya by adapting Deblaquiere for determining which software updates from various software vendors are available, determining which of those updates are applicable to a given user or enterprise, and installing selected updates on the user's or enterprise's system (see ¶ [0008] of Deblaquiere). Claim 16 corresponds to claim 7 above and is rejected for the same reasons. 
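The same-code-version constraint at issue in claims 7 and 16 can be expressed as a one-line scheduling filter. The sketch below is purely illustrative; the node-to-version mapping and function name are assumptions, not taken from Deblaquiere or the claims.

```python
def same_version_nodes(node_versions, parent_node):
    # Keep only nodes whose code version matches that of the node running
    # the parent job, so parent and child jobs run on the same code version.
    parent_version = node_versions[parent_node]
    return {n for n, v in node_versions.items() if v == parent_version}

# Mid-rolling-upgrade cluster: n3 has already been upgraded to version 3.
cluster = {"n1": 2, "n2": 2, "n3": 3}
assert same_version_nodes(cluster, "n1") == {"n1", "n2"}
assert same_version_nodes(cluster, "n3") == {"n3"}
```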
Regarding claim 21, Miyazawa, Mattheis, and Vaidya do not expressly disclose, however, Deblaquiere teaches wherein executing the workload scheduler results in the parent job and the plurality of child jobs of the parent job being executed on computing nodes having a same code version, further comprising: storing data from the data source to the one or more database instances running on the plurality of computing nodes of the computing cluster in accordance with the parent job and the plurality of child jobs being executed on computing nodes having the same code version (see ¶[0068] “Update module 206 may make entries into update database 216 to include the URL or network location of servers for software vendors 102 that can provide the update, to store the software update itself, the file format of the software update, and the installation process. Also, a URL to a description about the software update, such as the problems that the update fixes or features added may be stored in update database 216. Of course, one skilled in the art will recognize that either software vendors 102 or update service 104 may specify the entries that are made into update database 216. Processing may then flow to stage 402.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa, Mattheis, and Vaidya by adapting Deblaquiere for determining which software updates from various software vendors are available, determining which of those updates are applicable to a given user or enterprise, and installing selected updates on the user's or enterprise's system (see ¶ [0008] of Deblaquiere). Claim(s) 5, 6, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Miyazawa (US PG PUB 2007/0106695) in view of Mattheis (U.S. PG PUB 2014/0317636) and Vaidya et al. (U.S. PG PUB 2013/0238768), as applied in claims 1, 10 above, further in view of Kagami et al. (U.S. 
PG PUB 2016/0299795). Regarding claim 5, Mattheis teaches wherein the third computing node is scheduled to perform the parent job, and wherein the first job allocation configuration prevents scheduling additional child jobs until active child jobs of the set of child jobs are terminated (see ¶[0051] “The library function spawn takes the lambda function as an argument and generates a new task. After having generated the task, the library function executes the generated task parallel to the current task. The function sync waits until all generated child tasks have been finished. By means of the scheduler, the tasks are distributed during runtime to the different processor cores.”). Miyazawa, Mattheis, and Vaidya do not expressly disclose, however, Kagami teaches further comprising: executing, by the one or more processors, the workload scheduler that schedules, according to the first job allocation configuration, a second job allocation for a third computing node of the plurality of computing nodes based at least in part on interruption to the parent job during sequential updating of the plurality of computing nodes (see ¶[0012] “As described above, the efficiency of the use of nodes in a parallel computing system may be increased by performing job scheduling involving job interruptions. However, if a schedule plan involving transfer of data on a job between nodes is considered, a problem arises in the accuracy in the estimation of transfer period. In the case where there is a node located between a transfer source node and a transfer destination node, a transfer period may greatly vary due to other jobs running on the node and communication of the node for the other jobs. Therefore, if a transfer period is estimated on the basis of static information such as hardware performance, it is likely that there causes a big error between the estimated transfer period and the actual transfer period.”). 
Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa, Mattheis, and Vaidya by adapting Kagami to allocate computing resources, which are specified by “node” and “time”, to the plurality of jobs, thereby determining which nodes and what time to execute each job (see ¶ [0004] of Kagami). Regarding claim 6, Mattheis teaches wherein the first job allocation configuration prevents scheduling a child job callback until active child jobs of the set of child jobs are terminated (see ¶[0051] “The library function spawn takes the lambda function as an argument and generates a new task. After having generated the task, the library function executes the generated task parallel to the current task. The function sync waits until all generated child tasks have been finished. By means of the scheduler, the tasks are distributed during runtime to the different processor cores.”). Miyazawa, Mattheis, and Vaidya do not expressly disclose, however, Kagami teaches further comprising: executing, by the one or more processors, the workload scheduler that schedules, according to the first job allocation configuration, the second job allocation for a third computing node of the plurality of computing nodes based at least in part on an interruption to the parent job during sequential updating of the plurality of computing nodes, wherein the third computing node is scheduled to perform the parent job (see ¶[0012] “As described above, the efficiency of the use of nodes in a parallel computing system may be increased by performing job scheduling involving job interruptions. However, if a schedule plan involving transfer of data on a job between nodes is considered, a problem arises in the accuracy in the estimation of transfer period. 
In the case where there is a node located between a transfer source node and a transfer destination node, a transfer period may greatly vary due to other jobs running on the node and communication of the node for the other jobs. Therefore, if a transfer period is estimated on the basis of static information such as hardware performance, it is likely that there causes a big error between the estimated transfer period and the actual transfer period.”). Hence, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify the teachings of Miyazawa, Mattheis, and Vaidya by adapting Kagami to allocate computing resources, which are specified by “node” and “time”, to the plurality of jobs, thereby determining which nodes and what time to execute each job (see ¶ [0004] of Kagami). Claims 14 and 15 correspond to claims 5 and 6 above and are rejected for the same reasons. Response to Arguments Applicant's arguments filed 11/20/2025 have been fully considered but they are not persuasive. Regarding the §101 rejections, Applicant argues the claims are patent eligible because they do not recite a judicial exception, reasoning that the claim limitations cannot be performed in the mind because the human mind is not equipped to execute parent and child jobs on multiple computing nodes of a computing cluster and back up data from a data source to one or more database instances. For example, the invention relates to real-time monitoring of code versions across multiple distributed computing nodes during active rolling upgrades, dynamic coordination of parent-child job relationships while maintaining version consistency across a distributed cluster, concurrent execution of backup operations while nodes are being sequentially upgraded from one code version to another, and automated prevention of metadata corruption through version-aware job scheduling algorithms. 
Such operations require specialized distributed computing capabilities and clearly cannot practically be performed in the human mind, even with pen and paper assistance. Applicant further argues the recited features integrate any alleged judicial exception into a practical application because they improve the functioning of a computer or improve another technology or technical field of performing data backups during rolling upgrades and mitigating risk of forward and backward compatibility, etc. Examiner disagrees. The claimed limitations recite an abstract idea, because the limitations, as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas. See MPEP §2106.04(a)(2). For example, scheduling a first job allocation and scheduling a second job allocation can be done in one’s mind with the aid of pen and paper. The “computing nodes,” “computing cluster,” and “parallelized jobs” mentioned in the claims are generic computing components and thus are not significantly more than the abstract idea itself. These additional elements are merely instructions to implement an abstract idea on a computer. MPEP 2106.04(d). The “real-time monitoring of code versions across multiple distributed computing nodes during active rolling upgrades” that Applicant cites is not recited in the claims. The claims mention receiving a workload related to performing a backup of data, etc., which is considered a form of data gathering that the courts have ruled to be well-known, routine, and conventional. 
The “dynamic coordination of parent-child job relationships while maintaining version consistency across a distributed cluster, concurrent execution of backup operations while nodes are being sequentially upgraded from one code version to another, and automated prevention of metadata corruption through version-aware job scheduling algorithms” that Applicant cites is not explicitly recited in the claims; the claims recite “maintaining version state information,” which can be done in one’s mind, as a person can maintain version state information on a piece of paper. In addition, the claims recite a backup of data from one source to another. This step does not amount to a practical application or an inventive concept, because it fails to meaningfully limit the claim: it does not require any particular application of the recited “backup of data” and “executing parent job and the plurality of child jobs,” and it is at best the equivalent of merely adding the words “apply it” to the judicial exception. See MPEP 2106.05(f). Regarding the §103 rejections, Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. In addition, the examiner has cited new art; thus, the arguments do not apply. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Parkison et al. (U.S. PG PUB 2014/0006354) teaches techniques for executing a cloud command for a distributed filesystem. Two or more cloud controllers collectively manage distributed filesystem data that is stored in one or more cloud storage systems; the cloud controllers ensure data consistency for the stored data, and each cloud controller caches portions of the distributed filesystem. 
During operation, a cloud controller presents a distributed-filesystem-specific capability to a client system as a file in the distributed filesystem (e.g., using a file abstraction). Upon receiving a request from the client system to access and/or operate upon this file, the client controller executes an associated cloud command. More specifically, the cloud controller initiates a specially-defined operation that accesses additional functionality for the distributed filesystem that exceeds the scope of individual reads and writes to a typical data file. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARINA YUN whose telephone number is (571)270-7848. The examiner can normally be reached Mon, Tues, Thurs, 9-4 (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to call. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kevin Young can be reached on (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. Carina Yun Patent Examiner Art Unit 2194 /CARINA YUN/Examiner, Art Unit 2194 /KEVIN L YOUNG/Supervisory Patent Examiner, Art Unit 2194

Prosecution Timeline

Mar 11, 2022
Application Filed
Oct 15, 2024
Non-Final Rejection — §101, §103, §112
Jan 21, 2025
Response Filed
Feb 10, 2025
Final Rejection — §101, §103, §112
Apr 09, 2025
Examiner Interview Summary
Apr 09, 2025
Applicant Interview (Telephonic)
May 14, 2025
Request for Continued Examination
May 22, 2025
Response after Non-Final Action
Aug 18, 2025
Non-Final Rejection — §101, §103, §112
Nov 03, 2025
Applicant Interview (Telephonic)
Nov 03, 2025
Examiner Interview Summary
Nov 20, 2025
Response Filed
Jan 12, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578996
ADAPTIVE HIGH-PERFORMANCE TASK DISTRIBUTION FOR MANAGING COMPUTING RESOURCES ON CLOUD
2y 5m to grant Granted Mar 17, 2026
Patent 12572398
CONSOLE COMMAND COMPOSITION
2y 5m to grant Granted Mar 10, 2026
Patent 12554562
INTERSYSTEM PROCESSING EMPLOYING BUFFER SUMMARY GROUPS
2y 5m to grant Granted Feb 17, 2026
Patent 12498996
HYBRID PAGINATION FOR RETRIEVING DATA
2y 5m to grant Granted Dec 16, 2025
Patent 12474974
SYSTEMS AND METHODS FOR POWER MANAGEMENT FOR MODERN WORKSPACES
2y 5m to grant Granted Nov 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
50%
Grant Probability
83%
With Interview (+33.5%)
4y 7m
Median Time to Grant
High
PTA Risk
Based on 322 resolved cases by this examiner. Grant probability derived from career allow rate.
