Prosecution Insights
Last updated: April 19, 2026
Application No. 17/544,976

COMPUTER-READABLE RECORDING MEDIUM STORING PROGRAM, COMPUTER, AND LEARNING METHOD USING PERFORMANCE ALLOCATION

Final Rejection (§101, §103, §112)
Filed: Dec 08, 2021
Examiner: ZECHER, CORDELIA P K
Art Unit: 2100
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fujitsu Limited
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 8m
Grant Probability with Interview: 76%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 253 granted / 509 resolved; -5.3% vs TC avg)
Interview Lift: +25.8% (strong lift, comparing resolved cases with an interview against those without)
Avg Prosecution: 3y 8m (typical timeline); 287 applications currently pending
Total Applications: 796 (career history, across all art units)
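The headline figures above can be reproduced from the career counts shown. A quick sketch (the numbers come from this dashboard; treating interview lift as the with-interview allowance rate minus the career allowance rate is an assumption about the provider's methodology):

```python
# Career allowance rate from the counts shown above.
granted, resolved = 253, 509
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")   # Career allow rate: 49.7%

# Assumed definition of interview lift: allowance rate among resolved
# cases with an interview (76% above) minus the career allowance rate.
with_interview_rate = 0.76
lift = with_interview_rate - allow_rate
print(f"Interview lift: {lift:+.1%}")           # roughly the "+26%" shown
```

The computed ~+26.3% is close to, but not exactly, the "+25.8%" on the dashboard, which suggests the provider's baseline is a without-interview rate rather than the overall career rate.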

Statute-Specific Performance

§101: 19.0% (-21.0% vs TC avg)
§103: 46.8% (+6.8% vs TC avg)
§102: 13.1% (-26.9% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)

TC averages are estimates, based on career data from 509 resolved cases.
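The per-statute deltas above are internally consistent: each examiner rate minus its "vs TC avg" delta implies the same Tech Center baseline. A small check (values copied from the dashboard; the subtraction is the assumed relationship between the numbers):

```python
# Examiner rate per statute and its delta versus the Tech Center average.
examiner_rate = {"101": 0.190, "103": 0.468, "102": 0.131, "112": 0.160}
delta_vs_tc   = {"101": -0.210, "103": 0.068, "102": -0.269, "112": -0.240}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # delta = examiner - TC average
    print(f"§{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
# Every statute implies the same ~40.0% TC baseline.
```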

Office Action

Rejections under §101, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement filed on 11/15/2024 is in compliance with the provisions of 37 CFR 1.97 and is being considered by the examiner.

Specification

The disclosure is objected to because of the following informality: reference character 18 in FIG. 9 is not mentioned anywhere in the specification. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 7, and 13 recite the terms “before at” and “allreduce timing” in “allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance”. It is unclear whether the allocation is performed “before” a predetermined timing or “at” a predetermined timing, thus rendering the claims indefinite.
Additionally, it is unclear what an “allreduce timing” is, since no definition of “allreduce timing” is provided within the claims or throughout the specification, thus rendering the claims indefinite. For examination purposes, “allreduce timing” will be interpreted to mean “before a specified time has elapsed” in accordance with [0127] of the specification; the limitation overall will thus be read as “allocating a number of batches according to the performance of each of the plurality of nodes to each of the plurality of nodes before a specified time has elapsed, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance”.

Claims 1, 7, and 13 also recite “a preset number of batches for the plurality of nodes” and “execution batches executed by the allocating in the plurality of nodes” in “adjusting a learning rate to be used for the learning according to a ratio of a preset number of batches for the plurality of nodes to the number of execution batches executed by the allocating in the plurality of nodes or a ratio of the preset number to a number of execution batches executed before a predetermined timing”. It is unclear whether “a preset number of batches for the plurality of nodes” refers to the number of batches initially allocated to each node in the limitation “allocating a number of batches according to the performance of each of the plurality of nodes….” or whether the preset number of batches is determined using a different metric, thus rendering the term indefinite. Moreover, it is also unclear whether the “execution batches” refer to the batches that were allocated to each node in “allocating a number of batches according to the performance of each of the plurality of nodes….” or whether different batches were determined during the runtime of a deep learning process, thus rendering the claim indefinite.
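As interpreted above, the claimed procedure amounts to performance-proportional batch allocation followed by rescaling the learning rate by a batch ratio. A minimal sketch of that reading (hypothetical code, not the applicant's implementation; the direction of the ratio and the rounding scheme are assumptions):

```python
def allocate_batches(node_performance, total_batches):
    """Allocate batches in proportion to each node's measured performance,
    so a slower node receives fewer batches than a faster node."""
    total_perf = sum(node_performance)
    return [round(total_batches * p / total_perf) for p in node_performance]

def adjust_learning_rate(base_lr, preset_batches, executed_batches):
    """Scale the learning rate by the ratio of the preset batch count to
    the count actually executed before the deadline (assumed direction)."""
    return base_lr * preset_batches / executed_batches

# Three nodes with relative performance 1.0, 2.0, 1.0 splitting 128 batches.
print(allocate_batches([1.0, 2.0, 1.0], 128))   # [32, 64, 32]

# Only 112 of the preset 128 batches finished before the allreduce deadline,
# so the learning rate is rescaled by 128/112.
print(adjust_learning_rate(0.1, 128, 112))
```

Under this reading, a straggler node contributes fewer batches per allreduce, and the learning-rate adjustment compensates for the effectively smaller global batch.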
Moreover, claim 1 recites “after the learning” in “adjusting a learning rate to be used for the learning according to a ratio of a preset number of batches for the plurality of nodes to the number of execution batches executed by the allocating in the plurality of nodes or a ratio of the preset number to a number of execution batches executed before a predetermined timing after the learning”. It is unclear whether “adjusting a learning rate” is done “after the learning” or whether “batches executed before a predetermined timing” is performed “after the learning”, thus rendering the claim indefinite. For examination purposes, it will be interpreted that “batches executed before a predetermined timing” is performed “after the learning”.

Claims 2-6, 8-12, and 14-18 are also rejected under 35 U.S.C. 112(b) due to their dependency on claims 1, 7, and 13, respectively.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Subject Matter Eligibility Analysis Step 1: Claim 1 recites “A non-transitory computer-readable medium storing a program….”, which is an article of manufacture, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 1 recites the steps:

“determining a performance of each of a plurality of nodes”: This could involve a human mentally determining a performance of each node (by measuring the runtime, accuracy, etc.). Thus, this is a mental process.
“allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance”: This could involve a human allocating a number of batches via pen and paper to each node based on its performance, such that nodes with lower performance receive a smaller number of batches compared to nodes with higher performance. Hence, this is a mental process.

“adjusting a learning rate to be used for the learning according to a ratio of a preset number of batches for the plurality of nodes to a number of execution batches executed by the allocating in the plurality of nodes or a ratio of the preset number to a number of execution batches executed before a predetermined timing after the learning”: This could involve a human calculating a ratio of the preset number of batches from the allocation step above to the number of execution batches, or counting the number of batches executed before the predetermined timing, then using either of these metrics to adjust a learning rate. Therefore, this is a mental process.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 1 recites the additional elements:

“A non-transitory computer-readable recording medium storing a program for causing a computer to execute a procedure….”: This element does not integrate the abstract ideas above into a practical application because it merely recites a computer component for executing the mental processes above (MPEP 2106.05(f)).

“learning by each of the plurality of nodes in deep learning on the allocated number of batches.”: This element does not integrate the abstract ideas from Step 2A Prong 1 into a practical application because it merely applies the learning through generic computing components (batches) (MPEP 2106.05(f)).
Therefore, claim 1 is directed to the abstract ideas.

Subject Matter Eligibility Analysis Step 2B: The additional elements in claim 1 do not provide significantly more than the abstract ideas stated above, taken alone and in combination, because:

“A non-transitory computer-readable recording medium storing a program for causing a computer to execute a procedure….”: This element merely recites a computer component for executing the mental processes from Step 2A Prong 1 (MPEP 2106.05(f)).

“learning by each of the plurality of nodes in deep learning on the allocated number of batches”: This element merely applies the learning through generic computing components (batches) (MPEP 2106.05(f)).

Since there is no nexus between the additional elements that could cause the combination to provide an inventive concept, claim 1 is subject-matter ineligible.

Regarding claim 2:

Subject Matter Eligibility Analysis Step 1: Claim 2 is an article of manufacture as in claim 1.

Subject Matter Eligibility Analysis Step 2A Prong 1: In addition to the mental processes in claim 1, claim 2 recites the step:

“measuring the performance and the allocation or terminating the learning every predetermined number of iterations of the learning”: This could involve a human mentally measuring the performance and allocation for each node from claim 1, or mentally terminating a learning process at every predetermined number of iterations. Therefore, this is a mental process. Thus, claim 2 recites abstract ideas.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 2 recites the same additional elements as claim 1, and thus the analysis is identical to that of claim 1. Therefore, claim 2 is directed to the abstract ideas.

Subject Matter Eligibility Analysis Step 2B: Claim 2 recites the same elements as claim 1, and the analysis for Step 2B is identical to that of claim 1.
Moreover, since there is no nexus between the additional elements that could cause the combination to provide an inventive concept, claim 2 is subject-matter ineligible.

Regarding claim 3:

Subject Matter Eligibility Analysis Step 1: Claim 3 is an article of manufacture as in claim 1.

Subject Matter Eligibility Analysis Step 2A Prong 1: In addition to the mental processes in claim 1, claim 3 recites the step:

“measuring the performance and the allocation or terminating the learning every predetermined time”: This could involve a human mentally measuring the performance and allocation for each node from claim 1, or mentally terminating a learning process at every predetermined time. Therefore, this is a mental process. Thus, claim 3 recites abstract ideas.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 3 recites the same additional elements as claim 1, and thus the analysis is identical to that of claim 1. Therefore, claim 3 is directed to the abstract ideas.

Subject Matter Eligibility Analysis Step 2B: Claim 3 recites the same elements as claim 1, and the analysis for Step 2B is identical to that of claim 1. Moreover, since there is no nexus between the additional elements that could cause the combination to provide an inventive concept, claim 3 is subject-matter ineligible.

Regarding claim 4:

Subject Matter Eligibility Analysis Step 1: Claim 4 is directed to an article of manufacture as in claim 1.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 4 discloses the same mental processes as claim 1 and further recites:

“…. the predetermined timing is a timing when a number of batches executed by a first node of the plurality of nodes has reached a predetermined number since start of the learning”: This further modifies the mental processes from claim 1 by defining the “predetermined timing”, and a human can use this definition to execute the mental processes in claim 1. Therefore, this is also a mental process.
Claim 4 therefore recites abstract ideas.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 4 recites the same additional elements as claim 1; thus, claim 4 is directed to the abstract ideas.

Subject Matter Eligibility Analysis Step 2B: Claim 4 recites the same additional elements as claim 1; thus, the analysis for Step 2B is identical to that of claim 1. Since there is no nexus between the additional elements that could cause the combination to provide an inventive concept, claim 4 is subject-matter ineligible.

Regarding claim 5:

Subject Matter Eligibility Analysis Step 1: Claim 5 is an article of manufacture as in claim 1.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 5 discloses the same mental processes as claim 1 and further recites:

“…. the predetermined timing is a timing when a predetermined time has elapsed since the start of the learning”: This further modifies the mental processes from claim 1 by defining the “predetermined timing”, and a human can use this definition to execute the mental processes in claim 1. Therefore, this is also a mental process.

Claim 5 therefore recites abstract ideas.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 5 recites the same additional elements as claim 1; thus, claim 5 is directed to the abstract ideas.

Subject Matter Eligibility Analysis Step 2B: Claim 5 recites the same additional elements as claim 1; thus, the analysis for Step 2B is identical to that of claim 1. Since there is no nexus between the additional elements that could cause the combination to provide an inventive concept, claim 5 is subject-matter ineligible.

Regarding claim 6:

Subject Matter Eligibility Analysis Step 1: Claim 6 is an article of manufacture as in claim 1.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 6 discloses the same mental processes as claim 1 and further recites:

“….
the predetermined timing is a timing when a number of batches executed by all the plurality of nodes has reached a predetermined number since the start of learning”: This further modifies the mental processes from claim 1 by defining the “predetermined timing”, and a human can use this definition to execute the mental processes in claim 1. Therefore, this is also a mental process.

Claim 6 therefore recites abstract ideas.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 6 recites the same additional elements as claim 1; thus, claim 6 is directed to the abstract ideas.

Subject Matter Eligibility Analysis Step 2B: Claim 6 recites the same additional elements as claim 1; thus, the analysis for Step 2B is identical to that of claim 1. Since there is no nexus between the additional elements that could cause the combination to provide an inventive concept, claim 6 is subject-matter ineligible.

Regarding claim 7:

Subject Matter Eligibility Analysis Step 1: Claim 7 recites “A computer including a processor to execute a procedure”, which is an article of manufacture, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 7 recites the steps:

“determining a performance of each of a plurality of nodes”: This could involve a human mentally determining a performance of each node (by measuring the runtime, accuracy, etc.). Thus, this is a mental process.

“allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance”: This could involve a human allocating a number of batches via pen and paper to each node based on its performance, such that nodes with lower performance receive a smaller number of batches compared to nodes with higher performance.
Hence, this is a mental process.

“adjusting a learning rate to be used for the learning according to a ratio of a preset number of batches for the plurality of nodes to a number of execution batches executed by the allocating in the plurality of nodes or a ratio of the preset number to a number of execution batches executed before a predetermined timing after the learning”: This could involve a human calculating a ratio of the preset number of batches from the allocation step above to the number of execution batches, or counting the number of batches executed before the predetermined timing, then using either of these metrics to adjust a learning rate. Therefore, this is a mental process.

Therefore, claim 7 recites abstract ideas.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 7 recites the additional element:

“learning by each of the plurality of nodes in deep learning on the allocated number of batches.”: This element does not integrate the abstract ideas from Step 2A Prong 1 into a practical application because it merely applies the learning through generic computing components (batches) (MPEP 2106.05(f)).

Therefore, claim 7 is directed to the abstract ideas.

Subject Matter Eligibility Analysis Step 2B: The additional element in claim 7 does not provide significantly more than the abstract ideas stated above, taken alone and in combination, because:

“learning by each of the plurality of nodes in deep learning on the allocated number of batches.”: This element merely applies the learning through generic computing components (batches) (MPEP 2106.05(f)).

Since there is no nexus between the additional elements that could cause the combination to provide an inventive concept, claim 7 is subject-matter ineligible.

Regarding claim 8: Claim 8 is an article of manufacture as in claim 7 and is subject-matter ineligible for the same reasons as claim 2.
Regarding claim 9: Claim 9 is an article of manufacture as in claim 7 and is subject-matter ineligible for the same reasons as claim 3.

Regarding claim 10: Claim 10 is an article of manufacture as in claim 7 and is subject-matter ineligible for the same reasons as claim 4.

Regarding claim 11: Claim 11 is an article of manufacture as in claim 7 and is subject-matter ineligible for the same reasons as claim 5.

Regarding claim 12: Claim 12 is an article of manufacture as in claim 7 and is subject-matter ineligible for the same reasons as claim 6.

Regarding claim 13:

Subject Matter Eligibility Analysis Step 1: Claim 13 recites “A learning method for causing a computer to execute a procedure….”, which is a process, one of the four statutory categories of patentable subject matter.

Subject Matter Eligibility Analysis Step 2A Prong 1: Claim 13 recites the steps:

“determining a performance of each of a plurality of nodes”: This could involve a human mentally determining a performance of each node (by measuring the runtime, accuracy, etc.). Thus, this is a mental process.

“allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance”: This could involve a human allocating a number of batches via pen and paper to each node based on its performance, such that nodes with lower performance receive a smaller number of batches compared to nodes with higher performance. Hence, this is a mental process.
“adjusting a learning rate to be used for the learning according to a ratio of a preset number of batches for the plurality of nodes to a number of execution batches executed by the allocating in the plurality of nodes or a ratio of the preset number to a number of execution batches executed before a predetermined timing after the learning”: This could involve a human calculating a ratio of the preset number of batches from the allocation step above to the number of execution batches, or counting the number of batches executed before the predetermined timing, then using either of these metrics to adjust a learning rate. Therefore, this is a mental process.

Therefore, claim 13 recites abstract ideas.

Subject Matter Eligibility Analysis Step 2A Prong 2: Claim 13 recites the additional element:

“learning by each of the plurality of nodes in deep learning on the allocated number of batches.”: This element does not integrate the abstract ideas from Step 2A Prong 1 into a practical application because it merely applies the learning through generic computing components (batches) (MPEP 2106.05(f)).

Therefore, claim 13 is directed to the abstract ideas.

Subject Matter Eligibility Analysis Step 2B: The additional element in claim 13 does not provide significantly more than the abstract ideas stated above, taken alone and in combination, because:

“learning by each of the plurality of nodes in deep learning on the allocated number of batches.”: This element merely applies learning through generic computing components (batches) (MPEP 2106.05(f)).

Since there is no nexus between the additional elements that could cause the combination to provide an inventive concept, claim 13 is subject-matter ineligible.

Regarding claim 14: Claim 14 is a process as in claim 13 and is subject-matter ineligible for the same rationale as claims 2 and 8.

Regarding claim 15: Claim 15 is a process as in claim 13 and is subject-matter ineligible for the same rationale as claims 3 and 9.
Regarding claim 16: Claim 16 is a process as in claim 13 and is subject-matter ineligible for the same rationale as claims 4 and 10.

Regarding claim 17: Claim 17 is a process as in claim 13 and is subject-matter ineligible for the same rationale as claims 5 and 11.

Regarding claim 18: Claim 18 is a process as in claim 13 and is subject-matter ineligible for the same rationale as claims 6 and 12.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-18 are rejected under 35 U.S.C. 103
as being unpatentable over Basu et al. (US20210271520A1) in view of Ito (JP6528893B1).

Regarding claim 1, Basu et al. discloses:

“A non-transitory computer-readable medium storing a program for causing a computer to execute a procedure, the procedure comprising:” (Basu et al., [0040]: “System memory 28' can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30' and/or cache memory 32'. Computer system/server 12' may further include other removable/non-removable, volatile/non-volatile computer system storage media”);

“determining a performance of each of a plurality of nodes” (Basu et al., [0002]: “determining a plurality of runtime estimations for running the at least one deep learning job, wherein the plurality of runtime estimations corresponds to runtime estimation combinations having differing batch sizes and differing numbers of nodes for running the at least one deep learning job”; “runtime estimations” correspond to “a performance”; “differing number of nodes” corresponds to “each of a plurality of nodes”);

“allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance” (Basu et al., [0030]: “….utilizing the known single node runtime characteristics 204 and the communication latency 205 of the distributed system, the runtime estimation engine can estimate runtimes based upon utilizing different numbers of nodes and batch sizes to run the job. The engine can then build a runtime estimation table or list 208 of runtimes in view of number of nodes and batch sizes. This table or list identifies how long it will take to run the job when utilizing differing numbers of nodes with differing batch sizes.
In building this list or table, the system also takes, as input, characteristics of running jobs 207”; FIG. 2 displays a table 208 in which each row denotes a node; the node with the runtime estimate of “24h” is allocated a batch size of “64” and the node with the runtime estimate of “9h” is allocated a batch size of “256”. This corresponds to “a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance”, in which runtime corresponds to “processing performance”; “utilizing different numbers of nodes and batch sizes” and “runtimes in view of number of nodes and batch sizes” correspond to “allocating a number of batches according to the performance of each of the plurality of nodes to each of the plurality of nodes”, in which “runtimes” correspond to “the performance”); and

“learning by each of the plurality of nodes in deep learning on the allocated number of batches” (Basu et al., [0002]: “receiving at least one deep learning job for scheduling and running on a distributed system comprising a plurality of nodes, wherein at least a subset of the plurality of nodes works together to run a deep learning job; receiving, for the at least one deep learning job, a batch size range indicating a minimum batch size and a maximum batch size that can be utilized for running the at least one deep learning job”; “plurality of nodes works together to run a deep learning job” corresponds to “learning by each of the plurality of nodes in deep learning”; “for the at least one deep learning job, a batch size range” corresponds to “deep learning on the allocated number of batches”); and

“adjusting a learning rate to be used for the learning according to a ratio of a preset number of batches for the plurality of nodes to a number of execution batches executed by the allocating in the plurality of nodes or a ratio of the preset number to a number of execution batches executed before a predetermined timing
after the learning” (Basu et al., [0030]: “….utilizing the known single node runtime characteristics 204 and the communication latency 205 of the distributed system, the runtime estimation engine can estimate runtimes based upon utilizing different numbers of nodes and batch sizes to run the job. The engine can then build a runtime estimation table or list 208 of runtimes in view of number of nodes and batch sizes. This table or list identifies how long it will take to run the job when utilizing differing numbers of nodes with differing batch sizes. In building this list or table, the system also takes, as input, characteristics of running jobs 207”; FIG. 2 displays a table 208 in which each row denotes a node; the node with the runtime estimate of “24h” is allocated a batch size of “64” and the node with the runtime estimate of “9h” is allocated a batch size of “256”; [0035]: “In other words, if the system identifies, based upon the list, that the distributed system has the necessary processing resources for running both the new job and the current jobs, the new job is scheduled with an identified number of nodes and batch sizes. Scheduling the new job may also include adjusting the current jobs. Adjusting the current jobs may include adjusting the number of nodes and/or a batch size and/or hyperparameters for one or more of the current jobs to account for running the new job”; [0026]: “Hyper-parameters are those parameters whose value is usually set before the learning process or running is started as opposed to those parameters that are derived via training.
Hyper-parameters may affect the computational speed and predictive quality of the job and some examples include learning rate, momentum, batch size, size of the model, and the like.”; “adjusting the…hyperparameters” and “Hyper-parameters…include learning rate” correspond to “adjusting a learning rate to be used for learning”; “utilizing different numbers of nodes and batch sizes” and “runtimes in view of number of nodes and batch sizes” correspond to “a preset number of batches for the plurality of nodes” and “a number of execution batches executed by the allocating in the plurality of nodes”) [note: based on the 112 rejections above, “a preset number of batches for the plurality of nodes” and “a number of execution batches executed by the allocating in the plurality of nodes” are both interpreted to mean the number of allocated batches for each node; additionally, the limitation recites the term “or”, hence the limitation can be read as “adjusting a learning rate to be used for the learning according to a ratio of a preset number of batches for the plurality of nodes to a number of execution batches executed by the allocating in the plurality of nodes”].

Basu et al. fails to disclose the limitation: “allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance”.

However, Ito discloses: “allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance” (Ito, [0049]: “Then, the DNN processor repeats S11 until learning of all mini-batches is completed (NO in S61, S10, S63).
When the learning of all the mini-batches is completed (YES in S11), the process returns to the first S12 and the learning of all the mini-batches is repeated until the predetermined number of times is reached (NO in S50)”; [0019]: “A plurality of teacher data are divided into a plurality of mini-batches and input data of a plurality of teacher data of each mini-batch are input.”; “predetermined number of times” corresponds to “a predetermined allreduce timing”; since learning on the mini-batches is “repeated until the predetermined number of times is reached” and the mini-batches are divided prior to learning, this corresponds to “before at a predetermined allreduce timing”) [note: as stated in the 112 rejections above, “before at a predetermined allreduce timing” is interpreted to be read as “before a specified time has elapsed”].

Basu et al. and Ito are analogous to the claimed invention because they are both in the same field of utilizing deep learning on a plurality of nodes and adjusting learning rates. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Basu et al. to incorporate the step of allocating batches to each node before a predetermined time and adjusting a learning rate, as disclosed by Ito. Doing so would prevent data overflow and/or underflow and increase accuracy (Ito [0008]).

Regarding claim 2, the rejections of claim 1 are incorporated. Basu et al.
further discloses: “wherein the procedure includes measuring the performance and the allocation or terminating the learning every predetermined number of iterations of the learning” (Basu et al., [0028] “After running the new job offline, the runtime estimation engine is able to identify single node runtime characteristics 204, for example, the length of time required to run the job on a single node, the performance characteristics of the node required to meet that run time, and the like”) [note: the “or” in this limitation is treated as disjunctive, hence the limitation can simply be read as “measuring the performance and the allocation.”]. Regarding claim 3, the rejections of claim 1 are incorporated. Basu et al. further discloses: “wherein the procedure includes measuring the performance and the allocation or terminating the learning every predetermined time” (Basu et al., [0028] “After running the new job offline, the runtime estimation engine is able to identify single node runtime characteristics 204, for example, the length of time required to run the job on a single node, the performance characteristics of the node required to meet that run time, and the like”) [note: the “or” in this limitation is treated as disjunctive, hence the limitation can simply be read as “measuring the performance and the allocation.”]. Regarding claim 4, the rejections of claim 1 are incorporated. Basu et al. fails to teach: “the predetermined timing is a timing when a number of batches executed by a first node of the plurality of nodes has reached a predetermined number since start of the learning” However, Ito teaches: “the predetermined timing is a timing when a number of batches executed by a first node of the plurality of nodes has reached a predetermined number since start of the learning” (Ito, [0049] “Then, the DNN processor repeats S11 until learning of all mini-batches is completed (NO in S61, S10, S63).
When the learning of all the mini-batches is completed (YES in S11), the process returns to the first S12 and the learning of all the mini-batches is repeated until the predetermined number of times is reached (NO in S50)”) Basu et al. and Ito are analogous to the claimed invention because they both are in the same field of utilizing deep learning on a plurality of nodes and adjusting learning rates. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Basu et al. to incorporate a predetermined timing that is based on the time it takes for the number of batches being executed on a singular node to reach a predetermined threshold [or number] as taught by Ito. Doing so would prevent data overflow and/or underflow (Ito [0008]). Regarding claim 5, the rejections of claim 1 are incorporated. Basu et al. does not teach the limitation: “the predetermined timing is timing when a predetermined time has elapsed since start of the learning” However, Ito teaches the limitation “the predetermined timing is timing when a predetermined time has elapsed since start of the learning” (Ito, [0045]-[0049] “FIG. 7 is a diagram illustrating a flowchart of deep learning (DL)…Then, the DNN processor repeats S11 until learning of all mini-batches is completed (NO in S61, S10, S63). When the learning of all the mini-batches is completed (YES in S11), the process returns to the first S12 and the learning of all the mini-batches is repeated until the predetermined number of times is reached (NO in S50)”; [0014] “FIG. 10 is a diagram illustrating a flowchart of processing by a plurality of processors in deep learning of the comparative example of FIG. 7”; “predetermined number of times” corresponds to “a predetermined time”) Basu et al.
and Ito are analogous to the claimed invention because they both are in the same field of utilizing deep learning on a plurality of nodes and adjusting learning rates. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Basu et al. to incorporate a predetermined timing during a learning process as taught by Ito. Doing so would prevent data overflow and/or underflow (Ito [0008]). Regarding claim 6, the rejections of claim 1 are incorporated. Basu et al. fails to disclose the limitation “the predetermined timing is timing when a number of batches executed by all the plurality of nodes has reached a predetermined number since start of the learning” However, Ito discloses the limitation “the predetermined timing is timing when a number of batches executed by all the plurality of nodes has reached a predetermined number since start of the learning” (Ito, [0045]-[0049] “FIG. 7 is a diagram illustrating a flowchart of deep learning (DL)…Then, the DNN processor repeats S11 until learning of all mini-batches is completed (NO in S61, S10, S63). When the learning of all the mini-batches is completed (YES in S11), the process returns to the first S12 and the learning of all the mini-batches is repeated until the predetermined number of times is reached (NO in S50)”; [0014] “FIG. 10 is a diagram illustrating a flowchart of processing by a plurality of processors in deep learning of the comparative example of FIG. 7”; “plurality of processors” corresponds to “plurality of nodes”) Basu et al. and Ito are analogous to the claimed invention because they both are in the same field of utilizing deep learning on a plurality of nodes and adjusting learning rates. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Basu et al.
to incorporate a predetermined timing that is based on the time it takes for the number of batches being executed on all nodes in a system to reach a predetermined threshold [or number] as taught by Ito. Doing so would prevent data overflow and/or underflow (Ito [0008]). Regarding claim 7, Basu et al. teaches “A computer including a processor to execute a procedure, the procedure comprising:” (Basu et al., [0042] “Computer system/server 12’ may also communicate with at least one external device… It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 12’. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data archival storage systems, etc.”; a “processing unit” corresponds to a “processor”) “determining a performance of each of a plurality of nodes.” (Basu et al. [0002] “determining a plurality of runtime estimations for running the at least one deep learning job, wherein the plurality of runtime estimations corresponds to runtime estimation combinations having differing batch sizes and differing numbers of nodes for running the at least one deep learning job”; “runtime estimations” correspond to “a performance”; “differing number of nodes” corresponds to “each of a plurality of nodes”); “allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance” (Basu et al. [0030] “….utilizing the known single node runtime characteristics 204 and the communication latency 205 of the distributed system, the runtime estimation engine can estimate runtimes based upon utilizing different numbers of nodes and batch sizes to run the job.
The engine can then build a runtime estimation table or list 208 of runtimes in view of number of nodes and batch sizes. This table or list identifies how long it will take to run the job when utilizing differing numbers of nodes with differing batch sizes. In building this list or table, the system also takes, as input, characteristics of running jobs 207”; FIG. 2 displays a table 208 in which each row denotes a node; the node with the runtime estimate of “24h” is allocated a batch-size of “64” and the node with the runtime estimate of “9h” is allocated a batch-size of “256”, thus this corresponds to “a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance” in which runtime corresponds to “processing performance”; “utilizing different numbers of nodes and batch sizes” and “runtimes in view of number of nodes and batch sizes” correspond to “allocating a number of batches according to the performance of each of the plurality of nodes to each of the plurality of nodes” in which “runtimes” correspond to “the performance”); and “learning by each of the plurality of nodes in deep learning on the allocated number of batches” (Basu et al.
[0002] “receiving at least one deep learning job for scheduling and running on a distributed system comprising a plurality of nodes, wherein at least a subset of the plurality of nodes works together to run a deep learning job; receiving, for the at least one deep learning job, a batch size range indicating a minimum batch size and a maximum batch size that can be utilized for running the at least one deep learning job”; “plurality of nodes works together to run a deep learning job” corresponds to “learning by each of the plurality of nodes in deep learning”; “for the at least one deep learning job, a batch size range” corresponds to “deep learning on the allocated number of batches”); and “adjusting a learning rate to be used for the learning according to a ratio of a preset number of batches for the plurality of nodes to a number of execution batches executed by the allocating in the plurality of nodes or a ratio of the preset number to a number of execution batches executed before a predetermined timing after the learning” (Basu et al., [0030] “….utilizing the known single node runtime characteristics 204 and the communication latency 205 of the distributed system, the runtime estimation engine can estimate runtimes based upon utilizing different numbers of nodes and batch sizes to run the job. The engine can then build a runtime estimation table or list 208 of runtimes in view of number of nodes and batch sizes. This table or list identifies how long it will take to run the job when utilizing differing numbers of nodes with differing batch sizes. In building this list or table, the system also takes, as input, characteristics of running jobs 207”; FIG.
2 displays a table 208 in which each row denotes a node; the node with the runtime estimate of “24h” is allocated a batch-size of “64” and the node with the runtime estimate of “9h” is allocated a batch-size of “256”; [0035] “In other words, if the system identifies, based upon the list, that the distributed system has the necessary processing resources for running both the new job and the current jobs, the new job is scheduled with an identified number of nodes and batch sizes. Scheduling the new job may also include adjusting the current jobs. Adjusting the current jobs may include adjusting the number of nodes and/or a batch size and/or hyperparameters for one or more of the current jobs to account for running the new job”; [0026] “Hyper-parameters are those parameters whose value is usually set before the learning process or running is started as opposed to those parameters that are derived via training. Hyper-parameters may affect the computational speed and predictive quality of the job and some examples include learning rate, momentum, batch size, size of the model, and the like.”; “adjusting the…hyperparameters” and “Hyper-parameters…include learning rate” correspond to “adjusting a learning rate to be used for learning”; “utilizing different numbers of nodes and batch sizes” and “runtimes in view of number of nodes and batch sizes” correspond to “a preset number of batches for the plurality of nodes” and “a number of execution batches executed by the allocating in the plurality of nodes”) [note: based on the 112 rejections above “a preset number of batches for the plurality of nodes” and “a number of execution batches executed by the allocating in the plurality of nodes” are both interpreted to mean the number of allocated batches for each node; additionally the limitation recites the term “or”, hence the limitation can be read as “adjusting a learning rate to be used for the learning according to a ratio of a preset number of batches for the
plurality of nodes to a number of execution batches executed by the allocating in the plurality of nodes”] Basu et al. fails to disclose the limitations: “allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance”; However, Ito discloses: “allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance” (Ito, [0049] “Then, the DNN processor repeats S11 until learning of all mini-batches is completed (NO in S61, S10, S63). When the learning of all the mini-batches is completed (YES in S11), the process returns to the first S12 and the learning of all the mini-batches is repeated until the predetermined number of times is reached (NO in S50)”; [0019] “A plurality of teacher data are divided into a plurality of mini-batches and input data of a plurality of teacher data of each mini-batch are input.”; “predetermined number of times” corresponds to “a predetermined allreduce timing”; since learning on the mini-batches is “repeated until the predetermined number of times is reached” and the mini-batches are divided prior to learning, this corresponds to “before at a predetermined allreduce timing”) [note: as stated in the 112 rejections above, “before at a predetermined allreduce timing” is interpreted to be read as “before a specified time has elapsed”]. Basu et al. and Ito are analogous to the claimed invention because they both are in the same field of utilizing deep learning on a plurality of nodes and adjusting learning rates.
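For orientation only, the allocation limitation quoted above (performance-proportional batch counts, with a slower node receiving fewer batches before each allreduce) can be sketched in a few lines. This is an illustrative reading, not the applicant's or either reference's actual implementation; the function name, the proportional split, and the remainder rule are all assumptions.

```python
# Illustrative sketch (not from the prosecution record): allocate per-node
# batch counts in proportion to measured node performance, so a node with
# lower processing performance is allocated a smaller number of batches.
# All names here are hypothetical.

def allocate_batches(total_batches, node_performance):
    """Split total_batches across nodes proportionally to performance.

    node_performance: dict mapping node id -> relative throughput
    (higher means faster). Returns dict mapping node id -> batch count.
    """
    total_perf = sum(node_performance.values())
    # Initial proportional share, rounded down.
    alloc = {n: int(total_batches * p / total_perf)
             for n, p in node_performance.items()}
    # Hand any rounding remainder to the fastest nodes first.
    remainder = total_batches - sum(alloc.values())
    for n in sorted(node_performance, key=node_performance.get, reverse=True):
        if remainder == 0:
            break
        alloc[n] += 1
        remainder -= 1
    return alloc
```

For example, `allocate_batches(10, {"a": 3, "b": 1})` returns `{"a": 8, "b": 2}`: the slower node "b" receives fewer batches, matching the pattern the limitation recites.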
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Basu et al. to incorporate the step of allocating batches to each node before a predetermined time and adjusting a learning rate as disclosed by Ito. Doing so would prevent data overflow and/or underflow and increase accuracy (Ito [0008]). Regarding claim 8, the rejections of claim 7 are incorporated, and the claim is rejected under 35 U.S.C. 103 for the same reasons as claim 2. Regarding claim 9, the rejections of claim 7 are incorporated, and the claim is rejected under 35 U.S.C. 103 for the same reasons as claim 3. Regarding claim 10, the rejections of claim 7 are incorporated, and the claim is rejected under 35 U.S.C. 103 for the same reasons as claim 4. Regarding claim 11, the rejections of claim 7 are incorporated, and the claim is rejected under 35 U.S.C. 103 for the same reasons as claim 5. Regarding claim 12, the rejections of claim 7 are incorporated, and the claim is rejected under 35 U.S.C. 103 for the same reasons as claim 6. Regarding claim 13, Basu et al. discloses “A learning method for causing a computer to execute a procedure, the procedure comprising:” (Basu et al. [0002] “In summary, one aspect of the invention provides a method, comprising: receiving at least one deep learning job for scheduling….”; “job” corresponds to “a procedure”) “determining a performance of each of a plurality of nodes.” (Basu et al.
[0002] “determining a plurality of runtime estimations for running the at least one deep learning job, wherein the plurality of runtime estimations corresponds to runtime estimation combinations having differing batch sizes and differing numbers of nodes for running the at least one deep learning job”; “runtime estimations” correspond to “a performance”; “differing number of nodes” corresponds to “each of a plurality of nodes”); “allocating a number of batches according to the performance of each of the plurality of nodes to the each of the plurality of nodes before at a predetermined allreduce timing, a node with a lower processing performance being allocated to a smaller number of batches compared to a node with a higher performance” (Basu et al. [0030] “….utilizing the known single node runtime characteristics 204 and the communication latency 205 of the distributed system, the runtime estimation engine can estimate runtimes based upon utilizing different numbers of nodes and batch sizes to run the job. The engine can then build a runtime estimation table or list 208 of runtimes in view of number of nodes and batch sizes. This table or list identifies how long it will take to run the job when utilizing differing numbers of nodes with differing batch sizes. In building this list or table, the system also takes, as input, characteristics of running jobs 207”; FIG. 2 displays a table 208 in which each row denotes a node; the node with the runtime estimate of “24h” is allocated a batch-size of “64” and the node with the runtime estimate of “9h” is allocated
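The other limitation the action repeatedly construes, adjusting the learning rate “according to a ratio of a preset number of batches … to a number of execution batches,” also admits a short sketch. This is one plausible reading under the action's interpretation, not the disclosed implementation; the function name and the linear-scaling direction are assumptions.

```python
# Hypothetical sketch of the learning-rate limitation as construed in the
# action: scale a base learning rate by the ratio of batches actually
# executed to the preset batch count, so a shortfall in executed batches
# yields a proportionally smaller step. The linear-scaling choice is an
# assumption, not taken from the record.

def adjust_learning_rate(base_lr: float, preset_batches: int,
                         executed_batches: int) -> float:
    """Return the adjusted learning rate for one allreduce round."""
    if preset_batches <= 0:
        raise ValueError("preset_batches must be positive")
    return base_lr * executed_batches / preset_batches
```

With `base_lr=0.1`, a preset of 100 batches, and 80 executed batches, the adjusted rate is 0.08; when the executed count equals the preset count, the rate is unchanged.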

Prosecution Timeline

Dec 08, 2021
Application Filed
May 01, 2025
Non-Final Rejection — §101, §103, §112
Aug 14, 2025
Response Filed
Oct 09, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583466
VEHICLE CONTROL MODULES INCLUDING CONTAINERIZED ORCHESTRATION AND RESOURCE MANAGEMENT FOR MIXED CRITICALITY SYSTEMS
2y 5m to grant Granted Mar 24, 2026
Patent 12578751
DATA PROCESSING CIRCUITRY AND METHOD, AND SEMICONDUCTOR MEMORY
2y 5m to grant Granted Mar 17, 2026
Patent 12561162
AUTOMATED INFORMATION TECHNOLOGY INFRASTRUCTURE MANAGEMENT
2y 5m to grant Granted Feb 24, 2026
Patent 12536291
PLATFORM BOOT PATH FAULT DETECTION ISOLATION AND REMEDIATION PROTOCOL
2y 5m to grant Granted Jan 27, 2026
Patent 12393641
METHODS FOR UTILIZING SOLVER HARDWARE FOR SOLVING PARTIAL DIFFERENTIAL EQUATIONS
2y 5m to grant Granted Aug 19, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
50%
Grant Probability
76%
With Interview (+25.8%)
3y 8m
Median Time to Grant
Moderate
PTA Risk
Based on 509 resolved cases by this examiner. Grant probability derived from career allow rate.
