Prosecution Insights
Last updated: April 19, 2026
Application No. 17/471,121

LEARNING APPARATUS, LEARNING METHOD, AND A NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM

Non-Final OA · §101, §103, §112
Filed
Sep 09, 2021
Examiner
BEAN, GRIFFIN TANNER
Art Unit
2121
Tech Center
2100 — Computer Architecture & Software
Assignee
Actapio Inc.
OA Round
3 (Non-Final)
Grant Probability
21% (At Risk)
OA Rounds
3-4
To Grant
4y 4m
With Interview
50%

Examiner Intelligence

Career Allow Rate
21% (4 granted / 19 resolved; -33.9% vs TC avg)
Interview Lift
+28.4% across resolved cases with interview
Avg Prosecution
4y 4m (45 currently pending)
Total Applications
64 (across all art units)

Statute-Specific Performance

§101: 37.7% (-2.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 9.7% (-30.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 19 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

This Action is responsive to Claims filed 01/21/2026.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/21/2026 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-8 have been amended. Claim 9 is canceled. Claims 10-20 are new. Claims 1-8 and 10-20 are currently pending.

Response to Arguments

Applicant's arguments, see Pages 6-9, filed 01/21/2026, regarding the 35 U.S.C. 101 rejection of Claims 1-8 have been fully considered, but they are not persuasive. The Examiner contends the newly amended limitations of the independent claims neither integrate the identified abstract-idea mental process steps into a practical application nor recite significantly more than those steps. The steps of “divide…”, “select…”, “connect…”, and “divide…” are not recited in such a way, and do not include specific structure or implementation, as would preclude a human mind, with the aid of pen and paper, from performing them. The steps “receive…” and “store…” are recited highly generally, merely amount to pre- or post-solution data-transmittal steps, and are well-understood, routine, and conventional activity for generic computer components.
The newly amended “shuffle a learning order of training data within the shuffle buffer…” amounts to instructions to apply the abstract-idea mental process step represented by the limitation “…by generating a random number seed and inputting the random number seed into a random function to generate a random order.” The Examiner contends that, under the broadest reasonable interpretation of this claim language, the claimed “shuffle buffer” is recited highly generally, merely storing the training data in the order dictated by inputting a seed into a function to ascertain said order, which is practically performed within the human mind or with the aid of pen and paper. Under the broadest reasonable interpretation of the claim, the shuffle buffer does not itself perform this randomization; instead, a “random function” is used to generate the order. The recitation of a “random function” is highly generic and does not recite specific structure or implementation that would preclude a human mind from performing said random function to ascertain an order of the generic training data. The “iteratively train…” step, in turn, amounts to instructions to apply the randomly ordered training data. The training step itself is recited highly generally and merely refers to generic computer components performing generic computing functions (flushing/refilling a memory buffer). See the updated 35 U.S.C. 101 rejection below.

Applicant's arguments, see Pages 9-11, regarding the prior art rejection(s) of Claims 1-8 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claims 1, 7, and 8 are objected to because of the following informalities: Claims 1, 7, and 8 recite “sequentially learning features of the the training data”, which contains a duplicated “the”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8 and 10-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The independent claims recite a “random function.” It is unclear, under the broadest reasonable interpretation of the claims, whether the function itself is random or whether the function is a randomization function. The dependent claims do not clarify this difference. It is noted that the Specification refers to the random function as an initialization function. Different nomenclature or additional detail would improve clarity.

Claim Rejections - 35 USC § 101

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claims 1-8 and 10-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea. See Alice Corporation Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines.
(2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019.)

Step 1 (All Claims): Claims 1-6 and 10-20 recite a learning apparatus, which falls under the statutory category of a machine. Claim 7 recites a learning method, which falls under the statutory category of a process. Claim 8 recites a non-transitory computer-readable storage medium having stored therein a learning program, which falls under the statutory category of a manufacture.

Claim 1:

Step 2A – Prong 1: Claim 1 recites an abstract idea, law of nature, or natural phenomenon. The limitations of “divide the training data into a plurality of file sets in chronological order;”, “select a subset of file sets from the plurality of file sets in a random order;”, “connect the subset of file sets based on selection order to generate a training data group;”, “divide the training data group into a plurality of training data files according to a buffer size;”, and “…generating a random number seed and inputting the random number seed into a random function to generate a random order;”, under the broadest reasonable interpretation, cover a mental process including an observation, evaluation, judgment, or opinion that could be performed in the human mind or with the aid of pencil and paper. Dividing a training data set into chronologically ordered files is practically performed within the human mind or with the aid of pen and paper, as is selecting a subset of said files, connecting said files based on an order, dividing the training data into groups based on a size, and generating a random number seed and inputting it into a generic random function.
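For orientation only, the chain of steps quoted above (divide, select, connect, divide, seed a random order) can be sketched in a few lines of ordinary Python. The function and parameter names below are hypothetical illustrations of the claim language, not Applicant's disclosed implementation:

```python
import random

def prepare_training_files(training_data, set_size, subset_count, buffer_size, seed=0):
    """Sketch of the claimed steps: divide, select, connect, divide."""
    # Divide the training data into a plurality of file sets in chronological order.
    file_sets = [training_data[i:i + set_size]
                 for i in range(0, len(training_data), set_size)]
    # Select a subset of file sets in a random order (a seeded PRNG stands in
    # for the claimed "random number seed" and "random function").
    rng = random.Random(seed)
    subset = rng.sample(file_sets, k=subset_count)
    # Connect the subset based on selection order to generate a training data group.
    group = [example for file_set in subset for example in file_set]
    # Divide the training data group into training data files according to the buffer size.
    return [group[i:i + buffer_size] for i in range(0, len(group), buffer_size)]
```

Each step here is an ordinary slicing or sampling operation, which is consistent with the Examiner's characterization of the limitations as performable with pen and paper.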
Step 2A – Prong 2: The additional elements of claim 1 do not integrate the abstract idea into a practical application. The additional elements “processor”, “file sets”, “shuffle buffer”, and “a random function” are recognized as generic computer components recited at a high level of generality. Although they hold and execute instructions to perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to “apply it” (see MPEP 2106.04(d)(2), indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application). The additional elements of “a learning apparatus”, “training data”, and “a learning model” are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (see MPEP 2106.05(h)). The additional elements recited in the limitations “receive training data for training a learning model;” and “store one of the plurality of training data files in a shuffle buffer;” amount to mere pre- or post-solution activity or data gathering/transmittal steps (see MPEP 2106.05(g)). The additional element recited in the limitation “shuffle a learning order of training data within the shuffle buffer…” is found to be instructions to apply the abstract idea of generating a random order of the training data or files (see MPEP 2106.05(f)).
The additional element recited in the limitation “iteratively train the learning model by sequentially learning features of the the training data within the shuffle buffer in the random order for each epoch, wherein the shuffle buffer is emptied and refilled with a next training data file from the plurality of training data files after completing training on the training data within the shuffle buffer” is found to be instructions to apply the abstract idea of dividing and selecting the training data or files (see MPEP 2106.05(f)).

Step 2B: The only limitation on the performance of the described method is a limitation reciting “processor”, “file sets”, “shuffle buffer”, and “a random function”. These elements are insufficient to transform a judicial exception into a patentable invention because the recited elements are considered insignificant extra-solution activity (a generic computer system and processing resources that link the judicial exception to a particular, respective, technological environment). The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (see MPEP 2106.05(f)). The additional elements of “a learning apparatus”, “training data”, and “a learning model” are recognized as non-generic computer components, but are recited at a high level of generality and are found to generally link the abstract idea to a particular technological environment or field of use (see MPEP 2106.05(h)). The additional elements recited in the limitations “receive training data for training a learning model;” and “store one of the plurality of training data files in a shuffle buffer;” are found to be well-understood, routine, or conventional pre- or post-solution activity (see the WURC examples at MPEP 2106.05(d)(II)(i)).
The additional element recited in the limitation “shuffle a learning order of training data within the shuffle buffer…” is found to be instructions to apply the abstract idea of generating a random order of the training data or files (see MPEP 2106.05(f)). The additional element recited in the limitation “iteratively train the learning model by sequentially learning features of the the training data within the shuffle buffer in the random order for each epoch, wherein the shuffle buffer is emptied and refilled with a next training data file from the plurality of training data files after completing training on the training data within the shuffle buffer” is found to be instructions to apply the abstract idea of dividing and selecting the training data or files (see MPEP 2106.05(f)).

Taken alone or in ordered combination, these additional elements do not amount to significantly more than the above-identified abstract idea. There is no indication that the combination of elements improves the functioning of a computer or any other technology. Their collective functions merely provide conventional computer implementation. For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claims 7 and 8. Claim 7 recites similar limitations to claim 1, save for “A learning method to be executed by a learning apparatus, the method comprising:” (generic computer components); therefore, claim 7 is similarly rejected. Claim 8 recites similar limitations to claim 1, save for “A non-transitory computer-readable storage medium having stored therein a learning program for causing a computer to execute:” (generic computer components); therefore, claim 8 is similarly rejected.
Dependent Claims:

Claim 2 recites an abstract idea mental process step “divide the training data into file sets having a predetermined number of pieces of training data.”
Claim 3 recites an abstract idea mental process step “randomly select the subset of file sets from among the plurality of file sets.”
Claim 4 recites refinements to the selection mental process steps.
Claim 5 recites an abstract idea mental process step “wherein a user designates a number of file sets from among the plurality of file sets for selection.”
Claim 6 recites an abstract idea mental process step “select file sets that are chronologically newer in time series from among the plurality of file sets until the number of the selected file sets reaches a number designated by the user.”
Claim 10 recites an abstract idea mental process step “generate a new random number seed for each epoch to prevent occurrence of a bias in the random order associated with the training data between epochs.”
Claim 11 recites refinements to the generation of random seeds of Claim 10.
Claim 12 recites an abstract idea mental process step “optimize the buffer size based on a comparison of model performance for a plurality of different buffer sizes.”
Claim 13 recites refinements to the instructions to apply step of Claim 1.
Claim 14 recites an abstract idea mental process step “select a plurality of trials in which the evaluation value satisfies a predetermined condition,” and instructions to apply said abstract idea mental process step “continue training the learning model only in the selected trials.”
Claim 15 recites refinements to the instructions to apply step of Claims 1 and 13.
Claim 16 recites refinements to the instructions to apply step of Claim 1.
Claim 17 recites an abstract idea mental process step “connect the subset of file sets in order of selection such that pieces of training data are arranged in the selection order within the training data group.”
Claim 18 recites refinements to the “shuffle buffer” additional element.
Claim 19 recites an abstract idea mental process step “generate final training data as a learning target by associating the random order with the training data within the shuffle buffer.”
Claim 20 recites refinements to the instructions to apply step of Claim 1.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the Examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the Examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 7-8, 10-12, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Katsuki et al. (US 2016/0196505 A1), hereinafter Katsuki, and Zhu et al.
(Entropy-Aware I/O Pipelining for Large-Scale Deep Learning on HPC Systems, 2018), hereinafter Zhu.

In regards to claim 1: The present invention claims: “A learning apparatus comprising: a processor configured to:” Katsuki teaches an information processing apparatus (Fig 2, item 10).

“receive training data for training a learning model;” Katsuki teaches “First, in S110, the acquiring unit 110 acquires, as the training data, a plurality of explanatory variable sets and a label for training to be allocated to each of the data sets. For example, first, the acquiring unit 110 acquires, from the database 20 provided on the outside or the inside of the information processing apparatus 10, a moving image photographed by a drive recorder mounted on a passenger vehicle, acceleration data in time series measured by an acceleration sensor mounted on the passenger vehicle, and position data by a GPS mounted on the passenger vehicle.” ([0032]).

“divide the training data into a plurality of file sets in chronological order;” Katsuki teaches receiving the training data as detailed above and “The acquiring unit 110 sets, as a plurality of data set, the moving image and the acceleration data divided into a plurality of pieces (e.g., N), and sets, as labels to be allocated, MCis measured in advance corresponding to sections.” ([0033], mapping the receiving and dividing of a data source such as a moving image (video) into pieces to the BRI of dividing the training data into generic file(s) or file sets).

While Katsuki teaches the above, Katsuki fails to explicitly teach the randomization or shuffling present in the claims. The Examiner notes the remaining claimed steps closely mirror function calls of TensorFlow's tf.data library. References to this library are listed below, when not included as part of the prior art rejection.
Zhu, however, in a similar field of endeavor, demonstrates the use of some of these functions/libraries and their known benefits before the Applicant's effective filing date.

“select a subset of file sets from the plurality of file sets in a random order;” Zhu teaches “In TensorFlow, datasets are fetched from different platforms (e.g., HDFS [41], POSIX-like file systems). Besides POSIX-like file system, Caffe also supports other storage systems such as LMDB/LevelDB [6]. In these deep learning frameworks, in order to achieve a high level of accuracy in the training model, datasets often have to be read from the backend storage multiple times in a random order.” (Introduction; reading source data in a random order would have been known in the art at the time of the Applicant's filing).

“connect the subset of file sets based on selection order to generate a training data group; divide the training data group into a plurality of training data files according to a buffer size; store one of the plurality of training data files in a shuffle buffer;” See Zhu Figure 1 (Page 147) for the dataset being stored in a read buffer, mapped for fitting into a shuffle buffer, and being stored in a shuffle buffer. Page 147, left column, final paragraph also details these steps in the context of Fig. 1.
“shuffle a learning order of training data within the shuffle buffer by generating a random number seed and inputting the random number seed into a random function to generate a random order;” See Zhu Figures 4 and 5 (Page 149) and the left column (RDMA-Assisted Shuffling) for the use of a random seed each training epoch in conjunction with the aforementioned TensorFlow functions.

“iteratively train the learning model by sequentially learning features of the the training data within the shuffle buffer in the random order for each epoch, wherein the shuffle buffer is emptied and refilled with a next training data file from the plurality of training data files after completing training on the training data within the shuffle buffer.” Katsuki teaches “Subsequently, in S140, the training processing unit 170 trains, in each of the plurality of data sets, a prediction model in which each of the plurality of subsets is weighted and the subsets are reflected on prediction of a label.” ([0040]). In a set of previous steps, an extracting unit creates subsets from the data sets. These data sequences are taken from a time series ([0034]-[0036]). See also Zhu: “In addition, when training with SGD, it requires the training dataset to be shuffled randomly before each training epoch.” (Page 146) and “Then before each epoch, seeds are broadcast to all servers which are used to generate random lists of memory block IDs.” (Page 150; iteratively training based on new and/or reshuffled data would have been known in the art at the time of the Applicant's filing).

Zhu teaches “In this research, we propose an efficient I/O framework for large-scale deep learning on HPC systems. Our main objective is to coordinate the use of memory, communication, and I/O resources for efficient training. To this end, we design and implement an entropy-aware I/O pipeline for TensorFlow.
In addition, to overcome the performance impedance of TensorFlow dataset API, we design a portable storage interface so that efficient I/O for deep learning can be enabled across a wide variety of underlying file and storage systems.” (Introduction). It would have been obvious to one of ordinary skill in the art at the time of the Applicant's filing to leverage known methods to realize known benefits from Zhu and/or TensorFlow libraries in training a system such as Katsuki's.

In regards to claim 2: The present invention claims: “wherein the processor is configured to divide the training data into file sets having a predetermined number of pieces of training data.” See the above rejection of claim 1, where [0033] of Katsuki teaches an extracting unit that divides a training dataset into multiple subsets or multiple data sequences, which reasonably reads on the BRI of a “file.”

In regards to claim 3: The present invention claims: “wherein the processor is configured to randomly select the subset of file sets from among the plurality of file sets.” See the above rejection of claim 1 for how a combination of Katsuki paragraphs [0034]-[0036] and Zhu's methods/implementation of TensorFlow's tf.data library reasonably reads on randomly selecting data sets and/or data elements, which reasonably reads on the BRI of a “file.”

In regards to claim 4: The present invention claims: “wherein the processor is configured to select file sets including data newer in time series from among the plurality of file sets.” See the above rejection of claim 1, where [0034] of Katsuki teaches dividing a data set and extracting subsets or data sequences by time series, which reads on the BRI of “data [files] newer in time series.”

In regards to claim 7: Claim 7 recites similar limitations to claim 1, with the exception of “A learning method to be executed by a learning apparatus, the method comprising:”; therefore, both claims are similarly rejected.
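As further orientation on the claim-1 “iteratively train” limitation mapped above (a per-epoch seed, shuffling within the buffer, and a buffer emptied and refilled from the next training data file), a stdlib-only Python sketch follows. The names are hypothetical and the sketch mirrors the claim language, not Katsuki's or Zhu's actual code:

```python
import random

def epoch_examples(training_files, epoch_seed):
    """Yield one epoch of training examples, one buffer-load at a time."""
    rng = random.Random(epoch_seed)   # a new seed per epoch (cf. claim 10 / Zhu)
    for data_file in training_files:  # refill the shuffle buffer with the next file
        buffer = list(data_file)      # fill the shuffle buffer
        rng.shuffle(buffer)           # shuffle the learning order within the buffer
        yield from buffer             # consume the buffer contents, then empty it

# A training loop would iterate epochs like:
#   for epoch in range(num_epochs):
#       for example in epoch_examples(files, epoch_seed=epoch):
#           model.update(example)     # hypothetical training step
```

Note that because each buffer-load is shuffled independently, examples never cross file boundaries within an epoch, which is the behavior the claim's empty-and-refill language describes.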
In regards to claim 8: Claim 8 recites similar limitations to claim 1, with the exception of “A non-transitory computer-readable storage medium having stored therein a learning program for causing a computer to execute:”; therefore, both claims are similarly rejected.

In regards to claim 10: The present invention claims: “wherein the processor is configured to generate a new random number seed for each epoch to prevent occurrence of a bias in the random order associated with the training data between epochs.” Zhu uses a new seed each epoch: “To generate mini-batches, DeepIO servers first create a random list of element IDs using the same seed which is shared by broadcast at the beginning of each epoch.” (Page 149) and “Then before each epoch, seeds are broadcast to all servers which are used to generate random lists of memory block IDs.” (Page 150).

In regards to claim 11: The present invention claims: “wherein the processor is configured to generate the random number seeds such that the random order indicates a predetermined probability distribution.” Zhu teaches “DeepIO leverages the notion of cross-entropy in the shuffling procedure. Cross-entropy is a measure of how one probability distribution diverges from a second expected probability distribution [16]. We use cross-entropy as a measure of the difference between our relaxed ordering and the fully-shuffled scheme on the probabilities of occurrence for a sequence of data elements.” (Page 148).

In regards to claim 12: The present invention claims: “wherein the processor is configured to optimize the buffer size based on a comparison of model performance for a plurality of different buffer sizes.” Zhu teaches “In Fig. 8, R indicates the ratio of the shuffling memory size to the size of the entire training dataset. For example, when R = 0.25, it means that the memory size used to store the dataset for one round of random read is 25% of the entire dataset.
When R = 1, it means that the entire dataset is resided on the memory indicating no uploading pipeline. The randomization level of R = 0.5 and 0.25 are 98.54% and 96.96% respectively. Therefore, in these cases, the randomization of generated mini-batches could deliver almost the same validation accuracy as shown in Fig. 8. Although the size of mini-batches changes with node counts, we can still keep high training accuracy by carefully adjusting training parameters.” (Page 151; mapping the use of differing buffer sizes to the overall accuracy and/or randomization of the model).

In regards to claim 16: The present invention claims: “wherein the processor is configured to train the learning model to learn features of the training data included in each of the file sets in order from the file set in which the training data included is older in time series.” See above how a combination of Katsuki and Zhu reads on training the model on a sequence of data, which would necessarily include learning newer and older data.

In regards to claim 17: The present invention claims: “wherein the processor is configured to connect the subset of file sets in order of selection such that pieces of training data are arranged in the selection order within the training data group.” See above where Zhu Figure 1 reasonably reads on putting source data into a structure in the order in which it is selected.

In regards to claim 18: The present invention claims: “wherein the shuffle buffer stores training data having a size equal to the buffer size.” See Zhu Figure 1 for the data being stored in a buffer according to the buffer's size. The Examiner also notes the tf.data library has variables designed to manipulate the buffer size and/or the data stored in it.
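For context on the tf.data behavior the Examiner references: `tf.data.Dataset.shuffle(buffer_size)` holds at most `buffer_size` elements in a buffer and emits uniformly random picks from that window as new elements stream in. A stdlib-only approximation (the helper name is hypothetical; this is not the actual TensorFlow implementation):

```python
import random

def bounded_shuffle(stream, buffer_size, seed=None):
    """Approximate a tf.data-style shuffle buffer: hold at most
    buffer_size elements, emitting a random one as new ones arrive."""
    rng = random.Random(seed)
    buffer = []
    for item in stream:
        buffer.append(item)
        if len(buffer) >= buffer_size:            # buffer at capacity:
            yield buffer.pop(rng.randrange(len(buffer)))  # emit a random pick
    rng.shuffle(buffer)                           # drain the remainder
    yield from buffer
```

Because the window is bounded, an element can only travel a limited distance from its source position, which is why Zhu analyzes randomization quality as a function of the shuffling memory (buffer) size relative to the dataset.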
In regards to claim 19: The present invention claims: “wherein the processor is configured to generate final training data as a learning target by associating the random order with the training data within the shuffle buffer.” See above where a combination of Katsuki and Zhu would be training the model(s) based on the data in the shuffle buffer.

Claims 5-6 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Katsuki and Zhu as applied to claim 1 above, and further in view of Przybylski et al. (US 2020/0026249 A1), hereinafter Przybylski.

In regards to claim 5: While a combination of Katsuki and Zhu teaches the learning apparatus of claim 1, they fail to teach “wherein a user designates a number of file sets from among the plurality of file sets for selection.” However, Przybylski, also in the field of training machine learning models, teaches “The control system 150 may thereby facilitate a user in determining whether the model form input by the user resulted in a system model that provides satisfactorily or sufficiently accurate predictions. If the user determines that the system model does not fit the system behavior sufficiently, the user may select an option provided by the control system 150 to return to step 902, where the control system 150 prompts the user to input or edit the parameterized model form. In some embodiments, the control system 150 allows the user to select or alter various other settings, including adding filters to the training data, selecting an amount of training data captured and used, implementing a saturation detection and removal process for the training data, selecting one of various available cost functions, selecting a horizon for a multi-step ahead error prediction method, selecting a number of sets of parameter values tried by the cost function evaluation circuit 824 in minimizing the cost function, and/or various other options.
Process 900 may then repeat steps 904-908 using the new parameterized model form and/or settings input by the user.” ([0128]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the teaching of Przybylski, which enables selecting an amount of training data (the claimed “number of sets designated by a user”), with the machine learning model generation system of Katsuki, because it is merely a combination of existing elements in known ways to produce predictable results (MPEP 2143). One would have been motivated to make such a combination because, as disclosed in Przybylski, it may be used to tune parameters and implement a saturation detection and removal process for training data ([0128]).

In regards to claim 6: While a combination of Katsuki and Zhu teaches the learning apparatus of claim 1 and dividing or extracting data in time series or chronological order, they fail to teach “wherein the processor is configured to select file sets that are chronologically newer in time series from among the plurality of file sets until the number of the selected file sets reaches a number designated by the user.” However, Przybylski teaches “…In some embodiments, the control system 150 allows the user to select or alter various other settings, including adding filters to the training data, selecting an amount of training data captured”, which continues to read on the BRI of selecting files until a user-designated number is reached.

In regards to claim 20: The present invention claims: “wherein the processor is configured to repeat the iterative training for a number of epochs designated by a user.” See above how a combination of Katsuki and Zhu would read on an iterative training process.
Przybylski also teaches “A maximum iterations field 1306 allows a user to determine how many times the set of parameter values cp,, are adjusted in attempting to identify the set of parameter values that minimize the cost function (i.e., as described above with reference to FIGS. 8-9).” ([0147]).

Claim(s) 13-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Katsuki and Zhu as applied to claim 1 above, and further in view of Przybylski et al. (US 2020/0026249 A1), hereinafter Przybylski, and Hsyu et al. (US 2021/0174210 A1), hereinafter Hsyu.

In regards to claim 13: While the combination of Katsuki and Zhu read on the limitations of claim 1, they fail to explicitly teach “wherein the processor is configured to train the learning model using a plurality of trials with different hyperparameter combinations, and to perform early stopping on trials that are not expected to produce good results based on an evaluation value that evaluates accuracy of the learning model.” However, Hsyu, in a similar field of endeavor, teaches training a model with a plurality of testing hyperparameters (Figure 5), performing early stopping based on predictive accuracies (Figure 6), and continuing to train/use the models with higher accuracy (Figure 7). Hsyu highlights the difficulty of parallelizing the optimization of model hyperparameters (Background). It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to combine the methods of Katsuki and Zhu with the hyperparameter optimization methods of Hsyu to produce more accurate models.

In regards to claim 14: The present invention claims: “wherein the processor is configured to select a plurality of trials in which the evaluation value satisfies a predetermined condition, and to continue training the learning model only in the selected trials.” See above where Hsyu continues to use/train well-performing or high-accuracy model(s) (Figure 7, at least).
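The early-stopping behavior recited in claims 13-14 (run multiple hyperparameter trials, drop trials whose evaluation value is not expected to produce good results, continue training only the selected trials) can be sketched in a few lines. This is an illustrative approximation of the general technique, not code from Hsyu or any other cited reference; the trailing-the-best cutoff and `patience` window are assumptions:

```python
def run_trials(hyperparam_sets, num_steps, eval_fn, patience=3):
    """Evaluate several hyperparameter trials round-robin, early-stopping
    trials whose evaluation value persistently lags the best trial.

    eval_fn(params, step) -> float evaluation value (higher is better).
    The 0.5-of-best cutoff and `patience` window are illustrative only.
    """
    active = dict(enumerate(hyperparam_sets))   # trial id -> hyperparams
    history = {i: [] for i in active}           # evaluation values per trial
    for step in range(num_steps):
        scores = {i: eval_fn(hp, step) for i, hp in active.items()}
        for i, s in scores.items():
            history[i].append(s)
        best = max(scores.values())
        for i in list(active):
            recent = history[i][-patience:]
            # Early-stop a trial that has badly trailed the best trial
            # for `patience` consecutive evaluations.
            if len(recent) == patience and all(s < 0.5 * best for s in recent):
                del active[i]
    return history, set(active)                 # surviving trial ids
```

A trial's training would continue only while its id remains in the surviving set, which mirrors claim 14's "continue training the learning model only in the selected trials."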
In regards to claim 15: While Hsyu makes reference to user-defined parameters (Background), the combination of Katsuki, Zhu, and Hsyu does not explicitly teach “wherein the processor is configured to stop a trial when the evaluation value satisfies a constraint condition designated by a user.” However, Przybylski teaches “The control system 150 may thereby facilitate a user in determining whether the model form input by the user resulted in a system model that provides satisfactorily or sufficiently accurate predictions. If the user determines that the system model does not fit the system behavior sufficiently, the user may select an option provided by the control system 150 to return to step 902, where the control system 150 prompts the user to input or edit the parameterized model form.” ([0128]). See above how a combination of Katsuki, Zhu, and Przybylski would have been obvious to one of ordinary skill in the art. It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to integrate the user-level control over a target or final model’s accuracy threshold in a combination of Przybylski’s user interface and Hsyu’s multiple-trial accuracies and early-stop capabilities to improve the accuracy of the final model.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Abadi, Martín, et al. "TensorFlow: A system for large-scale machine learning." 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). 2016.

Bisong, Ekaba. Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners. 2019.

Murray, Derek G., et al. "tf.data: A machine learning data processing framework." arXiv preprint arXiv:2101.12127 (2021).
(Wrong date, but references methods used by tf.data since 2017 (Page 2))

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN whose telephone number is (703)756-1473. The examiner can normally be reached M - F 7:30 - 4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GRIFFIN TANNER BEAN/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Sep 09, 2021
Application Filed
Oct 28, 2024
Non-Final Rejection — §101, §103, §112
May 08, 2025
Response Filed
Jul 14, 2025
Final Rejection — §101, §103, §112
Nov 12, 2025
Response after Non-Final Action
Jan 21, 2026
Request for Continued Examination
Jan 28, 2026
Response after Non-Final Action
Feb 05, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12424302
ACCELERATED MOLECULAR DYNAMICS SIMULATION METHOD ON A QUANTUM-CLASSICAL HYBRID COMPUTING SYSTEM
2y 5m to grant Granted Sep 23, 2025
Patent 12314861
SYSTEMS AND METHODS FOR SEMI-SUPERVISED LEARNING WITH CONTRASTIVE GRAPH REGULARIZATION
2y 5m to grant Granted May 27, 2025
Patent 12261947
LEARNING SYSTEM, LEARNING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 25, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
21%
Grant Probability
50%
With Interview (+28.4%)
4y 4m
Median Time to Grant
High
PTA Risk
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
