Prosecution Insights
Last updated: April 19, 2026
Application No. 17/386,812

SOFTWARE MODEL TESTING FOR MACHINE LEARNING MODELS

Status: Final Rejection (§101, §103)
Filed: Jul 28, 2021
Examiner: STORK, KYLE R
Art Unit: 2128
Tech Center: 2100 — Computer Architecture & Software
Assignee: BANK OF AMERICA CORPORATION
OA Round: 4 (Final)

Grant Probability: 64% (Moderate)
OA Rounds: 5-6
To Grant: 4y 0m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 554 granted / 865 resolved; +9.0% vs TC avg)
Interview Lift: +28.3% (strong), for resolved cases with an interview vs. without
Typical Timeline: 4y 0m avg prosecution; 51 currently pending
Career History: 916 total applications across all art units

Statute-Specific Performance

§101: 14.9% (-25.1% vs TC avg)
§103: 58.5% (+18.5% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 865 resolved cases.

Office Action

§101 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This final office action is in response to the amendment filed 28 October 2025. Claims 1-20 are pending. Claims 1, 8, and 15 are independent claims.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 remain rejected under 35 U.S.C. 101 because the claimed inventions are directed to an abstract idea without significantly more. When considering subject matter eligibility under 35 USC 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1; MPEP 2106.03). If the claim falls within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed toward a judicial exception (Step 2A; MPEP 2106.04). This step is broken into two prongs. The first prong (Step 2A, Prong 1) determines whether the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined at Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2; MPEP 2106.04), which determines whether the claims integrate the judicial exception into a practical application. If the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determine whether the claims amount to significantly more than the judicial exception (Step 2B; MPEP 2106.05).
If an abstract idea is present in the claim, then in order to recite statutory subject matter, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application or amounts to significantly more than the abstract idea itself (see: 2019 PEG).

Step 1 for all claims: Under the first part of the analysis, claims 1-7 and claims 8-14 recite a machine and a method, respectively, which fall within the four statutory categories, and the analysis for those claims now proceeds to Step 2A, Prongs 1 and 2, and then Step 2B. Claims 15-20 are rejected for reciting “a non-transitory computer-readable medium”, thus encompassing signals per se and being directed to non-statutory subject matter (see, e.g., Mentor Graphics v. EVE-USA, Inc.).

Claim 1:

Step 2A, prong 1: Following the determination that the claims fall within one of the statutory categories (Step 1), it must be determined if the claims recite a judicial exception (Step 2A, Prong 1). In this instance, the claims are determined to recite a judicial exception (abstract idea; mental process). With respect to claim 1, the claim recites the following elements:

“transform the first training data into a first set of hexadecimal values, wherein each hexadecimal value is a base-16 numerical value,” which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person performing a calculation to transform data to hexadecimal. Further, this limitation recites a mathematical concept.

“output a first classification value based on the first set of hexadecimal values”, which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea.
For example, the context of this claim encompasses a person deciding upon a label for a given hexadecimal value.

“transform the first test data into a second set of hexadecimal values,” which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person performing a calculation to transform data to hexadecimal. Further, this limitation recites a mathematical concept.

“transform the second training data into a third set of hexadecimal values,” which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person performing a calculation to transform data to hexadecimal. Further, this limitation recites a mathematical concept.

“output a second classification value based on the third set of hexadecimal values,” which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person deciding upon a label for a given hexadecimal value.

“transforming the second test data into a fourth set of hexadecimal values,” which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person performing a calculation to transform data to hexadecimal. Further, this limitation recites a mathematical concept.
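For orientation only, the recited hexadecimal transformation can be sketched in a few lines. This is a hypothetical Python illustration, not the application's implementation; the function names are ours, and the second variant reflects claim 2's later-quoted "transforming each byte into two hexadecimal values":

```python
# Illustrative sketch of "transform ... into a set of hexadecimal
# values". Names and structure are assumptions for illustration only.

def to_hex_values(data: bytes) -> list[str]:
    # One two-digit base-16 value per input byte.
    return [format(b, "02x") for b in data]

def to_nibble_values(data: bytes) -> list[str]:
    # Claim 2's variant: each byte parsed into two hexadecimal values.
    return [digit for b in data for digit in format(b, "02x")]

print(to_hex_values(b"ML"))     # ['4d', '4c']
print(to_nibble_values(b"ML"))  # ['4', 'd', '4', 'c']
```

The triviality of this computation is consistent with the examiner's characterization of the limitation as performable with pencil and paper.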
“determining first performance metrics for the first machine learning model based at least in part on a performance of the processor while the first machine learning model generates the first classification value, the first performance metrics comprising a first level of accuracy, a first number of features used, and a first processing time associated with the first machine learning model” which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person selecting a measure by which to evaluate model performance upon receiving processor performance information.

“determining second performance metrics for the second machine learning model based at least in part on a performance of the processor while the second machine learning model generates the second classification value, the second performance metrics comprising a second level of accuracy, a second number of features used, and a second processing time associated with the second machine learning model, wherein testing each of the first machine learning model and the second machine learning model in hexadecimal format allows a comparison among the first machine learning model and the second machine learning model that would otherwise require different data types,” which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person selecting a measure by which to evaluate model performance upon receiving processor performance information.
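As context for the recited performance metrics (a level of accuracy, a number of features used, and a processing time), one minimal way to collect such metrics might look like the following. This is a hypothetical Python sketch; the measure function and the stand-in model are illustrative assumptions, not the claimed system:

```python
import time

# Hypothetical sketch of gathering the recited performance metrics
# for one model. The `model` callable is a stand-in, not the claimed system.

def measure(model, test_inputs, expected_labels, n_features):
    start = time.perf_counter()
    predictions = [model(x) for x in test_inputs]
    elapsed = time.perf_counter() - start
    correct = sum(p == y for p, y in zip(predictions, expected_labels))
    return {
        "accuracy": correct / len(expected_labels),
        "features_used": n_features,
        "processing_time": elapsed,
    }

# Toy usage: a trivial "model" that labels odd values as 1.
metrics = measure(lambda x: x % 2, [1, 2, 3, 4], [1, 0, 1, 0], n_features=1)
print(metrics["accuracy"])  # 1.0
```

Running two models through the same harness yields directly comparable metric dictionaries, which is the comparison the claim goes on to recite.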
“compare the first performance metrics for the first machine learning model with the second performance metrics for the second machine learning model” (abstract idea; this is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person performing an evaluation to compare the performance of two machine learning models.)

“determine, based at least in part on the comparison, that the first machine learning model yields a higher performance compared to the second machine learning model” (abstract idea; this is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person performing an evaluation to compare the performance of two machine learning models.)

“generate a model comparison report that comprises the first performance metrics for the first machine learning model and the second performance metrics for the second machine learning model, and information to recreate the first machine learning model, the information comprising weights and biases of neural network layers of the first machine learning model”, which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person selecting and aggregating a portion of performance metric information.

Step 2A, prong 2: Accordingly, after determining that a claim recites a judicial exception in Step 2A Prong One, examiners should evaluate whether the claim as a whole integrates the recited judicial exception into a practical application of the exception in Step 2A Prong Two.
A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception (MPEP 2106.04(d)). The claim further recites the additional elements of:

“a first training data having a first data type; and a first test data having the first data type”
“a second training data having a second data type; and a second test data having the second data type, wherein the first data type is different from the second data type,”
“generate the set of machine learning models based on the hyperparameters, wherein: the number of machine learning models within the set of machine learning models is equal to the quantity value,”
“each machine learning model is uniquely configured based on the settings identified by the hyperparameters,”
“generating the set of machine learning models comprises: generate a first machine learning model according to a first set of hyperparameters comprising a first number of neural network layers, a first epoch value, and a first tolerance level”
“generate a second machine learning model according to a second set of hyperparameters comprising a second number of neural network layers, a second epoch value, and a second tolerance level”
“train a first machine learning model from the set of machine learning models using the first set of … values, wherein training the first machine learning model from the set of machine learning models configures the first machine learning model to”
“train a second machine learning model from the set of machine learning models using the third set of … values, wherein training the second machine learning model configures the second machine learning model to: receive the third set of … values as an input,”
“test the second machine learning model, by: inputting the fourth set of … values into the second machine learning model and obtaining the second classification value from the second machine learning model,”
“recreate the first machine learning model using the information from the model comparison report”
“execute the trained and tested machine learning model by executing the executable file”
“generate, by the trained and tested first machine learning model, an output classification value based on a new input dataset”

These additional elements amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.

Additionally, the claim recites the additional elements: “a processor operably coupled to the memory and configured to,” and “a second processor communicatively coupled with the processor and associated with the user device”. These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)).
Additionally, the claim recites the additional elements:

“receive a user input that identifies: a first machine learning model type; and hyperparameters comprising: a quantity value identifying a number of machine learning models for a set of machine learning models; and settings for each machine learning model within the set of machine learning models”
“receive … values as an input”
“testing the first machine learning model by: … inputting the second set of … values into the first machine learning model within the set of machine learning models”
“obtaining a first classification value from the first machine learning model within the set of machine learning models,”
“generate an executable file that comprises the trained and tested first machine learning model… wherein a graphical representation of the first machine learning model is highlighted indicating that the first machine learning model yields the higher performance compared to the second machine learning model”

These additional elements amount to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to the judicial exception(s) do not amount to significantly more than the exception(s) itself, and cannot integrate a judicial exception(s) into a practical application. For these reasons, the claim contains no additional elements which integrate the abstract idea into a practical application, and the claim is directed towards an abstract idea.

Step 2B: Based on the determination in Step 2A of the analysis that the claims are directed toward a judicial exception, it must be determined if any claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception (Step 2B).
The claim further recites the additional elements of:

“a first training data having a first data type; and a first test data having the first data type”
“a second training data having a second data type; and a second test data having the second data type, wherein the first data type is different from the second data type,”
“generate the set of machine learning models based on the hyperparameters, wherein: the number of machine learning models within the set of machine learning models is equal to the quantity value,”
“each machine learning model is uniquely configured based on the settings identified by the hyperparameters,”
“generating the set of machine learning models comprises: generate a first machine learning model according to a first set of hyperparameters comprising a first number of neural network layers, a first epoch value, and a first tolerance level”
“generate a second machine learning model according to a second set of hyperparameters comprising a second number of neural network layers, a second epoch value, and a second tolerance level”
“train a first machine learning model from the set of machine learning models using the first set of … values, wherein training the first machine learning model from the set of machine learning models configures the first machine learning model to”
“train a second machine learning model from the set of machine learning models using the third set of … values, wherein training the second machine learning model configures the second machine learning model to: receive the third set of … values as an input,”
“test the second machine learning model, by: inputting the fourth set of … values into the second machine learning model and obtaining the second classification value from the second machine learning model,”
“recreate the first machine learning model using the information from the model comparison report”
“execute the trained and tested machine learning model by executing the executable file”
“generate, by the trained and tested first machine learning model, an output classification value based on a new input dataset”

These additional elements amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use. As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself, and cannot integrate a judicial exception into a practical application.

Additionally, the claim recites the additional elements: “a processor operably coupled to the memory and configured to,” and “a second processor communicatively coupled with the processor and associated with the user device”. These additional elements are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component (see MPEP 2106.05(f)).
Additionally, the claim recites the additional elements:

“receive a user input that identifies: a first machine learning model type; and hyperparameters comprising: a quantity value identifying a number of machine learning models for a set of machine learning models; and settings for each machine learning model within the set of machine learning models”
“receive … values as an input”
“testing the first machine learning model by: … inputting the second set of … values into the first machine learning model within the set of machine learning models”
“obtaining a first classification value from the first machine learning model within the set of machine learning models,”
“generate an executable file that comprises the trained and tested first machine learning model… wherein a graphical representation of the first machine learning model is highlighted indicating that the first machine learning model yields the higher performance compared to the second machine learning model”

These additional elements amount to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to the judicial exception(s) do not amount to significantly more than the exception(s) itself, and cannot integrate a judicial exception(s) into a practical application. In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception.

Claim 2: With respect to claim 2, the claim depends upon independent claim 1, and the same analysis is incorporated herein.
Step 2A, prong 1: The claim additionally recites the mathematical concept of “parsing the first training data into a plurality of bytes.” This is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person mentally evaluating training data to parse the data. Further, the claim recites “transforming each byte into two hexadecimal values”. This element recites a mathematical concept.

Claim 3: With respect to claim 3, the claim depends upon independent claim 1, and the same analysis is incorporated herein.

Step 2A, prong 1: Additionally, the claim recites the mathematical concept of “converting the one or more keywords into the first set of hexadecimal values”. This is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea. For example, the context of this claim encompasses a person performing an evaluation to transform keywords into hexadecimal values.

Step 2A, prong 2: Further, the claim recites the additional elements of: “wherein: the first training data comprises text”, which links the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)); and “identifying one or more keywords within the text”, which amounts to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to the judicial exception(s) do not amount to significantly more than the exception(s) itself, and cannot integrate a judicial exception(s) into a practical application.
Thus, the claim contains no additional elements which integrate the abstract idea into a practical application, and the claim is directed towards an abstract idea.

Step 2B: The limitations recited above do not amount to significantly more than the judicial exception. As stated above, the limitation describing the training data comprising text recites a field of use (see MPEP 2106.05(h)). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). The courts have found limitations directed to obtaining or transmitting information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping,” and "storing and retrieving information in memory"; buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096), and thus such limitations do not add significantly more to the judicial exception. Thus, the limitations do not amount to significantly more than the exception itself, and cannot integrate the judicial exception into a practical application.

Claim 4: With respect to claim 4, the claim depends upon independent claim 1, and the same analysis is incorporated herein.

Step 2A, prong 1: The claim additionally recites the mathematical concepts of “transforming the first training data into the first set of hexadecimal values” and “transforming the plurality of pixels to the first set of hexadecimal values”. This is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper, and is thus an abstract idea.
For example, the context of this claim encompasses a person performing an evaluation to transform pixels into hexadecimal values.

Step 2A, prong 2: Additionally, the claim recites the elements of: “wherein: the first training data comprises an image”, which links the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)); and “identifying a plurality of pixels within the image that correspond with an object present in the image”, which amounts to extra-solution activity of gathering data for use in the claimed process. As described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to the judicial exception(s) do not amount to significantly more than the exception(s) itself, and cannot integrate a judicial exception(s) into a practical application. Thus, the claim contains no additional elements which integrate the abstract idea into a practical application, and the claim is directed towards an abstract idea.

Step 2B: Further, the limitations recited above do not amount to significantly more than the judicial exception. As stated above, the limitation describing the training data comprising an image recites a field of use (see MPEP 2106.05(h)). As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). The courts have found limitations directed to obtaining or transmitting information electronically, recited at a high level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping,” and "storing and retrieving information in memory"; buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096), and thus such limitations do not add significantly more to the judicial exception. Thus, the limitations do not amount to significantly more than the exception itself, and cannot integrate the judicial exception into a practical application.

Claim 5: With respect to claim 5, the claim depends upon independent claim 1, and the same analysis is incorporated herein.

Step 2A, prong 1: The claim recites the additional mathematical concept of “convert classification values from each machine learning model into a third set of hexadecimal values”. Thus, the claim recites an abstract idea.

Claim 6: With respect to claim 6, the claim depends upon independent claim 1, and the same analysis is incorporated herein.

Step 2A, prong 1: The claim additionally recites the mental process step of “determine a level of accuracy for each machine learning model within the set of machine learning models based on classification values from each machine learning model”, which is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind, with perhaps the help of pencil and paper. For example, the context of this claim encompasses a person assigning a level of accuracy to a model based on received information. Thus, the claim recites an abstract idea.

Step 2A, prong 2: The claim recites the additional element of “the model comparison report comprises the level of accuracy”, which links the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)).

Step 2B: The limitation recited above does not amount to significantly more than the judicial exception. As stated above, the limitation describing the comparison report comprising accuracy recites a field of use (see MPEP 2106.05(h)).
As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n.14 (1981). Thus, the limitation does not amount to significantly more than the exception itself, and cannot integrate the judicial exception into a practical application.

Claim 7: With respect to claim 7, the claim depends upon independent claim 1, and the same analysis is incorporated herein.

Step 2A, prong 2: The claim recites the additional elements of: “wherein: the processor is further configured to determine a processing time for the first machine learning model within the set of machine learning models to determine the first classification value”, which amounts to extra-solution activity of gathering data for use in the claimed process (as described in MPEP 2106.05(g), limitations that amount to merely adding insignificant extra-solution activity to the judicial exception(s) do not amount to significantly more than the exception(s) itself, and cannot integrate a judicial exception(s) into a practical application); and “the model comparison report comprises the processing time”, which links the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). Thus, the claim contains no additional elements which integrate the abstract idea into a practical application, and the claim is directed towards an abstract idea.

Step 2B: The limitation recited above does not amount to significantly more than the judicial exception. As stated above, the limitation describing the model comparison comprising processing time recites a field of use (see MPEP 2106.05(h)).
As explained by the Supreme Court, a claim directed to a judicial exception cannot be made eligible "simply by having the applicant acquiesce to limiting the reach of the patent for the formula to a particular technological use." Diamond v. Diehr, 450 U.S. 175, 192 n.14, 209 USPQ 1, 10 n. 14 (1981). The courts have found that limitations directed to obtaining or transmitting information electronically, recited at ahigh level of generality, to be well-understood, routine, and conventional (see MPEP 2106.05(d)(II), “receiving or transmitting data over a network”, "electronic record keeping,” and "storing and retrieving information in memory"; buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096) and thus, do not add significantly more to the judicial exception. Thus, the limitations do not amount to significantly more than the exception itself, and cannot integrate the judicial exception into a practical application. Claims 8-14 are substantially similar to claims 1-7 and therefore, they are also rejected for the reasons set forth in Steps 2A and 2B of claims 1-7, respectively. Claims 15-19 and 20 are substantially similar to claims 1-5 and 7, respectively, and therefore, in addition to failing to recite one of the statutory categories (Step 1), they are also rejected for the reasons set forth in Steps 2A and 2B of claims 1-5 and 7, respectively. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-3, 5-10, 12-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sturlaugson et al. (US 2016/0358099, published 8 December 2016, hereafter Sturlaugson) in view of Schmidtler et al. (US 2018/0013772, published 11 January 2018, hereafter Schmidtler), further in view of Srinivasaraghavan et al. (US 11,070,881, patented 20 July 2021, hereafter Srinivasaraghavan), further in view of Velagapudi et al. (US 2021/0232980, filed 28 January 2020, hereafter Velagapudi), and further in view of Mopur et al. (US 2020/0151619, filed 9 November 2018, hereafter Mopur). 
Regarding claim 1, Sturlaugson teaches a machine learning model testing device comprising: a memory operable to store (Figure 1, item 16): a first training data having a first data type (paragraph 0036); a first test data having the first data type (Paragraph 0036, Lines 1-5, “Experiment module 30 may be configured, optionally for each machine learning model 32 independently, to divide the dataset into a training dataset (a subset of the dataset) and an evaluation dataset (another subset of the dataset)”); a second training data (paragraph 0036) having the second data type (paragraph 0034); and a second test data (paragraph 0036: Here, first and second machine learning models are trained and evaluated. This includes dividing the dataset into a training dataset and an evaluation dataset for each model (“The same training dataset and evaluation dataset may be used for one or more, optionally all of the machine learning models;” emphasis added)); and a processor operably coupled to the memory (Figure 1, item 10), and configured to: receive a user input that identifies (Paragraph [0022], Lines 16-18, “Thus, the selection of machine learning models 32 for the data input module 20 may be a (user) selection of machine learning algorithms and their associated parameter(s)”): a first machine learning model type and hyperparameters comprising (paragraph 0034): a quantity value identifying a number of machine learning models for a set of machine learning models (paragraph 0034); and settings for each machine learning model within the set of machine learning models (Sturlaugson: Paragraph [0034], Lines 4-23, “Automatic and/or autonomous design of experiments may include determining the order of machine learning models 32 to test and/or which machine learning models 32 to test. 
For example, the selection of machine learning models 32 received by the data input module 20 may include specific machine learning algorithms and a range and/or a set of one or more associated parameters to test … the experiment module 30 may generate a machine learning model 32 for each unique combination of parameters specified by the selection … The experiment module 30 may interpret this selection as at least four machine learning models”); generate the set of machine learning models based on the hyperparameters, wherein: the number of machine learning models within the set of machine learning models is equal to the quantity value (paragraph 0040); and each machine learning model is uniquely configured based on the settings identified by the hyperparameters (Sturlaugson: Paragraph [0040], Lines 1-3, “Experiment module 30 is configured to train each of the machine learning models 32 using supervised learning to produce a trained model for each machine learning model”); generating the set of machine learning models comprises: generating a first machine learning model according to a first set of hyperparameters (paragraph 0017), a first epoch value (paragraph 0022), and a first tolerance level (Figure 3; paragraph 0022: Here, the first machine learning model is trained based on the data set); and generating a second machine learning model according to a second set of hyperparameters (paragraph 0017), a second epoch value (paragraph 0022), and a second tolerance level (Figure 3; paragraph 0022: Here, the second machine learning model is trained based on the data set); train a first machine learning model from the set of machine learning models using the first set of values, wherein training the first machine learning model from the set of machine learning models configures the first machine learning model to: receive the first set of values as an input (paragraph 0036: Here, a plurality of learning models, a first machine learning model and a second machine learning model, 
are disclosed. Each of these models is trained using a training dataset and tested using an evaluation dataset); output a first classification value based on the first set of values (paragraph 0043); test the first machine learning model by: inputting the second set of values into the first machine learning model within the set of machine learning models (paragraph 0036: Here, a plurality of learning models, a first machine learning model and a second machine learning model, are disclosed. Each of these models is trained using a training dataset and tested using an evaluation dataset); obtaining the first classification value from the first machine learning model within the set of machine learning models (paragraph 0043); and determining first performance metrics for the first machine learning model based at least in part on a performance of the processor while the first machine learning model generates the first classification value (Sturlaugson: Paragraph [0042], Lines 11-16, “the indicator, value, and/or result may be related to computational efficiency, memory required, and/or execution speed. The performance result for each machine learning model 32 may include at least one indicator, value, and/or result of the same type (e.g., all performance results include an accuracy)”); train a second machine learning model from the set of machine learning models using the third set of values, wherein training the second machine learning model configures the second machine learning model to (paragraph 0036: Here, a plurality of learning models, a first machine learning model and a second machine learning model, are disclosed. Each of these models is trained using a training dataset and tested using an evaluation dataset): receive the third set of values as an input (paragraph 0036: Here, a plurality of learning models, a first machine learning model and a second machine learning model, 
Each of these models is trained using a training dataset and tested using an evaluation dataset. In this instance, the training dataset of the first model is a first data set; the evaluation dataset of the first model is the second dataset; the training dataset of the second model is the third dataset; the evaluation dataset of the second model is the fourth dataset); output a second classification value based on the third set of values; test the second machine learning model by: inputting the fourth set of values into the second machine learning model (paragraph 0036: Here, a plurality of learning models, a first machine learning model and a second machine learning model, are disclosed. Each of these models is trained using a training dataset and tested using an evaluation dataset); obtaining the second classification value from the second machine learning model (paragraph 0043); and determining second performance metrics for the second machine learning model based at least in part on a performance of the processor while the second machine learning model generates the second classification value, wherein testing each of the first machine learning model and the second machine learning model allows a comparison among the first machine learning model and the second machine learning model (paragraphs 0040-0042 and 0056-0057); compare the first performance metrics for the first machine learning model with the second performance metrics for the second machine learning model (paragraph 0045: Here, the performance results of each model are aggregated and/or accumulated to allow for comparison of the plurality of models); determine, based at least in part upon the comparison, that the first machine learning model yields a higher performance compared to the second machine learning model (paragraph 0045: Here, the performance results of each model are aggregated and/or accumulated to allow for comparison of the plurality of models); generating an executable file that comprises the trained and 
tested first machine learning model (Figure 4, item 124; paragraphs 0040 and 0056-0057: Here, an executable machine learning model is trained and tested for use); generate a model comparison report that comprises the first performance metrics for the first machine learning model and the second performance metrics for the second machine learning model, wherein a graphical representation of the first machine learning model is compared to a second machine learning model (Figure 4, item 124; paragraphs 0040 and 0056-0057: Here, the performance/evaluation of each trained model is compared to generate performance statistics to be displayed to a user (paragraph 0060)); and output the model comparison report to a user device (Paragraph [0060], Lines 3-6, “Presenting 110 may include presenting the performance results for all of the machine learning models in a unified format to facilitate comparison of the machine learning models”); execute the trained and tested first machine learning model by executing the executable file (paragraphs 0053-0054: Here, the machine learning model is executed on the test data to determine the performance of the model); and generate, by the trained and tested first machine learning model, an output classification value based on a new input dataset (paragraphs 0053-0054: Here, the evaluated machine learning model data is presented (paragraph 0060)). Sturlaugson fails to teach: having the second data type, wherein the first data type is different from the second data type; transforming data into a set of hexadecimal values, wherein each hexadecimal value is a base-16 numerical value; each model comprising a number of neural network layers; the first performance metrics comprising a first level of accuracy, a first number of features used, and a first processing time associated with the first machine learning model; and the second performance metrics comprising a second level of accuracy, a second number of features used, and a second processing time associated with the 
second machine learning model; generating an executable file that comprises the trained and tested first machine learning model; information to recreate the first machine learning model, the information comprising weights and biases of neural network layers of the first machine learning model; the first machine learning model is highlighted indicating that the first machine learning model yields the higher performance compared to the second machine learning model; a second processor communicatively coupled with the processor and associated with the user device; and recreate the first machine learning model using the information from the model comparison report. However, Schmidtler, which is analogous to the claimed invention because it is directed toward processing hexadecimal data, discloses: convert the training data into a first set of hexadecimal values, wherein each hexadecimal value is a base-16 numerical value (Schmidtler: Paragraph [0029], Lines 2-6, “In aspects, extracted static data may be used to generate a feature vector. 
Generating a feature vector may comprise, for example, grouping static data fields and/or values, labeling identified anomalies, converting data into hex representations”); train the set of machine learning models using the first set of hexadecimal values, wherein training the set of machine learning models configures each machine learning model to: receive hexadecimal values as an input and output a classification value based on the input hexadecimal values (Schmidtler: Paragraph [0023], Lines 7-9, “Modeling engine 206 may perform data analysis and pattern recognition on the feature vectors to build and/or train one or more probabilistic models”); convert the test data into a second set of hexadecimal values; input the second set of hexadecimal values into each machine learning model within the set of machine learning models (Schmidtler: Paragraph [0023], Lines 9-13, “The probabilistic models may then be used determine the security status of a downloaded or downloading file. For example, portions of a downloading file and/or one or more corresponding feature vectors may be provided directly to a probabilistic model”: Once trained, the probabilistic models can be validated on portions of a file represented by feature vectors, in the form of hexadecimal values); and obtain a classification value from each machine learning model within the set of machine learning models, wherein the classification value identifies an input to a machine learning model (Schmidtler: Paragraph [0035], Lines 45-47, “a classification may be subdivided into a three-class classification problem defined by malicious files, potentially unwanted files/applications and benign files”). Sturlaugson and Schmidtler are considered to be of the same field of endeavor as both are pertinent to data analysis and identification/classification through supervised learning. 
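For illustration only (this sketch is not part of the prosecution record), the transformation Schmidtler describes, parsing data into bytes and representing each byte as base-16 values, can be expressed in a few lines of Python; the function name and sample input are hypothetical:

```python
# Hypothetical sketch of the byte-to-hexadecimal transformation described
# in Schmidtler: parse the input into bytes, then map each byte to a pair
# of base-16 digits (two hexadecimal values per byte, as recited in claim 2).

def to_hex_values(data: bytes) -> list[str]:
    # f"{byte:02x}" renders each byte as exactly two lowercase hex digits
    return [f"{byte:02x}" for byte in data]

# A 2-byte input yields two 2-digit hexadecimal values.
print(to_hex_values(b"AB"))  # prints ['41', '42']
```

The same function applies whether the source data is text keywords (claim 3) or any other byte-representable input, since Python exposes all of them as byte sequences.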
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Sturlaugson to incorporate the teachings of Schmidtler by converting training and testing data into hexadecimal values before then using them as input into the determined machine learning models of Sturlaugson for classification. Doing so would allow the system to take the hexadecimal representations (referred to as feature vectors in Schmidtler) and use them to classify and score potentially incongruent sections of input data (Schmidtler: Paragraph [0021]). Additionally, Srinivasaraghavan, which is analogous to the claimed invention because it is directed toward training two different models using two different types of data, discloses: a first training data having a first data type (Figure 1A; column 3, lines 13-27); a second training data having a second data type (Figure 1A; column 3, lines 40-54); and wherein the first data type is different from the second data type (column 3, lines 13-27 and 40-54: Here, a first model is trained using a first type of metadata associated with content items. Additionally, a second model is trained based on one or more second types of metadata that are different from the first types of metadata). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Srinivasaraghavan with Sturlaugson-Schmidtler, with a reasonable expectation of success, as it would have allowed for comparing models (Sturlaugson) trained on different types of data (Srinivasaraghavan). This would have allowed for identifying the best model for performing classification (Srinivasaraghavan: column 3, line 55 - column 4, line 12). 
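For illustration only (not part of the record), Sturlaugson's teaching that a model is generated "for each unique combination of parameters specified by the selection", so that a selection of two values for each of two parameters is interpreted as at least four models, amounts to a Cartesian product over the selected parameter ranges; a minimal Python sketch with hypothetical parameter names:

```python
# Hypothetical sketch of Sturlaugson's design-of-experiments step: one
# model configuration per unique combination of the selected parameters.
from itertools import product

def model_configurations(hyperparameters: dict) -> list[dict]:
    keys = list(hyperparameters)
    # Cartesian product over the value ranges gives every unique combination
    return [dict(zip(keys, values))
            for values in product(*(hyperparameters[k] for k in keys))]

# Two values for each of two parameters -> at least four models,
# mirroring Sturlaugson's example (parameter names are invented here).
configs = model_configurations({"learning_rate": [0.01, 0.1],
                                "epochs": [10, 100]})
print(len(configs))  # prints 4
```

Under this reading, the claimed "quantity value" corresponds to the size of the product set, and the "settings" for each model are one element of that set.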
Further, the examiner takes official notice that it was notoriously well-known in the art at the time of the applicant’s effective filing date that training a machine learning model may include training a model including a plurality of neural network layers. It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined this well-known technique with Srinivasaraghavan, with a reasonable expectation of success, as it would have allowed for training a model using conventional techniques. Additionally, Velagapudi, which is analogous to the claimed invention because it is directed toward comparing machine learning models, discloses: the first machine learning model is highlighted indicating that the first machine learning model yields the higher performance compared to the second machine learning model (Figure 11; paragraphs 0037 and 0052-0053: Here, potential differences between multiple instances of a machine learning model are analyzed and differences highlighted. This allows a user to identify the higher performance machine learning model); a second processor communicatively coupled with the processor and associated with the user device (Figure 9, items 912a and 912n); and information comprising weights of a neural network layer of the machine learning model (paragraph 0022: Here, weights are associated with features of the machine learning model. These weights associated with the original model and the current model may be compared (paragraph 0039)). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Velagapudi with Srinivasaraghavan, with a reasonable expectation of success, as it would have allowed for comparing multiple different models and displaying the results to a user to determine whether to accept/reject changes within the model (Velagapudi: paragraphs 0037 and 0052-0053). 
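For illustration only (not part of the record), the comparison step mapped above can be sketched from Sturlaugson's own metric: paragraph [0044] defines two-class accuracy as true positives plus true negatives divided by the total population, and paragraph [0045] describes aggregating such results to compare models. The counts below are invented for the example:

```python
# Hypothetical sketch: compute the two-class accuracy described in
# Sturlaugson para. [0044] ((TP + TN) / total population) for each model,
# then select the higher-performing one for the comparison report.

def accuracy(true_positives: int, true_negatives: int, total: int) -> float:
    return (true_positives + true_negatives) / total

# Invented counts over a population of 100 evaluation samples.
metrics = {
    "model_1": accuracy(45, 40, 100),  # 0.85
    "model_2": accuracy(30, 35, 100),  # 0.65
}
best = max(metrics, key=metrics.get)
print(best)  # prints model_1
```

A real comparison report would carry additional per-model fields (processing time, feature count), but the selection logic reduces to a maximum over aggregated metrics as shown.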
Finally, Mopur, which is analogous to the claimed invention because it is directed toward determining drift and improving training based upon the drift, discloses: the first performance metrics comprising a first level of accuracy, a first number of features used, and a first processing time associated with the first machine learning model (paragraph 0027: Here, a metadata package representing the state of the deployed machine learning model is aggregated. This includes prediction requests, outputs corresponding to prediction requests, time stamps (processing time associated with the model), prediction latency, ML model version, and edge device identification. These constitute a number of features used. This metadata may be used to recreate and implement the ML model (paragraphs 0027 and 0029) and compare the performance of the model to other models. Additionally, drift of the model (level of accuracy) may be determined (paragraph 0031)); the second performance metrics comprising a second level of accuracy, a second number of features used, and a second processing time associated with the second machine learning model (paragraph 0027: Here, a metadata package representing the state of the deployed machine learning model is aggregated. This includes prediction requests, outputs corresponding to prediction requests, time stamps (processing time associated with the model), prediction latency, ML model version, and edge device identification. These constitute a number of features used. This metadata may be used to recreate and implement the ML model (paragraphs 0027 and 0029) and compare the performance of the model to other models. 
Additionally, drift of the model (level of accuracy) may be determined (paragraph 0031)); information to recreate the first machine learning model, the information comprising biases of neural network layers of the first machine learning model (Figure 4; paragraphs 0027 and 0038: Here, the ML model may be recreated by unpacking and implementing the metadata associated with the ML model (paragraph 0027). Further, based upon implementing the ML model, biases, such as drift, may be identified and calculated to improve training of the model to alleviate the biases); and recreate the first machine learning model using the information from the model comparison report (paragraphs 0027 and 0029: Here, a machine learning model may be recreated based upon the associated metadata to evaluate the model at a given time). It would have been obvious to one of ordinary skill in the art at the time of the applicant’s effective filing date to have combined Mopur with Sturlaugson-Schmidtler-Srinivasaraghavan-Velagapudi, with a reasonable expectation of success, as it would have allowed for identifying drift and training models to address the specific type of identified drift (Mopur: paragraphs 0021-0022). Regarding claim 2, Sturlaugson, Schmidtler, Srinivasaraghavan, Velagapudi, and Mopur disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Schmidtler further discloses wherein transforming the first training data into the first set of hexadecimal values comprises: parsing the first training data into a plurality of bytes and transforming each byte into two hexadecimal values (Schmidtler: Paragraph [0029], Lines 2-6, “In aspects, extracted static data may be used to generate a feature vector. 
Generating a feature vector may comprise, for example, grouping static data fields and/or values, labeling identified anomalies, converting data into hex representations”; See Table 1, column titled String/bytes; The data used for creating hexadecimal feature values can be in the form of text strings/bytes, which are then converted into hexadecimal values, of which there are two for each byte.). Sturlaugson and Schmidtler are considered to be of the same field of endeavor as both are pertinent to data analysis and identification/classification through supervised learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Sturlaugson to incorporate the teachings of Schmidtler by converting the training data for the multiple challenger models into hexadecimal format by first parsing it into bytes. Doing so would allow for the preprocessing of input data to create feature vectors, which can then be used to classify grouped portions of input data (Schmidtler: Paragraph [0021]). Regarding claim 3, Sturlaugson, Schmidtler, Srinivasaraghavan, Velagapudi, and Mopur disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Schmidtler further discloses wherein the first training data comprises text (Schmidtler: Paragraph [0021], Lines 3-7, “In aspects, feature vector engine 204 may use extracted static data points from a file to construct one or more feature vectors. 
The feature vector may comprise static data from multiple categories (e.g., numerical values, nominal values, string values, Boolean values, etc.)”); transforming the first training data into the first set of hexadecimal values comprises: identifying one or more keywords within the text and transforming the one or more keywords into the first set of hexadecimal values (Schmidtler: Paragraph [0021], Lines 7-10, “In examples, constructing a feature vector may comprise, for example, grouping values, labeling identified anomalies in the file, converting data into hex representations”). Sturlaugson and Schmidtler are considered to be of the same field of endeavor as both are pertinent to data analysis and identification/classification through supervised learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Sturlaugson to incorporate the teachings of Schmidtler by, upon receiving text input data, converting keywords within the text into hexadecimal format. Doing so would allow the system to preprocess selected segments of input text, which could then be used to classify the selected segments, and then the whole text, as potentially unwanted or malicious (Schmidtler: Paragraph [0021]). Regarding claim 5, Sturlaugson, Schmidtler, Srinivasaraghavan, Velagapudi, and Mopur disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Schmidtler further discloses wherein the processor is further configured to transform classification values from each machine learning model into a fifth set of hexadecimal values (Schmidtler: Paragraphs [0035-0036], “In aspects, scores or other values may be determined and assigned to a feature vector or one or more data points in a feature vector ... 
In aspects, the combined information and scored for each feature may be used to accurately determine the security classification (e.g., malicious, potentially unwanted, benign, etc.) of a file. As an example, the resulting feature vector for the four data points associated with the above PE file is shown below: Table-US-00002”, The classification/score/label of the feature vectors can be combined into a feature vector in the form of hexadecimal values with the original data for future use.). Sturlaugson and Schmidtler are considered to be of the same field of endeavor as both are pertinent to data analysis and identification/classification through supervised learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Sturlaugson to incorporate the teachings of Schmidtler by converting output from the candidate models into hexadecimal format. Doing so would allow the system to perform further classification/identification on a larger dataset/input (Schmidtler: Paragraph [0036], Lines 20-23, “In aspects, the combined information and scored for each feature may be used to accurately determine the security classification (e.g., malicious, potentially unwanted, benign, etc.) of a file”). Regarding claim 6, Sturlaugson, Schmidtler, Srinivasaraghavan, Velagapudi, and Mopur disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. 
Sturlaugson further discloses wherein: the processor is further configured to determine a level of accuracy for the first machine learning model within the set of machine learning models based on classification values from each machine learning model (Sturlaugson: Paragraph [0044], Lines 1-3, “For two-class classification schemes, accuracy is the total number of true positives and true negatives divided by the total population”); and the model comparison report comprises the level of accuracy (Sturlaugson: Paragraph [0045], Lines 13-17, “The performance comparison statistics may include at least one indicator, value, and/or result of the same type for each machine learning model 32 (e.g., the performance comparison statistics include an accuracy for each machine learning model 32)”). Regarding claim 7, Sturlaugson further teaches wherein: the processor is further configured to determine a processing time for each machine learning model within the set of machine learning models to determine the first classification value; and the model comparison report comprises the processing time (Sturlaugson: Paragraph [0042], Lines 11-16, “the indicator, value, and/or result may be related to computational efficiency, memory required, and/or execution speed. The performance result for each machine learning model 32 may include at least one indicator, value, and/or result of the same type (e.g., all performance results include an accuracy)”). Claims 8-10 and 12-14 recite the same limitations as claims 1-3 and 5-7. Regarding the additional limitations, Sturlaugson teaches a machine learning model testing method (Sturlaugson: Paragraph [0047], Lines 1-3, “FIG. 3 schematically illustrates methods 100 to test machine learning algorithms with data such as time-series data”). Claims 8-10 and 12-14 are thus rejected for reasons set forth in the rejections of claims 1-3 and 5-7. Claims 15-17, 19, and 20 recite the same limitations as claims 1-3, 5, and 7, respectively. 
Regarding the additional limitations, Sturlaugson teaches a computer program product comprising executable instructions stored in a non-transitory computer-readable medium that when executed by a processor causes the processor to implement a machine learning model testing method (Sturlaugson: Paragraph [0013], Lines 1-7, “As illustrated in FIG. 1, a machine learning system 10 is a computerized system that includes a processing unit 12 operatively coupled to a storage unit 14. The processing unit 12 is one or more devices configured to execute instructions for software and/or firmware. The processing unit 12 may include one or more computer processors and may include a distributed group of computer processors”). Claims 15-17, 19, and 20 are thus rejected for reasons set forth in the rejections of claims 1-3, 5, and 7. Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sturlaugson, Schmidtler, Srinivasaraghavan, Velagapudi, and Mopur and further in view of Dong (Foreign Patent Application Publication No. CN 113139415 A, see attached English translation of description). Regarding claim 4, Sturlaugson, Schmidtler, Srinivasaraghavan, Velagapudi, and Mopur disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Sturlaugson fails to specifically disclose wherein: the first training data comprises an image and transforming the first training data into the first set of hexadecimal values comprises: identifying a plurality of pixels within the image that correspond with an object present in the image; and transforming the plurality of pixels to the first set of hexadecimal values. 
Dong, however, teaches wherein: the first training data comprises an image and transforming the first training data into the first set of hexadecimal values comprises: identifying a plurality of pixels within the image that correspond with an object present in the image; and transforming the plurality of pixels to the first set of hexadecimal values (Dong: Paragraph [n0094], “in order to further determine whether the first category key frames and the second category key frames are “real” key frames, the computer device may convert the first category key frames and the second category key frames into hash codes respectively. Among them, the process of converting a frame image into a hash code may include: assuming that the size of the frame image is x*y, its pixel matrix is stored in an n*n array; then the frame image is converted to a z*z size (z can be determined by the optimal image size that the hash algorithm can handle), and then the converted frame image is grayed, and the pixel difference of each pixel before and after graying is calculated, and the pixel difference is composed of an n'*n' array; and the pixel difference is usually in binary form, then the binary pixel difference can be converted into hexadecimal, which is the corresponding hash code”). Dong, Sturlaugson and Schmidtler are considered to be of the same field of endeavor as they are pertinent to data analysis and identification/classification through supervised learning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Sturlaugson and Schmidtler to incorporate the teachings of Dong by, upon receiving image input data, converting certain pixels from the input data into hexadecimal format. 
Doing so would allow the system to uniformly abstract image data for ease of comparison and classification (Dong: Paragraphs [n0094-n0097], “Specifically, in order to further determine whether the first category key frames and the second category key frames are “real” key frames, the computer device may convert the first category key frames and the second category key frames into hash codes respectively … calculating the confidence level of each hash code corresponding to each reference hash code in a preset hash code database”). Claim 11 recites the same limitations as claim 4. Regarding the additional limitations, Sturlaugson teaches a machine learning model testing method (Sturlaugson: Paragraph [0047], Lines 1-3, “FIG. 3 schematically illustrates methods 100 to test machine learning algorithms with data such as time-series data”). Claim 11 is thus rejected for reasons set forth in the rejection of claim 4. Claim 18 recites the same limitations as claim 4. Regarding the additional limitations, Sturlaugson teaches a computer program product comprising executable instructions stored in a non-transitory computer-readable medium that when executed by a processor causes the processor to implement a machine learning model testing method (Sturlaugson: Paragraph [0013], Lines 1-7, “As illustrated in FIG. 1, a machine learning system 10 is a computerized system that includes a processing unit 12 operatively coupled to a storage unit 14. The processing unit 12 is one or more devices configured to execute instructions for software and/or firmware. The processing unit 12 may include one or more computer processors and may include a distributed group of computer processors”). Claim 18 is thus rejected for reasons set forth in the rejection of claim 4. Response to Arguments Applicant’s arguments with respect to the rejection of claims under 35 USC 103 have been fully considered and are persuasive (pages 26-28). Therefore, the rejection has been withdrawn. 
However, upon further consideration, a new ground(s) of rejection is made in view of Sturlaugson, Schmidtler, Srinivasaraghavan, Velagapudi, and Mopur. Additionally, the factual assertion set forth in the Office Action dated 29 July 2025 has not been traversed. According to MPEP 2144.03(C), the official notice statement is taken to be admitted prior art because the applicant failed to traverse the examiner’s assertion.

Applicant's arguments with respect to the rejection of claims under 35 USC 101 have been fully considered but they are not persuasive.

Under Step 2A, Prong One, the applicant argues that “generate a… machine learning model according to a… set of hyperparameters,” “train a… machine learning model,” and “test the… machine learning model” are inherently machine operations and a human mind is not equipped to perform these operations (pages 17-18). However, the examiner notes that these limitations are not considered under Step 2A, Prong One. Instead, the examiner considers these limitations under Step 2A, Prong Two and Step 2B. For this reason, this argument is moot.

Under Step 2A, Prong One, the applicant further argues that the limitations reciting “information to recreate the first machine learning model, the information comprising weights and biases of neural network layers of the first machine learning model” and “a second process… configured to: recreate the first machine learning model using the information from the model comparison report” are specific computer-implemented operations and beyond human capabilities (page 18). However, the examiner notes that these limitations are not considered under Step 2A, Prong One. Instead, the examiner considers these limitations under Step 2A, Prong Two and Step 2B. For this reason, this argument is moot.

Under Step 2A, Prong Two, the applicant argues that the claims recite a technical solution to a technical problem (page 20).
Specifically, the applicant argues that the claims disclose an improvement to the functioning of a software testing device by improving the device’s ability to efficiently configure and test multiple types of machine learning models regardless of the types of input formats that they are natively configured to use (page 21). The applicant argues that “testing each of the first machine learning models and the second machine learning model in hexadecimal format allows a comparison among the first machine learning model and the second machine learning model that would otherwise require different data types” (page 21).

However, the examiner notes that this appears to be directed to merely claiming the idea of the solution or outcome (comparison among the first machine learning model and the second machine learning model that would otherwise require different data types) instead of claiming a particular solution to a problem or a particular way to achieve the desired outcome. In instances where the claims recite only the idea of a solution or outcome, the claims fail to integrate a judicial exception into a practical application in Step 2A, Prong Two (MPEP 2106.05(a)). For this reason, this argument is not persuasive.

The applicant further argues that recreating and deploying a machine learning model disclose a practical application that improves the technical field of software development and testing (pages 23-24). However, the examiner notes that this appears to be directed to merely claiming the idea of the solution or outcome (improves the technical field of software development and testing) instead of claiming a particular solution to a problem or a particular way to achieve the desired outcome. It is unclear how recreating and deploying a machine learning model improves the technical field of software development and testing.
In instances where the claims recite only the idea of a solution or outcome, the claims fail to integrate a judicial exception into a practical application in Step 2A, Prong Two (MPEP 2106.05(a)). For this reason, this argument is not persuasive.

Under Step 2B, the applicant argues that the claims recite significantly more than the exception because the claimed combination, when viewed as an ordered combination, is not a well-understood, routine, or conventional implementation of the alleged abstract idea (pages 24-26). The examiner notes that under Step 2B, it must be determined whether the claims contain any element or combination of elements sufficient to ensure that the claims amount to significantly more than the judicial exception. In this instance, after considering all claim elements individually and as an ordered combination, it is determined that the claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. For this reason, this argument is not persuasive.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Alelyani (Detection and Evaluation of Machine Learning Bias, published 7 July 2021) discloses detecting and evaluating bias in machine learning models using a wrapper technique and swapping potentially biased attributes to evaluate divergence (Abstract).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK whose telephone number is (571) 272-4130. The examiner can normally be reached 8am - 2pm; 4pm - 6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas, can be reached at (571) 272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KYLE R STORK/
Primary Examiner, Art Unit 2128

Prosecution Timeline

Jul 28, 2021
Application Filed
Sep 25, 2024
Non-Final Rejection — §101, §103
Nov 29, 2024
Interview Requested
Dec 12, 2024
Examiner Interview Summary
Dec 17, 2024
Response Filed
Apr 03, 2025
Final Rejection — §101, §103
May 20, 2025
Applicant Interview (Telephonic)
May 22, 2025
Examiner Interview Summary
Jun 06, 2025
Request for Continued Examination
Jun 10, 2025
Response after Non-Final Action
Jul 25, 2025
Non-Final Rejection — §101, §103
Aug 06, 2025
Applicant Interview (Telephonic)
Aug 09, 2025
Examiner Interview Summary
Oct 28, 2025
Response Filed
Feb 01, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585935
EXECUTION BEHAVIOR ANALYSIS TEXT-BASED ENSEMBLE MALWARE DETECTOR
2y 5m to grant Granted Mar 24, 2026
Patent 12585937
SYSTEMS AND METHODS FOR DEEP LEARNING ENHANCED GARBAGE COLLECTION
2y 5m to grant Granted Mar 24, 2026
Patent 12585869
RECOMMENDATION PLATFORM FOR SKILL DEVELOPMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12579454
PROVIDING EXPLAINABLE MACHINE LEARNING MODEL RESULTS USING DISTRIBUTED LEDGERS
2y 5m to grant Granted Mar 17, 2026
Patent 12579412
SPIKE NEURAL NETWORK CIRCUIT INCLUDING SELF-CORRECTING CONTROL CIRCUIT AND METHOD OF OPERATION THEREOF
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
64%
Grant Probability
92%
With Interview (+28.3%)
4y 0m
Median Time to Grant
High
PTA Risk
Based on 865 resolved cases by this examiner. Grant probability derived from career allow rate.
