Prosecution Insights
Last updated: April 19, 2026
Application No. 18/206,452

TUNING HYPERPARAMETERS FOR POSTPROCESSING OUTPUT OF MACHINE LEARNING MODELS

Status: Non-Final Office Action (§103) — OA Round 1
Filed: Jun 06, 2023
Examiner: SPRATT, BEAU D
Art Unit: 2143
Tech Center: 2100 — Computer Architecture & Software
Assignee: LANDING AI

Predictions: 79% grant probability (Favorable) • 1-2 OA rounds expected • 3y 1m to grant • 99% grant probability with interview

Examiner Intelligence

Career Allow Rate: 79% (342 granted / 432 resolved) — above average, +24.2% vs Tech Center average
Interview Lift: strong, +26.6% among resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 37 applications currently pending
Career History: 469 total applications across all art units

Statute-Specific Performance

§101: 12.2% (-27.8% vs TC avg)
§103: 63.7% (+23.7% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)

Tech Center averages are estimates; based on career data from 432 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are presented in the case.

Claim Objections

Claims 1, 8 and 15 are objected to because of the following informalities: Claim 1, line 8 recites the phrase "determining a fitness metric value for each of the population of vectors," which should be "determining a fitness metric value for each vector in the population of vectors." Appropriate correction is required for the informalities above and wherever else they may occur.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 8-12 and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng et al. (US 8768868 B1, hereinafter Cheng) in view of Madineni et al. (US 20230237786 A1, hereinafter Madineni) and Koch et al. (US 10360517 B2, hereinafter Koch).

As to independent claim 1, Cheng teaches a computer-implemented method for tuning postprocessing hyperparameters of a machine learning model, the computer-implemented method comprising: [tuning threshold parameters with PSO, Col. 4 ln. 46-50]

initializing a population of vectors, each vector representing values of postprocessing hyperparameters for processing output score of the machine learning model; [maintains a population of solutions and parameters (threshold-offsets), Col. 5 ln. 15-39: "tuning the threshold-offsets with PSO" … "A population of potential solutions is maintained as the positions of a set of particles in a solution space"], [output score (responses r = 0, > 0), Col. 6 ln. 5-10: "A vote is cast by comparing the response to a predetermined value, such as zero. As a non-limiting example, if the response is equal to or greater than zero, it is one class; if less than zero, the other class."]

repeating a plurality of times: adding one or more vectors to the population of vectors, the one or more vectors generated from existing vectors of the population; [new vectors from existing (changes position/velocity toward best), Col. 5 ln. 25-39: "keeping track of the current position x.sub.i, and the best solution (p.sub.i) it has observed so far. A global best position variable (p.sub.g) stores the best location among all particles. The velocity of each particle is then changed"]

determining a fitness metric value for each of the population of vectors, comprising: [outputs a ROC/AUC to measure fitness, Col. 5 ln. 51-58: "optimize objective functions that utilize the geometric or arithmetic mean of the areas under the ROC (AUC) of each class or that of the true-positive (TP) rate given a particular false-positive rate"]

executing the machine learning model for an input data to obtain an output score, the input data associated with a ground truth label; [executes a classifier model and outputs a score (responses r = 0, > 0), Col. 6 ln. 1-10: "an input image (or image patch) 102 is received. A pair-wise multi-class classifier 104 first generates M(M-1)/2 responses r 106" … "A vote is cast by comparing the response to a predetermined value, such as zero. As a non-limiting example, if the response is equal to or greater than zero, it is one class; if less than zero, the other class."]

processing the output score using postprocessing hyperparameters represented by the vector to obtain a final output result; and [applies threshold to response r to produce final output, Col. 6 ln. 1-27: "The set of threshold-offsets x is applied 110 to the responses r 106 to obtain r' 112. Finally, a majority vote 114 is performed based on the responses to determine which class the input features are predicted to belong. The output y 116 represents the class prediction for each input feature"]

selecting a vector from the population of vectors based on the fitness metric values; and [uses threshold metric to select the best position to optimize, Col. 5 ln. 16-50: "Each particle is assigned a velocity vector and the particles then cooperatively explore the solution space in search of an objective function optima.", "A global best position variable (p.sub.g) stores the best location among all particles. The velocity of each particle is then changed towards p.sub.i and p.sub.g in a dynamical way"]

Cheng does not specifically teach determining the fitness metric value based on a difference between the final output result and the ground truth label.

However, Madineni teaches determining the fitness metric value based on a difference between the final output result and the ground truth label; [validates output (against valid labels in a test image) for error percentage (fitness), ¶25-27: "the final output of the classification, which is compared against the associated labels." … "trained neural network 140 passes the validation step (e.g., satisfactory percentage of errors for a particular runtime scenario), the trained neural network 140 and post-processing logic 146 may then be implemented in an application (e.g., a neural network inference application) in a run time environment to classify objects detected in an image"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification process disclosed by Cheng by incorporating the determining of the fitness metric value based on a difference between the final output result and the ground truth label taught by Madineni, because both techniques address the same field of machine learning, and incorporating Madineni into Cheng provides classification solutions that are easily adaptable to multiple classification scenarios, including known hierarchies having two or more levels, and that provide performance or other advantages over conventional systems [Madineni ¶4].

Cheng and Madineni do not specifically teach removing one or more vectors from the population based on fitness metric values of the one or more vectors; and using values of postprocessing hyperparameters from the vector selected for postprocessing of output of the machine learning model.
However, Koch teaches removing one or more vectors from the population based on fitness metric values of the one or more vectors; [removes hyperparameter configurations, Col. 34 ln. 30-35, from the configuration list (population of vectors) iteratively, Col. 32 ln. 4-22: "Grid search methods may be used in a first iteration to define the first set of hyperparameter configurations that sample the search space. The initial configuration list 322 is also called a “population”."] and using values of postprocessing hyperparameters from the vector selected for postprocessing of output of the machine learning model; [final hyperparameters are used (selected) based on best, Col. 35 ln. 15-18: "a final hyperparameter configuration is selected based on the hyperparameter configuration that generated the best or lowest objective function value."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification process disclosed by Cheng and Madineni by incorporating the removing of one or more vectors from the population based on fitness metric values of the one or more vectors, and the using of values of postprocessing hyperparameters from the selected vector for postprocessing of output of the machine learning model, as disclosed by Koch, because all of these techniques address the same field of machine learning, and incorporating Koch into Cheng and Madineni better identifies ideal hyperparameters with less manual effort [Koch Col. 48-61].

As to dependent claim 2, the rejection of claim 1 is incorporated. Cheng, Madineni and Koch further teach wherein the machine learning model classifies an input data to one of a plurality of categories, wherein the output score of the machine learning model is mapped to a category based on a plurality of thresholds, each threshold representing a postprocessing hyperparameter.
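The score-to-category mapping recited in claim 2 — an output score bucketed by a plurality of thresholds, each threshold a tunable postprocessing hyperparameter — can be sketched as follows. This is a hypothetical illustration; the function name, category labels, and threshold values are not taken from the cited art:

```python
from bisect import bisect_right

def score_to_category(score, thresholds, categories):
    """Map a raw model score to a category label: sorted thresholds split
    the score range into len(thresholds) + 1 buckets, one per category."""
    assert len(categories) == len(thresholds) + 1
    return categories[bisect_right(sorted(thresholds), score)]

# Hypothetical three-way split driven by two tunable thresholds.
cats = ["low", "medium", "high"]
print(score_to_category(0.2, [0.33, 0.66], cats))   # -> low
print(score_to_category(0.5, [0.33, 0.66], cats))   # -> medium
print(score_to_category(0.9, [0.33, 0.66], cats))   # -> high
```

Tuning then amounts to searching over the `thresholds` vector, which is exactly the vector of postprocessing hyperparameters the independent claims optimize.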
[Cheng: multi-class classifier with classes (model with multiple categories), and responses (scores) are compared to a value (threshold), Col. 1-2 ln. 53-5: "Each classification response in the set of classification responses r' is compared to a predetermined value to classify the set of input features as belonging to an object class."]

As to dependent claim 3, the rejection of claim 2 is incorporated. Cheng, Madineni and Koch further teach wherein each vector of the population of vectors represents a plurality of values, each value corresponding to a threshold from the plurality of thresholds. [Cheng: velocity and position vectors, values and sets of thresholds, Col. 5 ln. 16-50, ln. 53-5: "multiple thresholds need to be tuned in order to obtain the thresholds that achieve the desired operating points. The invention described herein solves this problem by tuning the threshold-offsets with PSO" … "A population of potential solutions is maintained as the positions of a set of particles in a solution space, where each dimension represents one solution component"]

As to dependent claim 4, the rejection of claim 1 is incorporated. Cheng, Madineni and Koch further teach wherein the fitness metric value is determined as an aggregate measure of difference between each final output result and corresponding ground truth label for a plurality of samples, each sample representing an input data with a ground truth label. [Cheng: AUC/TP/ROC are aggregates (sums/products) per class to measure performance, Col. 5 ln. 51-60: "A population of potential solutions is maintained as the positions of a set of particles in a solution space, where each dimension represents one solution component"]

As to dependent claim 5, the rejection of claim 1 is incorporated. Cheng, Madineni and Koch further teach wherein generating one or more vectors from existing vectors of the population comprises identifying a vector from the population of vectors and modifying one or more elements of the vector to obtain a new vector. [Cheng: optimize by changing the velocity vector, Col. 5 ln. 16-50: "Each particle is assigned a velocity vector and the particles then cooperatively explore the solution space in search of an objective function optima.", "A global best position variable (p.sub.g) stores the best location among all particles. The velocity of each particle is then changed towards p.sub.i and p.sub.g in a dynamical way"]

As to independent claim 8, Cheng teaches a non-transitory computer readable storage medium storing instructions that when executed by one or more computer processors, cause the one or more computer processors to perform steps for tuning postprocessing hyperparameters of a machine learning model, the steps comprising: [computer with storage, instructions and processing, Col. 4 ln. 13-35], [tuning threshold parameters with PSO, Col. 4 ln. 46-50]

initializing a population of vectors, each vector representing values of postprocessing hyperparameters for processing output score of the machine learning model; [maintains a population of solutions and parameters (threshold-offsets), Col. 5 ln. 15-39: "tuning the threshold-offsets with PSO" … "A population of potential solutions is maintained as the positions of a set of particles in a solution space"], [output score (responses r = 0, > 0), Col. 6 ln. 5-10: "A vote is cast by comparing the response to a predetermined value, such as zero.
As a non-limiting example, if the response is equal to or greater than zero, it is one class; if less than zero, the other class."]

repeating a plurality of times: adding one or more vectors to the population of vectors, the one or more vectors generated from existing vectors of the population; [new vectors from existing (changes position/velocity toward best), Col. 5 ln. 25-39: "keeping track of the current position x.sub.i, and the best solution (p.sub.i) it has observed so far. A global best position variable (p.sub.g) stores the best location among all particles. The velocity of each particle is then changed"]

determining a fitness metric value for each of the population of vectors, comprising: [outputs a ROC/AUC to measure fitness, Col. 5 ln. 51-58: "optimize objective functions that utilize the geometric or arithmetic mean of the areas under the ROC (AUC) of each class or that of the true-positive (TP) rate given a particular false-positive rate"]

executing the machine learning model for an input data to obtain an output score, the input data associated with a ground truth label; [executes a classifier model and outputs a score (responses r = 0, > 0), Col. 6 ln. 1-10: "an input image (or image patch) 102 is received. A pair-wise multi-class classifier 104 first generates M(M-1)/2 responses r 106" … "A vote is cast by comparing the response to a predetermined value, such as zero. As a non-limiting example, if the response is equal to or greater than zero, it is one class; if less than zero, the other class."]

processing the output score using postprocessing hyperparameters represented by the vector to obtain a final output result; and [applies threshold to response r to produce final output, Col. 6 ln. 1-27: "The set of threshold-offsets x is applied 110 to the responses r 106 to obtain r' 112. Finally, a majority vote 114 is performed based on the responses to determine which class the input features are predicted to belong. The output y 116 represents the class prediction for each input feature"]

selecting a vector from the population of vectors based on the fitness metric values; and [uses threshold metric to select the best position to optimize, Col. 5 ln. 16-50: "Each particle is assigned a velocity vector and the particles then cooperatively explore the solution space in search of an objective function optima.", "A global best position variable (p.sub.g) stores the best location among all particles. The velocity of each particle is then changed towards p.sub.i and p.sub.g in a dynamical way"]

Cheng does not specifically teach determining the fitness metric value based on a difference between the final output result and the ground truth label.

However, Madineni teaches determining the fitness metric value based on a difference between the final output result and the ground truth label; [validates output (against valid labels in a test image) for error percentage (fitness), ¶25-27: "the final output of the classification, which is compared against the associated labels." … "trained neural network 140 passes the validation step (e.g., satisfactory percentage of errors for a particular runtime scenario), the trained neural network 140 and post-processing logic 146 may then be implemented in an application (e.g., a neural network inference application) in a run time environment to classify objects detected in an image"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification process disclosed by Cheng by incorporating the determining of the fitness metric value based on a difference between the final output result and the ground truth label taught by Madineni, because both techniques address the same field of machine learning, and incorporating Madineni into Cheng provides classification solutions that are easily adaptable to multiple classification scenarios, including known hierarchies having two or more levels, and that provide performance or other advantages over conventional systems [Madineni ¶4].

Cheng and Madineni do not specifically teach removing one or more vectors from the population based on fitness metric values of the one or more vectors; and using values of postprocessing hyperparameters from the vector selected for postprocessing of output of the machine learning model.

However, Koch teaches removing one or more vectors from the population based on fitness metric values of the one or more vectors; [removes hyperparameter configurations, Col. 34 ln. 30-35, from the configuration list (population of vectors) iteratively, Col. 32 ln. 4-22: "Grid search methods may be used in a first iteration to define the first set of hyperparameter configurations that sample the search space. The initial configuration list 322 is also called a “population”."] and using values of postprocessing hyperparameters from the vector selected for postprocessing of output of the machine learning model; [final hyperparameters are used (selected) based on best, Col. 35 ln. 15-18: "a final hyperparameter configuration is selected based on the hyperparameter configuration that generated the best or lowest objective function value."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification process disclosed by Cheng and Madineni by incorporating the removing of one or more vectors from the population based on fitness metric values of the one or more vectors, and the using of values of postprocessing hyperparameters from the selected vector for postprocessing of output of the machine learning model, as disclosed by Koch, because all of these techniques address the same field of machine learning, and incorporating Koch into Cheng and Madineni better identifies ideal hyperparameters with less manual effort [Koch Col. 48-61].

As to dependent claim 9, the rejection of claim 8 is incorporated. Cheng, Madineni and Koch further teach wherein the machine learning model classifies an input data to one of a plurality of categories, wherein the output score of the machine learning model is mapped to a category based on a plurality of thresholds, each threshold representing a postprocessing hyperparameter. [Cheng: multi-class classifier with classes (model with multiple categories), and responses (scores) are compared to a value (threshold), Col. 1-2 ln. 53-5: "Each classification response in the set of classification responses r' is compared to a predetermined value to classify the set of input features as belonging to an object class."]

As to dependent claim 10, the rejection of claim 9 is incorporated. Cheng, Madineni and Koch further teach wherein each vector of the population of vectors represents a plurality of values, each value corresponding to a threshold from the plurality of thresholds. [Cheng: velocity and position vectors, values and sets of thresholds, Col. 5 ln. 16-50, ln. 53-5: "multiple thresholds need to be tuned in order to obtain the thresholds that achieve the desired operating points. The invention described herein solves this problem by tuning the threshold-offsets with PSO" … "A population of potential solutions is maintained as the positions of a set of particles in a solution space, where each dimension represents one solution component"]

As to dependent claim 11, the rejection of claim 8 is incorporated. Cheng, Madineni and Koch further teach wherein the fitness metric value is determined as an aggregate measure of difference between each final output result and corresponding ground truth label for a plurality of samples, each sample representing an input data with a ground truth label. [Cheng: AUC/TP/ROC are aggregates (sums/products) per class to measure performance, Col. 5 ln. 51-60: "A population of potential solutions is maintained as the positions of a set of particles in a solution space, where each dimension represents one solution component"]

As to dependent claim 12, the rejection of claim 8 is incorporated. Cheng, Madineni and Koch further teach wherein generating one or more vectors from existing vectors of the population comprises identifying a vector from the population of vectors and modifying one or more elements of the vector to obtain a new vector. [Cheng: optimize by changing the velocity vector, Col. 5 ln. 16-50: "Each particle is assigned a velocity vector and the particles then cooperatively explore the solution space in search of an objective function optima.", "A global best position variable (p.sub.g) stores the best location among all particles. The velocity of each particle is then changed towards p.sub.i and p.sub.g in a dynamical way"]

As to independent claim 15, Cheng teaches a computer system comprising: [computer, Col. 4 ln. 13-35] one or more computer processors; and [computer processing, Col. 4 ln. 13-35] a non-transitory computer readable storage medium storing instructions that when executed by one or more computer processors, cause the one or more computer processors to perform steps for tuning postprocessing hyperparameters of a machine learning model, the steps comprising: [computer with storage, instructions and processing, Col. 4 ln. 13-35], [tuning threshold parameters with PSO, Col. 4 ln. 46-50]

initializing a population of vectors, each vector representing values of postprocessing hyperparameters for processing output score of the machine learning model; [maintains a population of solutions and parameters (threshold-offsets), Col. 5 ln. 15-39: "tuning the threshold-offsets with PSO" … "A population of potential solutions is maintained as the positions of a set of particles in a solution space"], [output score (responses r = 0, > 0), Col. 6 ln. 5-10: "A vote is cast by comparing the response to a predetermined value, such as zero. As a non-limiting example, if the response is equal to or greater than zero, it is one class; if less than zero, the other class."]

repeating a plurality of times: adding one or more vectors to the population of vectors, the one or more vectors generated from existing vectors of the population; [new vectors from existing (changes position/velocity toward best), Col. 5 ln. 25-39: "keeping track of the current position x.sub.i, and the best solution (p.sub.i) it has observed so far. A global best position variable (p.sub.g) stores the best location among all particles. The velocity of each particle is then changed"]

determining a fitness metric value for each of the population of vectors, comprising: [outputs a ROC/AUC to measure fitness, Col. 5 ln. 51-58: "optimize objective functions that utilize the geometric or arithmetic mean of the areas under the ROC (AUC) of each class or that of the true-positive (TP) rate given a particular false-positive rate"]

executing the machine learning model for an input data to obtain an output score, the input data associated with a ground truth label; [executes a classifier model and outputs a score (responses r = 0, > 0), Col. 6 ln. 1-10: "an input image (or image patch) 102 is received. A pair-wise multi-class classifier 104 first generates M(M-1)/2 responses r 106" … "A vote is cast by comparing the response to a predetermined value, such as zero. As a non-limiting example, if the response is equal to or greater than zero, it is one class; if less than zero, the other class."]

processing the output score using postprocessing hyperparameters represented by the vector to obtain a final output result; and [applies threshold to response r to produce final output, Col. 6 ln. 1-27: "The set of threshold-offsets x is applied 110 to the responses r 106 to obtain r' 112. Finally, a majority vote 114 is performed based on the responses to determine which class the input features are predicted to belong. The output y 116 represents the class prediction for each input feature"]

selecting a vector from the population of vectors based on the fitness metric values; and [uses threshold metric to select the best position to optimize, Col. 5 ln. 16-50: "Each particle is assigned a velocity vector and the particles then cooperatively explore the solution space in search of an objective function optima.", "A global best position variable (p.sub.g) stores the best location among all particles. The velocity of each particle is then changed towards p.sub.i and p.sub.g in a dynamical way"]

Cheng does not specifically teach determining the fitness metric value based on a difference between the final output result and the ground truth label.

However, Madineni teaches determining the fitness metric value based on a difference between the final output result and the ground truth label; [validates output (against valid labels in a test image) for error percentage (fitness), ¶25-27: "the final output of the classification, which is compared against the associated labels." … "trained neural network 140 passes the validation step (e.g., satisfactory percentage of errors for a particular runtime scenario), the trained neural network 140 and post-processing logic 146 may then be implemented in an application (e.g., a neural network inference application) in a run time environment to classify objects detected in an image"]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification process disclosed by Cheng by incorporating the determining of the fitness metric value based on a difference between the final output result and the ground truth label taught by Madineni, because both techniques address the same field of machine learning, and incorporating Madineni into Cheng provides classification solutions that are easily adaptable to multiple classification scenarios, including known hierarchies having two or more levels, and that provide performance or other advantages over conventional systems [Madineni ¶4].

Cheng and Madineni do not specifically teach removing one or more vectors from the population based on fitness metric values of the one or more vectors; and using values of postprocessing hyperparameters from the vector selected for postprocessing of output of the machine learning model.

However, Koch teaches removing one or more vectors from the population based on fitness metric values of the one or more vectors; [removes hyperparameter configurations, Col. 34 ln. 30-35, from the configuration list (population of vectors) iteratively, Col. 32 ln. 4-22: "Grid search methods may be used in a first iteration to define the first set of hyperparameter configurations that sample the search space. The initial configuration list 322 is also called a “population”."] and using values of postprocessing hyperparameters from the vector selected for postprocessing of output of the machine learning model; [final hyperparameters are used (selected) based on best, Col. 35 ln. 15-18: "a final hyperparameter configuration is selected based on the hyperparameter configuration that generated the best or lowest objective function value."]

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification process disclosed by Cheng and Madineni by incorporating the removing of one or more vectors from the population based on fitness metric values of the one or more vectors, and the using of values of postprocessing hyperparameters from the selected vector for postprocessing of output of the machine learning model, as disclosed by Koch, because all of these techniques address the same field of machine learning, and incorporating Koch into Cheng and Madineni better identifies ideal hyperparameters with less manual effort [Koch Col. 48-61].

As to dependent claim 16, the rejection of claim 15 is incorporated. Cheng, Madineni and Koch further teach wherein the machine learning model classifies an input data to one of a plurality of categories, wherein the output score of the machine learning model is mapped to a category based on a plurality of thresholds, each threshold representing a postprocessing hyperparameter. [Cheng: multi-class classifier with classes (model with multiple categories), and responses (scores) are compared to a value (threshold), Col. 1-2 ln. 53-5: "Each classification response in the set of classification responses r' is compared to a predetermined value to classify the set of input features as belonging to an object class."]

As to dependent claim 17, the rejection of claim 16 is incorporated. Cheng, Madineni and Koch further teach wherein each vector of the population of vectors represents a plurality of values, each value corresponding to a threshold from the plurality of thresholds. [Cheng: velocity and position vectors, values and sets of thresholds, Col. 5 ln. 16-50, ln. 53-5: "multiple thresholds need to be tuned in order to obtain the thresholds that achieve the desired operating points. The invention described herein solves this problem by tuning the threshold-offsets with PSO" … "A population of potential solutions is maintained as the positions of a set of particles in a solution space, where each dimension represents one solution component"]

As to dependent claim 18, the rejection of claim 15 is incorporated. Cheng, Madineni and Koch further teach wherein the fitness metric value is determined as an aggregate measure of difference between each final output result and corresponding ground truth label for a plurality of samples, each sample representing an input data with a ground truth label. [Cheng: AUC/TP/ROC are aggregates (sums/products) per class to measure performance, Col. 5 ln. 51-60: "A population of potential solutions is maintained as the positions of a set of particles in a solution space, where each dimension represents one solution component"]

As to dependent claim 19, the rejection of claim 15 is incorporated. Cheng, Madineni and Koch further teach wherein generating one or more vectors from existing vectors of the population comprises identifying a vector from the population of vectors and modifying one or more elements of the vector to obtain a new vector. [Cheng: optimize by changing the velocity vector, Col. 5 ln. 16-50: "Each particle is assigned a velocity vector and the particles then cooperatively explore the solution space in search of an objective function optima.", "A global best position variable (p.sub.g) stores the best location among all particles. The velocity of each particle is then changed towards p.sub.i and p.sub.g in a dynamical way"]

Claims 6, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng in view of Madineni and Koch, as applied to claims 1, 8 and 15 above, and further in view of Hassan et al.
(US 6052082 A hereinafter Hassan).

As to dependent claim 6, the rejection of claim 1 is incorporated. Cheng, Madineni and Koch teach the method of claim 1, but do not specifically teach generating one or more vectors from existing vectors of the population comprising identifying a first vector and a second vector from the population of vectors and swapping an element of the first vector with the corresponding element of the second vector to obtain a modified first vector and a modified second vector. However, Hassan teaches wherein generating one or more vectors from existing vectors of the population comprises identifying a first vector and a second vector from the population of vectors and swapping an element of the first vector with the corresponding element of the second vector to obtain a modified first vector and a modified second vector. [Hassan: cross-over operation exchanges positions (swaps) in a population, Col. 8-9 ln. 57-3 "(8) Next, the FIG. 2 step to Recombine New Population 26 is performed to create new, improved solutions--a key feature being that of a "crossover" operation in which the G/EA seeks to construct better member-solutions by using a probability of crossover, P.sub.C, to combine the features of good existing member-solutions. In nature, an offspring population-member is rarely an exact clone of one parent, and instead inherits genes from both parents. The G/EA attempts to replicate this natural phenomenon by a mathematical crossover operation. The simplest form of crossover is one point crossover. Two-point crossover is where two crossover points are randomly selected and the substrings between (and including) those positions are exchanged."] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification processes disclosed by Cheng, Madineni and Koch by incorporating the generating of one or more vectors from existing vectors of the population by identifying a first vector and a second vector from the population of vectors and swapping an element of the first vector with the corresponding element of the second vector, as disclosed by Hassan, because all techniques address the same field of machine learning, and incorporating Hassan into Cheng, Madineni and Koch utilizes processing time more efficiently while providing sufficient or optimal solutions. [Hassan Col. 4 ln. 58-65]

As to dependent claim 13, the rejection of claim 8 is incorporated. Cheng, Madineni and Koch teach the method of claim 8, but do not specifically teach generating one or more vectors from existing vectors of the population comprising identifying a first vector and a second vector from the population of vectors and swapping an element of the first vector with the corresponding element of the second vector to obtain a modified first vector and a modified second vector. However, Hassan teaches wherein generating one or more vectors from existing vectors of the population comprises identifying a first vector and a second vector from the population of vectors and swapping an element of the first vector with the corresponding element of the second vector to obtain a modified first vector and a modified second vector. [Hassan: cross-over operation exchanges positions (swaps) in a population, Col. 8-9 ln. 57-3 "(8) Next, the FIG.
2 step to Recombine New Population 26 is performed to create new, improved solutions--a key feature being that of a "crossover" operation in which the G/EA seeks to construct better member-solutions by using a probability of crossover, P.sub.C, to combine the features of good existing member-solutions. In nature, an offspring population-member is rarely an exact clone of one parent, and instead inherits genes from both parents. The G/EA attempts to replicate this natural phenomenon by a mathematical crossover operation. The simplest form of crossover is one point crossover. Two-point crossover is where two crossover points are randomly selected and the substrings between (and including) those positions are exchanged."] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification processes disclosed by Cheng, Madineni and Koch by incorporating the generating of one or more vectors from existing vectors of the population by identifying a first vector and a second vector from the population of vectors and swapping an element of the first vector with the corresponding element of the second vector, as disclosed by Hassan, because all techniques address the same field of machine learning, and incorporating Hassan into Cheng, Madineni and Koch utilizes processing time more efficiently while providing sufficient or optimal solutions. [Hassan Col. 4 ln. 58-65]

As to dependent claim 20, the rejection of claim 15 is incorporated. Cheng, Madineni and Koch teach the method of claim 15, but do not specifically teach generating one or more vectors from existing vectors of the population comprising identifying a first vector and a second vector from the population of vectors and swapping an element of the first vector with the corresponding element of the second vector to obtain a modified first vector and a modified second vector. However, Hassan teaches wherein generating one or more vectors from existing vectors of the population comprises identifying a first vector and a second vector from the population of vectors and swapping an element of the first vector with the corresponding element of the second vector to obtain a modified first vector and a modified second vector. [Hassan: cross-over operation exchanges positions (swaps) in a population, Col. 8-9 ln. 57-3 "(8) Next, the FIG. 2 step to Recombine New Population 26 is performed to create new, improved solutions--a key feature being that of a "crossover" operation in which the G/EA seeks to construct better member-solutions by using a probability of crossover, P.sub.C, to combine the features of good existing member-solutions. In nature, an offspring population-member is rarely an exact clone of one parent, and instead inherits genes from both parents. The G/EA attempts to replicate this natural phenomenon by a mathematical crossover operation. The simplest form of crossover is one point crossover.
Two-point crossover is where two crossover points are randomly selected and the substrings between (and including) those positions are exchanged."] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification processes disclosed by Cheng, Madineni and Koch by incorporating the generating of one or more vectors from existing vectors of the population by identifying a first vector and a second vector from the population of vectors and swapping an element of the first vector with the corresponding element of the second vector to obtain a modified first vector and a modified second vector, as disclosed by Hassan, because all techniques address the same field of machine learning, and incorporating Hassan into Cheng, Madineni and Koch utilizes processing time more efficiently while providing sufficient or optimal solutions. [Hassan Col. 4 ln. 58-65]

Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Cheng in view of Madineni and Koch, as applied in claims 1 and 8 above, and further in view of Patel et al. (US 11631233 B2 hereinafter Patel).

As to dependent claim 7, the rejection of claim 1 is incorporated. Cheng, Madineni and Koch teach the method of claim 1, but do not specifically teach wherein a postprocessing hyperparameter represents the size of a contour for filtering out noise from the output of the machine learning model. However, Patel teaches wherein a postprocessing hyperparameter represents the size of a contour for filtering out noise from the output of the machine learning model. [Patel: contour size threshold and background noise, Col. 10 ln. 41-51 "Identify (208g), in each of the plurality of binary ROI images, one or more contours of interest from among the plurality of contours, wherein the one or more contours of interest are i) closed contours and ii) have size above a predefined contour size (FIG. 3I). Thus, the resultant image post masking may have open contours, closed contours, and closed but very small contours. It is observed from heuristic approaches that background noise is present in closed contours, however too small closed contours not necessarily include the background noise"] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification processes disclosed by Cheng, Madineni and Koch by incorporating a postprocessing hyperparameter that represents the size of a contour for filtering out noise from the output of the machine learning model, as disclosed by Patel, because all techniques address the same field of machine learning, and incorporating Patel into Cheng, Madineni and Koch provides higher accuracy, as the background-noise-removal technique improves model outputs. [Patel Col. 1 ln. 46-60]

As to dependent claim 14, the rejection of claim 8 is incorporated. Cheng, Madineni and Koch teach the method of claim 8, but do not specifically teach wherein a postprocessing hyperparameter represents the size of a contour for filtering out noise from the output of the machine learning model. However, Patel teaches wherein a postprocessing hyperparameter represents the size of a contour for filtering out noise from the output of the machine learning model. [Patel: contour size threshold and background noise, Col. 10 ln. 41-51 "Identify (208g), in each of the plurality of binary ROI images, one or more contours of interest from among the plurality of contours, wherein the one or more contours of interest are i) closed contours and ii) have size above a predefined contour size (FIG. 3I). Thus, the resultant image post masking may have open contours, closed contours, and closed but very small contours.
It is observed from heuristic approaches that background noise is present in closed contours, however too small closed contours not necessarily include the background noise"] Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the classification processes disclosed by Cheng, Madineni and Koch by incorporating a postprocessing hyperparameter that represents the size of a contour for filtering out noise from the output of the machine learning model, as disclosed by Patel, because all techniques address the same field of machine learning, and incorporating Patel into Cheng, Madineni and Koch provides higher accuracy, as the background-noise-removal technique improves model outputs. [Patel Col. 1 ln. 46-60]

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Giacobello et al. (US 9633671 B2) teaches optimization with automatic tuning (see Col. 8 ln. 61-65).

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Beau Spratt, whose telephone number is 571 272 9919. The examiner can normally be reached 8:30am to 5:00pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch can be reached at 571 272 7212. The fax phone number for the organization where this application or proceeding is assigned is 571 483 7388. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866 217 9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800 786 9199 (IN USA OR CANADA) or 571 272 1000. /BEAU D SPRATT/ Primary Examiner, Art Unit 2143
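For orientation on the cited art: the Cheng/Koch passages describe population-based hyperparameter search, in which a set of candidate threshold vectors is scored by a fitness metric against ground-truth labels and particle swarm optimization moves each candidate toward its personal best and the swarm's global best. A purely illustrative sketch of that loop follows; it is not the method of any cited patent, and the function name, inertia/acceleration coefficients, and accuracy-based fitness are all hypothetical choices.

```python
import random

def pso_tune_thresholds(scores, labels, n_classes, n_particles=20, iters=50):
    """Toy particle-swarm search for per-class decision thresholds.

    scores: list of per-class score vectors; labels: ground-truth class ids.
    Fitness (illustrative) is accuracy of argmax over (score - threshold).
    """
    def fitness(thresholds):
        correct = 0
        for s, y in zip(scores, labels):
            margins = [s[c] - thresholds[c] for c in range(n_classes)]
            if max(range(n_classes), key=lambda c: margins[c]) == y:
                correct += 1
        return correct / len(labels)

    # Initialise particle positions (threshold vectors) and velocities.
    pos = [[random.uniform(0, 1) for _ in range(n_classes)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_classes for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_classes):
                # Pull each particle toward its personal best (p_i)
                # and the global best (p_g), as in standard PSO.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit
```

Retaining personal and global bests plays the same role as keeping the best-performing vectors in the claimed population while discarding weaker ones.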
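The Hassan crossover cited against claims 6, 13 and 20 exchanges corresponding elements between two parent vectors. A minimal sketch of that operation, assuming vectors are plain Python lists (the function name and the single-position swap are illustrative assumptions, not Hassan's implementation):

```python
import random

def crossover_swap(population, rng=random):
    """One-position element swap between two parent vectors, a minimal
    crossover in the spirit of a genetic/evolutionary algorithm.

    Picks two distinct vectors and one index, and exchanges the elements
    at that index, yielding a modified first and second vector.
    """
    i, j = rng.sample(range(len(population)), 2)
    a, b = population[i][:], population[j][:]   # copy parents, leave originals intact
    k = rng.randrange(len(a))                   # crossover position
    a[k], b[k] = b[k], a[k]                     # swap corresponding elements
    return a, b
```

Because the swap happens at the same index in both copies, the two children together contain exactly the elements of their two parents.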
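The Patel passage cited against claims 7 and 14 keeps only contours whose size exceeds a predefined threshold, with that threshold acting as a postprocessing hyperparameter. A rough illustration, assuming contours are lists of (x, y) points and using the shoelace polygon area as the size measure (both modelling choices are assumptions here, not Patel's):

```python
def filter_contours_by_size(contours, min_size):
    """Drop contours smaller than a size threshold; `min_size` plays the
    role of the postprocessing hyperparameter in this sketch.

    A contour is modelled as a list of (x, y) points; its "size" is
    approximated by the shoelace area of the polygon it encloses.
    """
    def shoelace_area(pts):
        total = 0.0
        n = len(pts)
        for i in range(n):
            x1, y1 = pts[i]
            x2, y2 = pts[(i + 1) % n]   # wrap around to close the polygon
            total += x1 * y2 - x2 * y1
        return abs(total) / 2.0

    return [c for c in contours if shoelace_area(c) >= min_size]
```

A hyperparameter search of the kind claimed would then treat `min_size` as one element of each candidate vector and score the filtered output against ground truth.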

Prosecution Timeline

Jun 06, 2023
Application Filed
Jan 30, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12595715
Cementing Lab Data Validation based On Machine Learning
2y 5m to grant Granted Apr 07, 2026
Patent 12596955
REWARD FEEDBACK FOR LEARNING CONTROL POLICIES USING NATURAL LANGUAGE AND VISION DATA
2y 5m to grant Granted Apr 07, 2026
Patent 12596956
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD FOR PRESENTING REACTION-ADAPTIVE EXPLANATION OF AUTOMATIC OPERATIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12561464
CATALYST 4 CONNECTIONS
2y 5m to grant Granted Feb 24, 2026
Patent 12561606
TECHNIQUES FOR POLL INTENTION DETECTION AND POLL CREATION
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.6%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 432 resolved cases by this examiner. Grant probability derived from career allow rate.
