Prosecution Insights
Last updated: April 19, 2026
Application No. 18/194,564

KNOWLEDGE BASED ARTIFICIAL INTELLIGENCE ARCHITECTURE FOR INDUSTRIAL SYSTEMS

Non-Final OA (§§101, 102, 103)
Filed
Mar 31, 2023
Examiner
TRAN, TAN H
Art Unit
2141
Tech Center
2100 — Computer Architecture & Software
Assignee
Aitomatic, Inc.
OA Round
1 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 1-2
To Grant: 3y 6m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 60% (grants 60% of resolved cases; 184 granted / 307 resolved; +4.9% vs TC avg)
Interview Lift: +31.8% on resolved cases with interview (strong)
Avg Prosecution: 3y 6m typical timeline; 60 applications currently pending
Total Applications: 367 across all art units (career history)

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 307 resolved cases.

Office Action

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. This action is in response to the original filing on 03/31/2023. Claims 1-20 are pending and have been considered below.

Information Disclosure Statement

3. The information disclosure statement (IDS) submitted on 04/17/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

4. Claims 1, 12, and 20 are objected to because of the following informalities: Claims 1, 12, and 20 recite "wherein each rule makes s prediction based on one or more characteristics of the input data" where "wherein each rule makes a prediction based on one or more characteristics of the input data" was apparently intended.

Claim Rejections - 35 USC § 101

5. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: the claims are directed to the statutory categories of a method, system, and medium.

Step 2A, Prong 1: Claims 1, 12, and 20 recite, in part, receiving a request for making a prediction based on an input data; providing the input data to a knowledge model, wherein the knowledge model is a rule-based model, wherein each rule makes a prediction based on one or more characteristics of the input data; executing the knowledge model to generate a first output representing a first prediction for the input data; providing the final output as the prediction based on the input data.
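The hybrid architecture recited in the independent claims can be illustrated with a short sketch. This is an editorial illustration only, not code from the application as filed; every name here (the rule lambdas, `knowledge_model`, `ml_model`, `ensemble`, the input fields) is hypothetical.

```python
# Hypothetical sketch of the claimed pipeline: a rule-based knowledge model
# and an ML model each produce a prediction for the same input, and an
# ensemble combines the two outputs into a final output.

def knowledge_model(x, rules):
    """Each rule makes a prediction from characteristics of the input;
    rules that do not apply abstain by returning None."""
    votes = [rule(x) for rule in rules if rule(x) is not None]
    return sum(votes) / len(votes)  # first output (rule-based prediction)

def ml_model(x):
    """Stand-in for a trained machine learning model."""
    return 0.7 * x["sensor_reading"]  # second output (learned prediction)

def ensemble(first, second, w_knowledge=0.5, w_ml=0.5):
    """Combine the knowledge-model and ML-model outputs."""
    return w_knowledge * first + w_ml * second

rules = [
    lambda x: 1.0 if x["temperature"] > 90 else 0.0,
    lambda x: 1.0 if x["vibration"] > 0.5 else None,  # abstains below 0.5
]
x = {"temperature": 95, "vibration": 0.2, "sensor_reading": 0.6}
first = knowledge_model(x, rules)   # only the temperature rule votes: 1.0
second = ml_model(x)                # 0.7 * 0.6 = 0.42
final = ensemble(first, second)
```

The combination step here is a plain fixed-weight average chosen for brevity; the claims leave the ensemble's combination logic open (the dependent claims add accuracy-based variants).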
The limitations of receiving input, applying rules, combining outputs, and selecting a result are activities that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting "one or more computer processors" in the context of the claims, the limitations encompass a person receiving input, applying rules, combining outputs, and selecting a result by hand using pen and paper. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas.

In addition, providing the input data to a machine learning based model, the machine learning based model trained to make the prediction based on a particular input data; executing the machine learning based model to generate a second output representing a second prediction for the input data; providing the first output and the second output to an ensemble model configured to combine results of the knowledge model and the machine learning based model; executing the ensemble model to determine a final output based on a combination of the first output and the second output are directed to the "Mathematical Concepts" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Step 2A, Prong 2: this judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of "one or more computer processors". The computer components in the claim are recited at a high level of generality (i.e., as a generic processor performing a generic computer function) such that they amount to no more than mere instructions to apply the exception using a generic computer component.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Please see MPEP §2106.04(a)(2), Section III.C. The claims also recite the additional element of "machine learning based model". These limitations are recited at a high level of generality and provide no details on how this process is performed. The additional elements in the claims are merely used as a tool to implement the abstract idea.

Step 2B: the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, either alone or in combination. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of "one or more computer processors" and "machine learning based model" to perform the steps of the claims amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Please see MPEP §2106.05(b) and (g). The claim is not patent eligible.

Claim 2 provides further limitations of "wherein the input data represents operational data from an industrial process, and a rule represents domain knowledge associated with the industrial process". However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (adding insignificant extra-solution activity to the judicial exception).
Claims 3 and 13 provide further limitations "wherein determining the final output by the ensemble model comprises: receiving a measure of accuracy of the second output generated by the machine learning based model; and responsive to the measure of accuracy of the second output of the machine learning based model indicating an accuracy below a threshold value, using the second output as the final output" to the abstract idea (Mental processes and/or Mathematical concepts) as rejected above. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claims 4 and 14 provide further limitations "wherein determining the final output by the ensemble model comprises: receiving a first measure of accuracy of the first output generated by the knowledge model; receiving a second measure of accuracy of the second output generated by the machine learning based model; and determining the final output based on the combination of the first output and the second output based on at least one of the first measure of accuracy or the second measure of accuracy" to the abstract idea (Mental processes and/or Mathematical concepts) as rejected above. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claims 5 and 15 provide further limitations "wherein the final output is based on the first output if a comparison of the first measure of accuracy and the second measure of accuracy indicates that the first output of the knowledge model has higher accuracy compared to the second output of the machine learning based model" to the abstract idea (Mental processes and/or Mathematical concepts) as rejected above. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea.
Claims 6 and 16 provide further limitations "wherein the final output is based on the second output if a comparison of the first measure of accuracy and the second measure of accuracy indicates that the first output of the knowledge model has lower accuracy compared to the second output of the machine learning based model" to the abstract idea (Mental processes and/or Mathematical concepts) as rejected above. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claims 7 and 17 provide further limitations "wherein the final output is a weighted aggregate of the first output and the second output, wherein a weight of each of the first output and the second output is determined based on a measure of accuracy of corresponding output" to the abstract idea (Mental processes and/or Mathematical concepts) as rejected above. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claims 8 and 18 provide further limitations "wherein determining the final output by the ensemble model comprises: responsive to determining the final output based on the first output of the knowledge model, using the final output for training of the machine learning based model" to the abstract idea (Mental processes and/or Mathematical concepts) as rejected above. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea.

Claims 9 and 19 provide further limitations "generating synthetic data as additional training data for the machine learning based model using the final output" to the abstract idea (Mental processes and/or Mathematical concepts) as rejected above. However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea.
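The weighted-aggregate limitation of claims 7 and 17 (each output weighted by its measure of accuracy) reduces to a short formula. The normalization below is an editorial assumption for illustration; the claims do not specify how weights are derived from the accuracy measures.

```python
# Illustrative weighted aggregate per claims 7/17: each output's weight is
# derived from its measure of accuracy, normalized so the weights sum to 1.
# The normalization scheme is assumed, not recited in the claims.

def weighted_aggregate(first_output, second_output, acc_first, acc_second):
    total = acc_first + acc_second
    w_first, w_second = acc_first / total, acc_second / total
    return w_first * first_output + w_second * second_output

# Knowledge model is more accurate (0.9 vs 0.6), so its output dominates:
agg = weighted_aggregate(0.8, 0.4, acc_first=0.9, acc_second=0.6)
```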
Claim 10 provides further limitations of "wherein the input data is sensor data collected by a robot, wherein the final output is used for guiding an action performed by the robot". However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (adding insignificant extra-solution activity to the judicial exception).

Claim 11 provides further limitations of "wherein the input data represents data collected by an industrial process, wherein the final output is used for determining an action performed by the industrial process". However, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (adding insignificant extra-solution activity to the judicial exception).

Claim Rejections - 35 USC § 102

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

7. Claims 1, 7, 12, 17, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Cleere et al. (U.S. Patent Application Pub. No. US 20210389978 A1).

Claim 1: Cleere teaches a computer-implemented method comprising: receiving a request (i.e. selecting a low priority processing object in Operation 510; para. [0090]) for making a prediction based on an input data (i.e.
The process flow 500 begins with the low-priority prioritization module selecting a low priority processing object in Operation 510. As previously noted, the goal of prioritizing the low priority processing objects is to produce a prioritized listing of the objects so that the objects with a higher likelihood of being a member of the target group will be investigated before expiry of the time allocate for the investigation process; para. [0090]), the system receiving/identifying an input object for predictive scoring; providing the input data to a knowledge model, wherein the knowledge model is a rule-based model (i.e. fig. 5, probabilistic prioritization rules; para. [0091]), wherein each rule makes a prediction based on one or more characteristics of the input data (i.e. fig. 5, the low-priority prioritization module is configured in various embodiments to apply one or more probabilistic prioritization rules to the processing object in Operation 515. As previously mentioned, these rules may have a certain degree of reliability in identifying which objects in the inventory are likely to be members of a target group. Here, in particular embodiments, the probabilistic prioritization rules may have a past investigatory success measure over a relevance threshold; para. [0091-0093]); providing the input data to a machine learning based model (i.e. applies one or more machine learning models to the processing object; para. [0094]), the machine learning based model trained to make the prediction based on a particular input data (i.e. The low-priority prioritization module also applies one or more machine learning models to the processing object in various embodiments to identify a probability of the object being a member of the target group in Operation 520. The one or more machine learning models may be designed to assign a probability to the object based on the likelihood of the processing object being a member of the target group. 
For example, in particular embodiments, the models may assign a machine-learning-based probability score between zero and one with respect to the object being a member of the target group. The closer the probability is to one, the more likely the object is a member of the target group. Examples of machine learning models that can be used to generate machine-learning-based probability scores include neural networks, support vector machines (SVMs), Bayesian networks, unsupervised machine learning models such as clustering models, and/or the like; para. [0094]), the ML model receives the same processing object and generates a probability; executing the knowledge model to generate a first output representing a first prediction for the input data (i.e. a probabilistic weight value is assigned to each probabilistic prioritization rule. This weight value may be based on the past investigatory success measure for the rule. Here, in particular embodiments, the low-priority prioritization module may be configured to combine (e.g., add or multiple) the probabilistic weight values of the probabilistic prioritization rules that are satisfied by the processing object to determine a rule-based priority score for the object; para. [0093]); executing the machine learning based model to generate a second output representing a second prediction for the input data (i.e. the models may assign a machine-learning-based probability score between zero and one with respect to the object being a member of the target group. The closer the probability is to one, the more likely the object is a member of the target group; para. [0094]), the ML model receives the same processing object and generates a probability; providing the first output and the second output to an ensemble model configured to combine results of the knowledge model and the machine learning based model (i.e. fig. 
5, Once the low-priority prioritization module has applied the probabilistic prioritization rules and determined a rule-based priority score for the object and applied the one or more machine learning models to generate a machine-learning-based probability score indicating the likelihood of the processing object being a member of the target group, the module calculates a hybrid prioritization score in Operation 525; para. [0095, 0096]), the system feeds both output into a hybrid scoring formula; executing the ensemble model to determine a final output based on a combination of the first output and the second output (i.e. the low-priority prioritization module may calculate the hybrid prioritization score by combining the rule-based priority score and the machine-learning-based probability score; para. [0096]); and providing the final output as the prediction based on the input data (i.e. Once all of the low priority processing objects have been processed, the low-priority prioritization module builds a prioritized list based on the hybrid prioritization scores for the objects in Operation 535. Accordingly, the prioritized list provides a listing of all of the low priority processing objects in order of importance relative to each other. The importance being a combination of the likelihood of the processing object being a member of the target group and the valuation magnitude of the object. Therefore, in the example, the importance is a combination of the likelihood the low priority insurance claim is subject to an overpayment and the dollar value of the claim; para. [0053, 0098]), the final hybrid score is used to build the prioritized list and display it to the user. Claim 7: Cleere teaches the computer-implemented method of claim 1. Cleere further teaches wherein the final output is a weighted aggregate of the first output and the second output (i.e. 
a probabilistic weight value is also assigned to each of the probabilistic prioritization rules 135 that is based on the rule's past investigatory success measure. Accordingly, a rule-based priority score may be determined for each claim found in the low priority list 130 based on the probabilistic weight values of the probability prioritization rules 135 that are applicable to the claim. For example, the probabilistic weight values for the applicable rules 135 may be combined (e.g., added or multiplied together) to determine the rule-based priority score for a claim. In addition, one or more machine learning models 140 are used to produce a score for each claim found in the low priority list 130 identifying a probability of the claim being subject to an overpayment. As described in further detail herein, the one or more machine learning models 140 may be any number of different types of predictive models; para. [0051]), wherein a weight of each of the first output and the second output is determined based on a measure of accuracy of corresponding output (i.e. a hybrid prioritization score 145 is determined for each low priority claim by combining the claim's rule-based priority score and machine-learning-based probability score. Depending on the embodiment, a number of different formulas may be used in combining the two scores. For instance, in particular embodiments, the hybrid prioritization score 145 for each claim is determined by multiplying the rule-based priority score, the machine-learning-based probability score, and a valuation magnitude for the claim. The value magnitude provides a measure of the value of the claim. For example, the value magnitude may be the dollar amount of the payment on the claim; para. [0052]).

Claim 12 is similar in scope to Claim 1 and is rejected under a similar rationale. In addition, Cleere further teaches a non-transitory computer readable storage medium storing instructions (i.e.
a computer program product may include a non-transitory computer-readable storage medium storing applications, programs, program modules; para. [0061]) that when executed by one or more computer processors, cause the one or more computer processors to perform steps (i.e. processors; para. [0074]).

Claim 17 is similar in scope to Claim 7 and is rejected under a similar rationale. Claim 20 is similar in scope to Claim 12 and is rejected under a similar rationale.

Claim Rejections – 35 USC § 103

8. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

9. Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Cleere in view of Hackstein et al. (U.S. Patent Application Pub. No. US 20120304008 A1).

Claim 2: Cleere teaches the computer-implemented method of claim 1. Cleere does not explicitly teach wherein the input data represents operational data from an industrial process, and a rule represents domain knowledge associated with the industrial process. However, Hackstein teaches wherein the input data represents operational data (i.e. The computer 110 receives data from a plurality of sensors 165 that may be connected to the computer through one or more data sources 160 such as data loggers. The sensors 165 are arranged to simultaneously acquire data to create a vector representing the condition of a machine at a given point in time; para.
[0026]) from an industrial process (i.e. a method for classifying a measured feature vector as representing one of a normal machine condition and a fault machine condition, the measured feature vector including a set of feature states relating to a machine at a particular time; para. [0012]), and a rule represents domain knowledge associated with the industrial process (i.e. set of manually defined rules is received, each rule establishing a set of feature state ranges indicating one of the normal machine condition and the fault machine condition; para. [0012]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Cleere to include the feature of Hackstein. One would have been motivated to make this modification because it provides interpretability and accuracy in industrial process prediction.

10. Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Cleere in view of Christiansen et al. (U.S. Patent Application Pub. No. US 20210125104 A1).

Claim 3: Cleere teaches the computer-implemented method of claim 1. Cleere does not explicitly teach receiving a measure of accuracy of the second output generated by the machine learning based model; and responsive to the measure of accuracy of the second output of the machine learning based model indicating an accuracy below a threshold value, using the second output as the final output. However, Christiansen teaches wherein determining the final output by the ensemble model comprises: receiving a measure of accuracy of the second output generated by the machine learning based model (i.e. There are many methods for calculating a confidence score in step 220.
In general, methods for calculating the confidence score are either based on testing the robustness of the machine learning model 120 around the sample data, or providing an alternative method to calculate the outcome which does not rely on a deep machine learning algorithm. Thus, a wide range of machine learning algorithms and/or mathematical operations may be used. Example machine learning algorithms include a random decision forest, a regression algorithm, and the like. An example mathematical operations includes a distribution based on the Softmax operator; para. [0049]); and responsive to the measure of accuracy of the second output of the machine learning based model indicating an accuracy below a threshold value, using the second output as the final output (i.e. Step 240 is to determine whether the confidence score is below a predetermined confidence threshold. By “below a predetermined confidence threshold” it does not mean that the confidence score has to take a numerical value which is lower than the predetermined confidence threshold value (although it may be). Instead, the concept of “below the predetermined confidence level” means that the confidence score indicates that confidence is less than the predetermine confidence threshold. For example, if the confidence score takes a range between 0 and 1, where 0 is full confidence and 1 is no confidence, then “lower than a predetermined confidence threshold” means a numerical value above the predetermined confidence threshold, which itself takes a lower value (e.g. 0.2). Conversely, if the confidence score takes a range between 0 and 1, where 0 is no confidence and 1 is full confidence, then “lower than a predetermined confidence threshold” means a numerical value lower the predetermined confidence threshold, which itself takes a higher value (e.g. 0.8); para. [0056, 0057]). 
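The point of the quoted Christiansen passage is that "below a confidence threshold" is about the confidence the score indicates, not its raw numerical value, because the scale may run in either direction. A small sketch makes the two orientations concrete; the function name and the normalization step are editorial, not from the reference.

```python
# Sketch of the scale-direction point in the quoted passage: whether a
# confidence score is "below" the threshold depends on whether the scale
# runs 0 = no confidence (so lower numbers mean less confidence) or
# 0 = full confidence (so higher numbers mean less confidence).

def below_threshold(score, threshold, zero_means_full_confidence=False):
    if zero_means_full_confidence:
        # Flip onto the 1 = full confidence orientation first.
        score, threshold = 1 - score, 1 - threshold
    return score < threshold  # now a plain numerical comparison works

# Scale where 1 = full confidence, threshold 0.8: 0.75 is low confidence.
low_a = below_threshold(0.75, 0.8)
# Scale where 0 = full confidence, threshold 0.2: 0.25 is low confidence,
# even though 0.25 is numerically above the threshold.
low_b = below_threshold(0.25, 0.2, zero_means_full_confidence=True)
```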
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Cleere to include the feature of Christiansen. One would have been motivated to make this modification because it reduces errors when ML confidence is low.

Claim 13 is similar in scope to Claim 3 and is rejected under a similar rationale.

11. Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Cleere in view of Mitelman et al. (U.S. Patent Application Pub. No. US 20200382527 A1).

Claim 4: Cleere teaches the computer-implemented method of claim 1. Cleere further teaches wherein determining the final output by the ensemble model comprises: receiving a first measure of accuracy of the first output generated by the knowledge model (i.e. The term "past investigatory success measure" may refer to a data object that describes a measure of an accuracy of a particular rule against historical cases with known outcomes. For instance, in particular embodiments, a past investigatory success measure may be a measure of how accurately a corresponding rule for the measure identifies processing objects are members of a target group; para. [0030]). Cleere does not explicitly teach receiving a second measure of accuracy of the second output generated by the machine learning based model; and determining the final output based on the combination of the first output and the second output based on at least one of the first measure of accuracy or the second measure of accuracy. However, Mitelman teaches receiving a second measure of accuracy of the second output generated by the machine learning based model (i.e. For each classification, the supervised machine learning engine 122 may also generate an associated confidence score for the classification. The confidence score may be a number, or level, which represents how "confident" the model is in its identification.
Some machine learning algorithms generate confidence scores as a part of the algorithms, and for other algorithms, the confidence may be calculated by another component of the network device profiling engine 120; para. [0036]); and determining the final output based on the combination of the first output and the second output based on at least one of the first measure of accuracy or the second measure of accuracy (i.e. FIG. 3 illustrates a progressive machine learning process 300 that may be used by the network device profiling engine 120 in accordance with example implementations. Referring to FIG. 3 in conjunction with FIG. 1, the supervised machine learning engine 122 communicates feature set data 314 that have relatively low associated confidence scores (i.e., the data 314 represents feature sets whose classifications by the supervised machine learning engine 122 had associated confidence scores below a particular confidence score threshold) to the active machine learning engine 124; para. [0043]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Cleere to include the feature of Mitelman. One would have been motivated to make this modification because it improves prediction reliability.

Claim 14 is similar in scope to Claim 4 and is rejected under a similar rationale.

12. Claims 5-6 and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Cleere in view of Mitelman, and further in view of Bellegarda (U.S. Patent Application Pub. No. US 20120053946 A1).

Claim 5: Cleere and Mitelman teach the computer-implemented method of claim 4. Cleere does not explicitly teach wherein the final output is based on the first output if a comparison of the first measure of accuracy and the second measure of accuracy indicates that the first output of the knowledge model has higher accuracy compared to the second output of the machine learning based model.
However, Bellegarda teaches wherein the final output is based on the first output if a comparison of the first measure of accuracy and the second measure of accuracy indicates that the first output of the knowledge model has higher accuracy compared to the second output of the machine learning based model (i.e. comparing with the statistical assessment, any rule with a confidence score that is below a predetermined threshold, such as, for example, 50%, may be considered as unreliable; otherwise, the rule may be considered as reliable. In one embodiment, a tag generated by rule-based tagger 107 may be selected as the final POS tag if its corresponding confidence score is greater than a predetermined threshold; otherwise, a tag generated by statistical tagger 107 may be selected as the final POS tag; para. [0034]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Cleere and Mitelman to include the feature of Bellegarda. One would have been motivated to make this modification because it reduces errors when one model underperforms.

Claim 6: Cleere and Mitelman teach the computer-implemented method of claim 4. Cleere does not explicitly teach wherein the final output is based on the second output if a comparison of the first measure of accuracy and the second measure of accuracy indicates that the first output of the knowledge model has lower accuracy compared to the second output of the machine learning based model. However, Bellegarda teaches wherein the final output is based on the second output if a comparison of the first measure of accuracy and the second measure of accuracy indicates that the first output of the knowledge model has lower accuracy compared to the second output of the machine learning based model (i.e.
comparing with the statistical assessment, any rule with a confidence score that is below a predetermined threshold, such as, for example, 50%, may be considered as unreliable; otherwise, the rule may be considered as reliable. In one embodiment, a tag generated by rule-based tagger 107 may be selected as the final POS tag if its corresponding confidence score is greater than a predetermined threshold; otherwise, a tag generated by statistical tagger 107 may be selected as the final POS tag; para. [0034]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Cleere and Mitelman to include the feature of Bellegarda. One would have been motivated to make this modification because it reduces errors when one model underperforms.

Claims 15-16 are similar in scope to Claims 5-6 and are rejected under a similar rationale.

13. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Cleere in view of Wiener et al. (U.S. Patent Application Pub. No. US 20180314984 A1).

Claim 8: Cleere teaches the computer-implemented method of claim 1. Cleere further teaches wherein determining the final output by the ensemble model comprises: responsive to determining the final output based on the first output of the knowledge model (i.e. the claims placed in the inventory 115 are initially investigated via an automated process using a set of deterministic prioritization rules 120. Depending on the embodiments, these deterministic prioritization rules 120 may be identified using different criteria. However, in general, each of the deterministic prioritization rules 120 is ideally sufficient at identifying an insurance claim that is likely subject to an overpayment.
For example, a deterministic prioritization rule 120 may be defined that if the claim is for a particular medical procedure to be performed by a particular healthcare provider, then the rule 120 applies to the claim. The reason for this particular deterministic prioritization rule may be that the healthcare provider has a past history of submitting claims for overpayment on the particular medical procedure; para. [0046]). Cleere does not explicitly teach using the final output for training of the machine learning based model. However, Wiener teaches responsive to determining the final output based on the first output of the knowledge model, using the final output for training of the machine learning based model (i.e. It may be beneficial to retrain classifiers on specific application security data. In accordance with example implementations that are described here, one way (called “assisted classification” herein) to retrain classifiers is to designate a subset (a representative sample, for example) of all of the issues that are identified by a given set of application scan data for human auditing. One or multiple human auditor(s) may then evaluate the selected subset of issues for purposes of classifying whether the issues are in-scope or out-of-scope. The classifiers may then be retrained on the human audited security scan data associated with the designated subset of issues, and the retrained classifiers may be used to classify the remaining unaudited issues as well as possibly classify other issues in a data store that match classifiers' classification policies; para. [0018]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Cleere to include the feature of Wiener. One would have been motivated to make this modification because it improves ML model performance over time. Claim 18 is similar in scope to Claim 8 and is rejected under a similar rationale. 
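The assisted-classification idea cited from Wiener above — using confident rule-based (knowledge model) outputs as labels for retraining the machine learning based model — can be sketched as follows. This is a minimal illustration under stated assumptions, not the reference's implementation: `build_training_examples`, `toy_knowledge_model`, the 0.9 confidence cutoff, and the dollar-threshold rule (loosely in the spirit of Cleere's deterministic prioritization rules) are all hypothetical.

```python
def build_training_examples(inputs, knowledge_model, min_confidence=0.9):
    """Keep (input, label) pairs where the rule-based knowledge model is
    confident enough for its output to serve as a training label for the
    machine learning based model."""
    examples = []
    for x in inputs:
        label, confidence = knowledge_model(x)
        if confidence >= min_confidence:
            examples.append((x, label))
    return examples

# Hypothetical deterministic rule: claims over a dollar threshold are
# flagged for overpayment review with high confidence.
def toy_knowledge_model(claim_amount):
    if claim_amount > 10_000:
        return ("review", 0.95)
    return ("pass", 0.60)

training_set = build_training_examples([500, 12_000, 25_000, 3_000],
                                       toy_knowledge_model)
# Only the confident rule firings become training labels:
# [(12000, 'review'), (25000, 'review')]
```

The low-confidence "pass" outputs are deliberately excluded, so the machine learning based model is only retrained on outputs the knowledge model stands behind.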
14. Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Cleere in view of Wiener, and further in view of Barbosa et al. (U.S. Patent No. US 11341367 B1). Claim 9: Cleere and Wiener teach the computer-implemented method of claim 8. Cleere does not explicitly teach generating synthetic data as additional training data for the machine learning based model using the final output. However, Barbosa teaches generating synthetic data as additional training data for the machine learning based model using the final output (i.e. a synthetic training data generator generates high-quality synthetic training data by merging images from a set of background images with images from a set of user-provided or user-specified images (depicting objects of interest). For example, the synthetic training data generator may overlay one or multiple ones of the object-depicting images—which may have been modified, such as via one or more of resizing, rotating, filtering (e.g., via “softness” or “blur” filters, color filters, etc.)—over ones of the background images (which similarly may have been modified, e.g., via applying filters) to create the new synthetic image set. By picking different combinations of images, placing object images in different locations and/or with different transformations, etc., the synthetic training data generator may thus generate many variations of images including the desired objects, which is tremendously helpful for training machine learning models; col. 2, lines 53-67). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Cleere and Wiener to include the feature of Barbosa. One would have been motivated to make this modification because it yields a more robust ML model that adapts better to unseen data. Claim 19 is similar in scope to Claim 9 and is rejected under a similar rationale. 15. Claim 10 is rejected under 35 U.S.C. 
103 as being unpatentable over Cleere in view of Huang et al. (U.S. Patent Application Pub. No. US 20200306959 A1). Claim 10: Cleere teaches the computer-implemented method of claim 1. Cleere does not explicitly teach wherein the input data is sensor data collected by a robot, wherein the final output is used for guiding an action performed by the robot. However, Huang teaches wherein the input data is sensor data (i.e. grasp quality sensor (22) data; para. [0035]) collected by a robot (i.e. The systems and methods for hybrid machine learning (ML)-based training of object picking robots with real and simulated grasp performance data disclosed herein present a new and improved processing pipeline for solving robot picking problems; para. [0003]), wherein the final output is used for guiding an action performed by the robot (i.e. a system for training an object picking robot with real and simulated grasp performance data is provided. The system includes one or more memory devices, and one or more processors in communication with the one or more memory devices and the object picking robot. The one or more processors are programmed to assign a plurality of grasp locations on an object based on known or estimated physical properties of the object. The one or more processors are programmed to perform a first simulation experiment for the robot grasping the object using a first set of the plurality of assigned grasp locations; para. [0005]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Cleere to include the feature of Huang. One would have been motivated to make this modification because it yields more robust robotic control, reducing errors in navigation or manipulation. 16. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Cleere in view of Inagaki et al. (U.S. Patent Application Pub. No. US 20170031329 A1). 
Claim 11: Cleere teaches the computer-implemented method of claim 1. Cleere does not explicitly teach wherein the input data represents data collected by an industrial process, wherein the final output is used for determining an action performed by the industrial process. However, Inagaki teaches wherein the input data (i.e. observing a state variable comprising at least one of data output from a sensor that detects a state of one of the industrial machine and a surrounding environment, internal data of control software controlling the industrial machine, and computational data obtained based on one of the output data and the internal data; para. [0021]) represents data collected by an industrial process (i.e. the learning unit may be configured to learn the condition in accordance with the training data set generated for each of a plurality of industrial machines; para. [0010]), wherein the final output is used for determining an action performed by the industrial process (i.e. the fault prediction device further including a fault information output unit that outputs fault information indicating one of whether a fault has occurred in the industrial machine and a degree of fault, in response to input of a current state variable of the state variable, based on a result of learning by the learning unit in accordance with the training data set; para. [0013]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Cleere to include the feature of Inagaki. One would have been motivated to make this modification because it yields more robust control of the industrial process, reducing errors and improving efficiency. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Moazzami et al. (Pub. No. 
US 20200042903 A1) discloses techniques that support the use of “ensemble” modeling, which means multiple models can be generated and used to label data. These models include “base” models and a “fusion” model. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)). Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN whose telephone number is (303) 297-4266. The examiner can normally be reached Monday through Thursday, 8:00 am - 5:00 pm MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Ell, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. 
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /TAN H TRAN/Primary Examiner, Art Unit 2141
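The threshold-based selection that the rejections of claims 5-6 cite from Bellegarda (para. [0034]) — preferring the knowledge model's output when its measure of accuracy clears a cutoff, and otherwise falling back to the machine learning based model's output — can be sketched as below. This is a minimal illustration under stated assumptions: the `Prediction` type, its field names, and the example labels are invented for the sketch; only the 0.5 cutoff mirrors the 50% example in the cited paragraph.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # hypothetical measure of accuracy in [0, 1]

def select_final_output(rule_pred, ml_pred, threshold=0.5):
    """Use the rule-based knowledge model's output when its confidence
    clears the threshold (a 'reliable' rule); otherwise fall back to the
    machine learning based model's output."""
    if rule_pred.confidence > threshold:
        return rule_pred
    return ml_pred

# A reliable rule wins; an unreliable rule defers to the ML model.
final = select_final_output(Prediction("NOUN", 0.82), Prediction("VERB", 0.64))
# final.label == "NOUN"
```

Swapping the two arguments' confidence values would make the fallback branch fire instead, which is the claim 6 / claim 16 scenario (knowledge model output has lower accuracy, so the final output is based on the machine learning based model's output).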

Prosecution Timeline

Mar 31, 2023
Application Filed
Nov 15, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594668
BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12579420
Analog Hardware Realization of Trained Neural Networks
2y 5m to grant Granted Mar 17, 2026
Patent 12579421
Analog Hardware Realization of Trained Neural Networks
2y 5m to grant Granted Mar 17, 2026
Patent 12572850
METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF
2y 5m to grant Granted Mar 10, 2026
Patent 12572326
DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
92%
With Interview (+31.8%)
3y 6m
Median Time to Grant
Low
PTA Risk
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
