Prosecution Insights
Last updated: April 19, 2026
Application No. 17/166,319

MACHINE LEARNING APPROACH TO MULTI-DOMAIN PROCESS AUTOMATION AND USER FEEDBACK INTEGRATION

Final Rejection — §101, §112
Filed
Feb 03, 2021
Examiner
KWON, JUN
Art Unit
2127
Tech Center
2100 — Computer Architecture & Software
Assignee
Siscale AI Inc.
OA Round
4 (Final)
Grant Probability: 38% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 4y 3m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 38% (26 granted / 68 resolved; -16.8% vs TC avg). Grants only 38% of cases.
Interview Lift: +46.2% across resolved cases with interview (a strong lift).
Avg Prosecution: 4y 3m typical timeline; 34 applications currently pending.
Total Applications: 102 across all art units (career history).
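The headline figures above are simple ratios over the examiner's resolved cases. As an illustrative sketch (the helper name is hypothetical, not part of any real product API), the career allow rate and the implied Tech Center average can be derived like this:

```python
def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that were granted."""
    return 100.0 * granted / resolved

# 26 granted out of 68 resolved cases, as reported above
rate = allow_rate_pct(26, 68)
print(round(rate))  # 38 (%), matching the dashboard's 38%

# The dashboard reports -16.8% vs the Tech Center average,
# which implies a TC average near 55%
tc_avg = rate + 16.8
print(round(tc_avg, 1))
```

This assumes the "vs TC avg" delta is expressed in percentage points, which is consistent with the statute-specific figures below.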

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.4% (+1.4% vs TC avg)
§102: 7.6% (-32.4% vs TC avg)
§112: 18.1% (-21.9% vs TC avg)
Compared against Tech Center average estimates • Based on career data from 68 resolved cases

Office Action

§101, §112
Detailed Action

This Office Action is in response to the remarks entered on 10/13/2025. Claims 1-25 remain pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claim Objections

Amended claims were received on 10/13/2025. The claim objections are withdrawn.

Claim Rejections - 35 USC § 112

Amended claims were received on 10/13/2025. The 35 U.S.C. 112 rejection is withdrawn.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 2A, Prong 1 — the following limitations recite mental processes:
- "generating, for each prediction: responsive to the prediction indicating a class for the corresponding record is an unknown class, assigning ... determining, responsive to determining to validate the prediction using the single user feedback, ... determining, responsive to determining to validate the prediction using multiple user feedback, determining ... [a] result representing a consensus of the multiple user feedback received from the two or more client devices for the prediction when the user feedback indicates that the user is unsure about the prediction" (mental process of judgment: determining whether the prediction has already been validated by a plurality of users can be done in one's mind);
- "generating ... the multiple user feedback for the new class" (mental process of evaluation: determining a set of classes for data based on user feedback can be performed in one's mind);
- "generating ... the single user feedback and the one or more validated predictions determined from multiple user feedback" (mental process of evaluation, as it merely recites validating a group of prediction results, which can be done in the human mind);
- "updating ... one or more validated predictions determined from the single user feedback and the one or more validated predictions determined from the multiple user feedback" (mental process of evaluation and judgment: adding new data (classes) into a dataset based on prediction results can be done in one's mind with the aid of pen and paper).

Step 2A, Prong 2 — the additional elements are:
- "A method, comprising, by one or more computing devices:" (mere instructions to apply an exception using a computer, MPEP 2106.05(f));
- "generating, with a process automation system by executing a machine learning (ML) model" (MPEP 2106.05(f));
- "for each prediction: assigning at the process automation system" (MPEP 2106.05(f));
- "receiving, at the process automation system from at least one client device, at least one of single user feedback describing a quality of the prediction" (insignificant extra-solution activity of gathering statistics, MPEP 2106.05(g));
- "determining, with the process automation system" (three instances; MPEP 2106.05(f));
- "generating, at the process automation system, an extended set of classes" (MPEP 2106.05(f));
- "generating, with the process automation system" (MPEP 2106.05(f));
- "updating, at the process automation system" (MPEP 2106.05(f));
- "generating an updated ML model using the updated training set by further training the ML model, the updated model comprising an updated feature set and updated weights" (MPEP 2106.05(f));
- "storing, at the process automation system, the updated ML model, the updated training dataset, and performance metrics for the updated ML model" (insignificant extra-solution activity of mere data gathering, MPEP 2106.05(g)(iii)).

The additional elements, alone or in combination, do not integrate the judicial exception into a practical application, as they are mere insignificant extra-solution activity and a combination of generic computer functions restricted to a field of use, implemented to perform the abstract idea identified above.

Step 2B — the additional elements are re-evaluated as follows:
- the "receiving ... user feedback" step, indicated as insignificant extra-solution activity (MPEP 2106.05(g)) in Step 2A, Prong 2, is re-evaluated as the well-understood, routine, and conventional activity of gathering statistics (MPEP 2106.05(d));
- the "storing ..." step, indicated as insignificant extra-solution activity of mere data gathering (MPEP 2106.05(g)(iii)) in Step 2A, Prong 2, is re-evaluated as the well-understood, routine, and conventional activity of storing information in memory (MPEP 2106.05(d)(II)(iv));
- the remaining elements remain mere instructions to apply the exception using a computer (MPEP 2106.05(f)).

The additional elements, in combination with the abstract idea, are not sufficient to amount to significantly more than the judicial exception, as they are well-understood, routine, and conventional activity combined with generic computer functions and field-of-use limitations, implemented to perform the abstract idea identified above.
Regarding claim 2:

Step 2A, Prong 1:
- "selecting a subset of the predictions for the multiple user feedback by a plurality of users" (observation mental process, as it merely recites selecting data from a set of data, which can be done in the human mind);
- "determining an agreement result for each of the selected subset of the predictions based on the multiple user feedback, wherein generating the user validated record pool includes incorporating the agreement result for the selected subset of predictions" (mental process of evaluation, as it merely recites determining the final result by combining selected predictions, which can be done with the aid of pen and paper).

Step 2A, Prong 2:
- "further comprising, by the one or more computing devices:" (mere instructions to apply an exception using a computer, MPEP 2106.05(f));
- "receiving the single user feedback for each of the predictions" (mere data gathering, an insignificant extra-solution activity, MPEP 2106.05(g));
- "receiving the multiple user feedback for the selected subset of the predictions" (mere data gathering, an insignificant extra-solution activity, MPEP 2106.05(g)).

Step 2B: the two "receiving" steps, indicated as insignificant extra-solution activity (MPEP 2106.05(g)) in Step 2A, Prong 2, are re-evaluated as the well-understood, routine, and conventional activity of gathering statistics (MPEP 2106.05(d)); the remaining element is mere instructions to apply an exception using a computer (MPEP 2106.05(f)).
Regarding claim 3:
Step 2A, Prong 1: "wherein a prediction is selected for the multiple user feedback when the single user feedback for the prediction indicates user uncertainty regarding accuracy of the prediction" (mental process of evaluation, which can be done in the human mind).
Step 2A, Prong 2: this judicial exception is not integrated into a practical application.
Step 2B: the claim does not include additional elements sufficient to amount to significantly more than the judicial exception.

Regarding claim 4:
Step 2A, Prong 1: "wherein the subset of the predictions for the multiple user feedback is selected based on analyzing similarity of the records" (mental process of evaluation, which can be done in the human mind).
Step 2A, Prong 2: not integrated into a practical application.
Step 2B: no additional elements sufficient to amount to significantly more.

Regarding claim 5:
Step 2A, Prong 1: "wherein the subset of the predictions for the multiple user feedback is selected based on confidence levels of the predictions" (mental process of evaluation, which can be done in the human mind).
Step 2A, Prong 2: not integrated into a practical application.
Step 2B: no additional elements sufficient to amount to significantly more.

Regarding claim 6:
Step 2A, Prong 1: "wherein the subset of the predictions for the multiple user feedback is selected based on user accuracy ratings" (mental process of evaluation, which can be done in the human mind).
Step 2A, Prong 2: not integrated into a practical application.
Step 2B: no additional elements sufficient to amount to significantly more.

Regarding claim 7:
Step 2A, Prong 1: "wherein, for each prediction selected for the multiple user feedback, determining the agreement result includes determining a majority voting agreement" (mental process of evaluation, which can be done in the human mind).
Step 2A, Prong 2: not integrated into a practical application.
Step 2B: no additional elements sufficient to amount to significantly more.

Regarding claim 8:
Step 2A, Prong 1: "wherein, for each prediction selected for the multiple user feedback, determining the agreement result includes weighting the multiple user feedback based on user accuracy rating" (mental process of evaluation, which can be done in the human mind).
Step 2A, Prong 2: not integrated into a practical application.
Step 2B: no additional elements sufficient to amount to significantly more.

Regarding claim 9:
Step 2A, Prong 1: "wherein, for each prediction selected for the multiple user feedback, determining the agreement result includes performing a consensus iteration including a higher-level agreement process that is used when a lower-level agreement process fails to determine the agreement result" (mental process of evaluation, which can be done in the human mind).
Step 2A, Prong 2: not integrated into a practical application.
Step 2B: no additional elements sufficient to amount to significantly more.

Regarding claim 10:
Step 2A, Prong 1: "selecting a job category, the job category being associated with the ML model" and "selecting a dataset from the collection of datasets, wherein the ML model is trained using the dataset" (mental processes of judgment, as each recites a selection that can be done in the human mind).
Step 2A, Prong 2:
- "providing a user interface to a user device for defining a ML job" (mere instructions to apply an exception using a generic computer as a tool, MPEP 2106.05(f));
- "further comprising, by the one or more computing devices and prior to generating the predictions for the features of the records using the ML model:" (MPEP 2106.05(f));
- "storing, in one or more storage modules, a pool of ML models including the ML model and a collection of datasets" (insignificant extra-solution activity, MPEP 2106.05(g)).
Step 2B: the "storing" step is the well-understood, routine, and conventional activity of storing and retrieving information in memory (MPEP 2106.05(d)(II)(iv); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93); the remaining elements are mere instructions to apply an exception using a computer (MPEP 2106.05(f)).

Regarding claim 11:
Step 2A, Prong 1: incorporates the rejection of claim 10.
Step 2A, Prong 2 and Step 2B: "further comprising, by one or more computing devices, adding the ML model trained using the user validated record pool to the pool of ML models as a later version of the ML model trained using the dataset" (mere instructions to apply an exception, MPEP 2106.05(f)).

Regarding claim 12:
Step 2A, Prong 2 and Step 2B: "further comprising, by one or more computing devices, validating ..." (mere instructions to apply an exception, MPEP 2106.05(f)).

Regarding claim 13: claim 13 is a system claim having limitations similar to claim 1 and is rejected under the same rationale as claim 1 above. The additional element "a system comprising: one or more computing devices configured to" is mere instructions to apply an exception using a computer (MPEP 2106.05(f)).

Claims 14-17 are system claims having limitations similar to claims 2-5, respectively, and are rejected under the same rationale as those claims above.
Claims 18-24 are system claims having limitations similar to claims 6-12, respectively, and are rejected under the same rationale as those claims above.

Regarding claim 25: claim 25 is a computer-readable-medium claim having limitations similar to claim 1 and is rejected under the same rationale as claim 1 above. The additional element "a non-transitory computer readable medium comprising stored instructions, which when executed by a processor, cause the processor to" is mere instructions to apply an exception using a computer (MPEP 2106.05(f)).

Response to Arguments

Applicant's arguments filed 10/13/2025 have been fully considered but are not persuasive.

Claim Objections and 35 U.S.C. 112 Rejection: amended claims were received on 10/13/2025, and the claim objections and the 35 U.S.C. 112 rejection are withdrawn.

Response to Arguments under 35 U.S.C. 101: Applicant asserts that the amended claim elements enumerate a specific example of collecting information to expand a class set for an ML model, including both single and multiple user feedback, updating the training dataset to include the expanded class set and its feedback, and training a new ML model incorporating the new class and feedback, which would be impossible to feasibly perform in a human mind and is rooted in the technical training and implementation of a machine learning model.

The examiner respectfully disagrees. The applicant makes only a general allegation that the claim elements would be impossible to perform in a human mind. First, "collecting information to expand a class set including both single and multiple user feedback" and "updating the training dataset to include the expanded class set and its feedback" can practically be performed in the human mind: evaluating and correcting a dataset based on one's decisions can be performed in one's mind with the aid of pencil and paper. For example, claim 1 merely recites assigning a new class to the prediction (classification, which can be performed mentally); identifying records for validating the prediction (a mental process of evaluating and finding records for validation); determining whether to validate the prediction using single user feedback or multiple user feedback (evaluating the prediction and decision making) and validating the prediction using user feedback; generating a user validated record pool (evaluating the prediction and keeping a record of it); and updating the training dataset (which can be done with the aid of pen and paper). Second, training a new ML model incorporating the new class and feedback is directed to mere instructions to apply an exception using a computer (MPEP 2106.05(f)), as "generating an updated ML model ..." is recited at a high level of generality.
Claim 1 as a whole merely recites performing the mental processes identified above using a generic computing device comprising a generic machine learning model. Accordingly, the arguments as to claims 1, 13, and 25 are not persuasive, and the arguments as to the remaining dependent claims 2-12 and 14-24 are therefore also not persuasive.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Stumpf et al., "Toward Harnessing User Feedback for Machine Learning," 2007 (disclosing the use of user feedback to train a machine learning model).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension-of-time policy set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUN KWON, whose telephone number is (571) 272-2072. The examiner can normally be reached Monday through Friday, 7:30 AM to 4:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abdullah Kawsar, can be reached at (571) 270-3169. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/JUN KWON/ Examiner, Art Unit 2127
/ABDULLAH AL KAWSAR/ Supervisory Patent Examiner, Art Unit 2127

Prosecution Timeline

Feb 03, 2021
Application Filed
Apr 05, 2024
Non-Final Rejection — §101, §112
Aug 09, 2024
Applicant Interview (Telephonic)
Aug 09, 2024
Examiner Interview Summary
Aug 16, 2024
Response Filed
Oct 29, 2024
Final Rejection — §101, §112
Mar 03, 2025
Request for Continued Examination
Mar 06, 2025
Response after Non-Final Action
Jun 10, 2025
Non-Final Rejection — §101, §112
Aug 20, 2025
Examiner Interview Summary
Aug 20, 2025
Applicant Interview (Telephonic)
Oct 13, 2025
Response Filed
Nov 20, 2025
Final Rejection — §101, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602569
EXTRACTING ENTITY RELATIONSHIPS FROM DIGITAL DOCUMENTS UTILIZING MULTI-VIEW NEURAL NETWORKS
2y 5m to grant • Granted Apr 14, 2026
Patent 12602609
UPDATING MACHINE LEARNING TRAINING DATA USING GRAPHICAL INPUTS
2y 5m to grant • Granted Apr 14, 2026
Patent 12579436
Tensorized LSTM with Adaptive Shared Memory for Learning Trends in Multivariate Time Series
2y 5m to grant • Granted Mar 17, 2026
Patent 12572777
Policy-Based Control of Multimodal Machine Learning Model via Activation Analysis
2y 5m to grant • Granted Mar 10, 2026
Patent 12493772
LAYERED MULTI-PROMPT ENGINEERING FOR PRE-TRAINED LARGE LANGUAGE MODELS
2y 5m to grant • Granted Dec 09, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 38%
With Interview: 84% (+46.2%)
Median Time to Grant: 4y 3m
PTA Risk: High
Based on 68 resolved cases by this examiner. Grant probability derived from career allow rate.
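The "With Interview" figure is consistent with treating the interview lift as additive percentage points on top of the base grant probability. A minimal sketch of that arithmetic (the helper is hypothetical, not the product's actual model):

```python
def with_interview(base_pct: float, lift_pts: float) -> float:
    """Grant probability after an interview, assuming the lift is
    additive in percentage points and capping the result at 100%."""
    return min(base_pct + lift_pts, 100.0)

# 38% base probability plus the reported +46.2-point interview lift
print(round(with_interview(38.0, 46.2)))  # 84, matching the 84% shown above
```

The cap matters only near the top of the scale; for this examiner the sum stays well under 100%.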
