Prosecution Insights
Last updated: April 19, 2026
Application No. 16/039,965

CONVERSATIONAL OPTIMIZATION OF COGNITIVE MODELS

Final Rejection: §103, §112, Double Patenting
Filed: Jul 19, 2018
Examiner: BARNES JR, CARL E
Art Unit: 2178
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 8 (Final)
Grant Probability: 32% (At Risk)
OA Rounds: 9-10
To Grant: 4y 4m
With Interview: 57%

Examiner Intelligence

Career Allow Rate: 32% (65 granted / 202 resolved; -22.8% vs TC avg)
Interview Lift: +25.2% (strong; based on resolved cases with interview)
Avg Prosecution: 4y 4m (typical timeline)
Currently Pending: 32
Total Applications: 234 (career history, across all art units)

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 62.6% (+22.6% vs TC avg)
§102: 9.0% (-31.0% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)

Tech Center averages are estimates; figures based on career data from 202 resolved cases.
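The headline percentages above follow from the raw counts with simple arithmetic. A quick sketch (note the Tech Center average is back-solved from the stated -22.8% delta, so it is an inferred estimate, not a figure taken from USPTO data):

```python
# Reproducing the dashboard's headline figures from the counts shown
# above (65 granted of 202 resolved). The implied Tech Center average
# is back-solved from the stated delta, so treat it as an estimate.
granted, resolved = 65, 202

allow_rate = 100 * granted / resolved      # career allow rate, in percent
delta_vs_tc = -22.8                        # stated delta vs TC average
implied_tc_avg = allow_rate - delta_vs_tc  # TC average implied by the delta

print(f"career allow rate:  {allow_rate:.1f}%")   # 32.2%, shown as 32%
print(f"implied TC average: {implied_tc_avg:.1f}%")
```

The same back-solving applies to the statute-specific deltas, e.g. a 62.6% §103 affirmance figure with a +22.6% delta implies a TC average near 40%.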

Office Action

§103, §112, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Claims 1-5, 10-11, 13-15, and 19-27 were previously pending and subject to the non-final action mailed 09/18/2025. In the response filed on 12/18/2025, claims 1, 15, 20, and 23 were amended. Therefore, claims 1-5, 10-11, 13-15, and 19-27 are currently pending and subject to the final action below.

Response to Arguments

Applicant's arguments filed 12/18/2025 with respect to claims 1, 3, 4, 6, 11, 15, and 20 under Double Patenting have been fully considered, but the rejection is maintained. Applicant acknowledges the double patenting rejection and requests that the rejection be held in abeyance because no claim in the present application is currently allowable.

Applicant's arguments filed 12/18/2025 regarding claims 1-5, 10-11, 13-15, and 19-27 under 35 U.S.C. 103 have been fully considered, but they are not persuasive.

Applicant's argument: Applicant submits that the cited references fail to teach or suggest at least these elements of the claims. Additionally, during the interview, the Examiner agreed that the amendments appeared to require further search and consideration. Applicant respectfully submits that the combination of cited references does not render the amended claims 1, 15, and 20 obvious.

Examiner response: After careful consideration and review of Applicant's arguments, the Examiner respectfully disagrees that the amendments overcome the prior art of record, for the reasons below.

Regarding independent claim 1, Rogers teaches:

An apparatus comprising: a memory including program code; (Rogers – [pdf page 7] Run on a MacBook with a dual-core Intel i7 processor and 8GB of memory. Ava was configured to generate code using the Pandas, Matplotlib, and scikit-learn libraries.)
and a processor configured to access the memory and to execute the program code to: (Rogers – [pdf page 7] Run on a MacBook with a dual-core Intel i7 processor and 8GB of memory. Ava was configured to generate code using the Pandas, Matplotlib, and scikit-learn libraries.)

provide a list of datasets including a first dataset; (Rogers – [pdf page 2] Data Loading and Model Training. datasets from CSV files or databases) [Image: media_image1.png]

process a first natural language user instruction from the user specifying the first dataset; (Rogers – [pdf page 2] User statement: load data from train_sample.csv.) [Images: media_image2.png, media_image3.png]

divide the first dataset into a training set and a test set; (Rogers – [pdf page 3] Ava asks the user, "do you want to split your data into train and test"; Ava is suggesting to split the data and apply a metric.)

present a first natural language statement to the user, suggesting a first cognitive algorithm for the user by processing a dataset comprising historical experiments, (Rogers – [pdf page 3] Ava statement: "choose among mean, median and most frequent to fill in missing values"; the mean, median, and most frequent are algorithms to use against the dataset suggested by Ava. Ava is the agent presenting the statement to the user.)
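For context, the Ava workflow cited in the mapping above (load a CSV, split the data into training and test sets, fill missing values with the mean, fit an l2-penalized logistic regression, report accuracy) can be sketched in plain Python. The synthetic data, feature count, and hyper-parameters below are illustrative assumptions only; none of it is taken from the Rogers paper or the claims:

```python
# Pure-Python sketch of the pipeline described in the Rogers mapping:
# 60-40 train/test split, mean imputation of missing values, an
# l2-penalized logistic regression fit by gradient descent, and test
# accuracy. Synthetic data stands in for the CSV; this illustrates the
# cited steps, not Ava's actual implementation.
import math
import random

random.seed(0)

# Synthetic binary-classification rows; None marks a missing cell
rows = []
for _ in range(200):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    label = 1 if x1 + 0.5 * x2 > 0 else 0
    if random.random() < 0.05:
        x2 = None
    rows.append(([x1, x2], label))

# 60-40 split, as the user requests in the cited dialogue
random.shuffle(rows)
cut = int(0.6 * len(rows))
train, test = rows[:cut], rows[cut:]

# "mean" strategy: fill missing values with the training-set column mean
means = []
for j in range(2):
    vals = [x[j] for x, _ in train if x[j] is not None]
    means.append(sum(vals) / len(vals))

def impute(x):
    return [means[j] if x[j] is None else x[j] for j in range(2)]

def fit(data, l2=0.01, lr=0.1, epochs=200):
    """Logistic regression with an l2 penalty, via stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            x = impute(x)
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y
            w = [w[j] - lr * (err * x[j] + l2 * w[j]) for j in range(2)]
            b -= lr * err
    return w, b

def accuracy(data, w, b):
    hits = 0
    for x, y in data:
        x = impute(x)
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        hits += (p >= 0.5) == (y == 1)
    return hits / len(data)

w, b = fit(train)
print(f"test accuracy: {accuracy(test, w, b):.4f}")
```

In the paper Ava generates equivalent scikit-learn code (SimpleImputer, LogisticRegression, train_test_split); the hand-rolled version above just makes each cited step explicit.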
[Image: media_image4.png]

process a second natural language user instruction from the user specifying the first cognitive algorithm and a user-suggested metric; (Rogers – [pdf page 3] User instructs Ava to use the "mean" metric; "run logistic regression…" is the algorithm) [Image: media_image5.png]

select one or more metrics and a testing strategy based on the first dataset, (Rogers – [pdf page 3] user selects a 60-40 split and logistic regression with penalty l2 and solver, i.e., a metric and test strategy on the dataset)

wherein the one or more metrics comprise the user-suggested metric (Rogers – [pdf page 3] Ava asks the user, "do you want to split your data into train and test"; Ava is suggesting to split the data and apply a metric. The user requests a 60-40 split and logistic regression. Ava splits the data 60-40 and runs logistic regression with penalty l2 and solver, as shown in the Fig. 2 timeline.)

generate a cognitive model by applying the first cognitive algorithm to the training set of the first dataset; (Rogers – [pdf page 3] Fig. 2 "Run logistic regression…" Ava reports back that the accuracy after cross validation is 0.7730. The model (cognitive model) is generated.)

test the cognitive model against a portion of the test set of the first dataset based on the testing strategy; (Rogers – [pdf page 3] Fig. 2; Ava asks the question "Do you want to run your model on test data?" The user response is YES; the test strategy is the 60-40 split.)

present a third natural language statement to the user indicating a result of testing the cognitive model, (Rogers – [pdf page 3] Fig. 2; Ava statement to the user: "the testing accuracy after cross validation is 0.7730.") [Image: media_image6.png]

wherein the third natural language statement comprises the selected one or more metrics indicating an accuracy of the cognitive model, (Rogers – [pdf page 3] Fig. 2; Ava statement to the user: "there are 2 values for column target.
The supported binary classification algorithms are decision trees and logistic regression." These algorithms are in addition to being used with the "mean" metric.) [Image: media_image7.png]

Michalak teaches:

one or more errors, and a contribution of each error to the one or more metrics, (Michalak − [Col. 16 l. 60 – Col. 17 l. 27] identify an error resulting from a statistical language model trained using training data; the error may be an error that occurred when predictively annotating certain text data to have a particular value or label, "an error with a particular value or label in the data".)

and an indication to provide one or more visual elements that identify a location of each error within the first dataset; (Michalak − [Col. 17 ll. 10-15] The error may be an error that occurred when predictively annotating certain text data to have a particular value or label, for example a categorization error from named entity recognition models. [Col. 17 ll. 23-25] a change log to record errors and changes to values or labels.)

predict one or more questions corresponding to the one or more errors based on the result of testing the cognitive model; ([Col. 3 ll. 45-48] As used herein, an "agent" may refer to an autonomous program module configured to perform specific tasks on behalf of a host and without requiring the interaction of a user. [Col. 17 ll. 5-20] The annotations may be semantic annotations to text data, for creating annotated messages by generating, at least in part by a trained statistical language model, predictive labels that correspond to part-of-speech, syntactic role, sentiment, and/or other language patterns associated with the text data.)

determine, without receiving a third natural language user instruction, one or more actions addressing the one or more errors to the first dataset or the first cognitive algorithm, comprising combining two or more labels or refining text within a label; (Michalak − [Col. 3 ll.
45-48] As used herein, an "agent" may refer to an autonomous program module configured to perform specific tasks on behalf of a host and without requiring the interaction of a user. [Col. 17 ll. 5-20] cause a model training process 504 to correct the error and produce corrected, updated training data 520; generating, at least in part by a trained statistical language model, predictive labels that correspond to part-of-speech; the part-of-speech is two or more labels combined.)

Brennan teaches:

present a fourth natural language statement to the user recommending the one or more actions; (Brennan − [0042] first computing system 14, or other NLP question answering system having a ground truth verification engine 16. [0030] present cluster reclassification recommendations for SME review. Reclassification of ground truth data is one action.)

and based on a fourth natural language user instruction from the user specifying a change based on the one or more recommended actions, make the change to the cognitive model. (Brennan − [0053] At step 411, the ground truth verification method updates and retrains the model based on the SME verification or correction input. The processing at step 411 may be performed at a cognitive system, such as the QA system 101, first computing system 14, or other NLP question answering system.)

Kil teaches:

wherein the historical experiments are selected based on similarity in label characteristics to the first dataset; (Kil − [0092] the data exploration dialog (750) may also include a data file history that provides a history of algorithms performed on a particular data file (740), including parameters. Such a data file history may facilitate evaluating the performance of composite algorithms. The data exploration dialog (750) may further include the ability to select between several and various algorithms.
After retrieving the history of algorithms (historical experiments), the data exploration dialog, which is based on the data file (740), provides the ability to select one or more algorithms. Data file (740) parameters are labels.) Examiner note: The algorithms are a part of the historical experiments used to train a model.

Simard teaches:

and at least one additional metric recommended by a processor; present a second natural language statement to the user recommending one or more additional metrics based on the first dataset and prior learned behavior; (Simard – [0041] The method involves a subsequent step 510 of displaying metrics for a set of documents that have been classified using the classification model. The method further entails a step 520 of displaying a user feedback guide that presents recommended actions to refine the classification model. The recommended actions (which take the form of recommendations, suggestions, tips, etc.) may be specific to one of the metrics or applicable to two or more metrics. Examiner note: suggestions/recommendations/tips as to one of the metrics for the user to select.)

Michalak teaches an autonomous program module configured to perform specific tasks on behalf of a host and without requiring the interaction of a user, and to predict correct labels for the errors of the model. Therefore, the rejection is maintained.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-5, 10-11, 13-15, and 19-27 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 15, and 20 recite the limitation "predict one or more questions corresponding to the one or more errors based on the result of testing the cognitive model; determine, without receiving third natural language user instruction". Applicant's remarks cite paragraphs 0050 and 0059-0060 and Fig. 4 of the specification in support of the claim amendment. Upon review of the recited paragraphs, in addition to other specification paragraphs such as paragraph 0049, there is no clear support for predicting one or more questions corresponding to the one or more errors based on the result of testing the cognitive model. There appears to be support for predicting one or more questions without receiving user instructions; however, there is no clear support that the predicted one or more questions correspond to the one or more errors based on the result of testing the cognitive model.
Paragraph 0049 of Applicant's specification recites that "in some implementations, the computer may anticipate questions the user may have. For example, the computer may anticipate that the user will want to know the accuracy of an experiment, and will make suggestions without having to ask. In another scenario, the computer may suggest, without being prompted by the user, trying a different dataset based on a determination that the new dataset may expose or confirm a potential inefficiency". However, while the system may anticipate/predict questions for the user, the specification does not explicitly recite that this is "corresponding to the one or more errors based on the result of testing the cognitive model".

Paragraph 0050 of Applicant's specification recites that "The method 200 may include initiating at 202 a conversation to prompt a user to specify a dataset and settings, such as model hyper-parameters. For example, the system 100 of FIG. 1 may use a speaker or a display screen to ask the user if they would like to select a dataset 116 and an algorithm 118. When prompted, the system 100 may provide suggestions of either the algorithm 118 or data set 116. Where desired, the system 100 may present options of each, as well as descriptions to facilitate the selection by the user." Paragraph 0050 does not explicitly recite that the anticipated/predicted questions are "corresponding to the one or more errors based on the result of testing the cognitive model".

Dependent claims 2-5, 10-11, 13-14, 19, and 21-27 are rejected for fully incorporating the limitations of their base claims.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c).
A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 3, 4, 11, 15, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-6, 8-9, and 11-19 of U.S. Patent No. 10,810,994 in view of Rogers (Ava: From Data to Insights Through Conversation, 2017, hereinafter "Rogers") in view of Michalak (US PAT 9535902 B1, filed Aug. 28, 2015) in view of Brennan (US PGPUB 20180075368 A1, filed Sep. 12, 2016) in view of Kil (US PGPUB 20020138492 A1, filed Nov. 16, 2001) in view of Simard (US PGPUB 20140122486 A1, filed Oct. 31, 2012).

Claim comparison chart (US PAT 10,810,994 B2 / Instant App 16/039,965):

1. An apparatus comprising: a memory including program code comprising an application programming interface and a user interface; and a processor configured to access the memory and to execute the program code to

3. The apparatus of claim 1, further comprising recommending a dataset to a user to use in the cognitive model using a natural language dialogue.

4. The apparatus of claim 1, further comprising receiving user inputs selecting a recommended dataset to use in the cognitive model.

5. The apparatus of claim 1, further comprising recommending the algorithm to a user to use in the cognitive model.
to receive user inputs selecting a recommended algorithm to use in the cognitive model,

18. The method of claim 15, further comprising recommending an algorithm to the user to use in the cognitive model.

generate a cognitive model, to run analysis on the cognitive model that uses the selected recommended algorithm to determine a factor that is impacting a performance of the cognitive model,

11. The apparatus of claim 1, wherein the processor is further configured to provide an explanation of the analysis to a user.

9. The apparatus of claim 1, wherein the processor is further configured to retrieve data known to be accurate to use in the cognitive model.

14. The apparatus of claim 1, wherein the processor is further configured to run an algorithm on a dataset and reports results based on a selected metric.

to determine an action based on the factor, to report at least one of the factor and the action to a user,

18. The method of claim 15, further comprising recommending an algorithm to the user to use in the cognitive model.

and to use the action to generate a second cognitive model.

1.
An apparatus comprising: a memory including program code; and a processor configured to access the memory and to execute the program code to: provide a list of datasets including a first dataset; process a first natural language user instruction from the user specifying the first dataset; present a first natural language statement to the user, suggesting a first cognitive algorithm for the user by processing a dataset comprising historical experiments, wherein the historical experiments are selected based on similarity in label characteristics to the first dataset; process a second natural language user instruction from the user specifying the first cognitive algorithm and a user-suggested metric; present a second natural language statement to the user recommending one or more additional metrics based on the first dataset and prior learned behavior; select one or more metrics and a testing strategy based on the first dataset, wherein the one or more metrics comprise the user-suggested metric and at least one additional metric recommended by a processor; generate a cognitive model by applying the first cognitive algorithm to the training set of the first dataset; test the cognitive model against a portion of the first dataset based on the testing strategy; present a third natural language statement to the user indicating a result of testing the cognitive model, wherein the third natural language statement comprises the selected one or more metrics indicating an accuracy of the cognitive model, one or more errors, and a contribution of each error to the one or more metrics, and an indication to provide one or more visual elements that identify a location of each error within the first dataset; based on a third natural language user instruction from the user requesting for suggestion about the result, determine, without receiving third natural language user instruction one or more actions addressing the one or more errors to the first dataset or the first cognitive 
algorithm, comprising combining two or more labels or refining text within a label; present a fourth natural language statement to the user recommending the one or more actions; and based on a fourth natural language user instruction from the user specifying a change based on the one or more recommended actions, make the change to the cognitive model.

1. program code comprising an application programming interface and a user interface;

4. The apparatus of claim 1, wherein the program code includes an application programming interface and a user interface.

2. The apparatus of claim 1, wherein the processor is further configured to execute the program code to run a plurality of experiments to generate a plurality of cognitive models.

11. The apparatus of claim 1, wherein the processor is further configured to execute the program code to run a plurality of experiments to generate a plurality of cognitive models.

4. The apparatus of claim 1, further comprising receiving user inputs selecting a recommended dataset to use in the cognitive model.

6. The apparatus of claim 1, wherein the user input designates a recommended dataset.

6. The apparatus of claim 1, wherein the processor is further configured to store the cognitive model in the memory.

3. The apparatus of claim 1, wherein the processor is further configured to store the cognitive model in the memory.

As to claim 1, the only differences between the instant application 16/039,965 and claims 1-6, 8-9, and 11-19 of Patent No. 10,810,994 are the limitations: "by processing a dataset comprising historical experiments wherein the historical experiments are selected based on similarity in label characteristics to the first dataset; one or more additional metrics based on the first dataset and prior learned behavior; select one or more metrics and a testing strategy based on the first dataset, wherein the one or more metrics comprise the user-suggested metric and at least one additional metric recommended by a processor; and an indication to provide one or more visual elements that identify a location of each error within the first dataset; predict one or more questions corresponding to the one or more errors based on the result of testing the cognitive model; comprising combining two or more labels or refining text within a label;"

Rogers teaches: present a first natural language statement to the user, suggesting a first cognitive algorithm for the user by processing a dataset comprising historical experiments, (Rogers – [pdf page 3] Ava statement: "choose among mean, median and most frequent to fill in missing values"; the mean, median, and most frequent are algorithms to use against the dataset suggested by Ava. Ava is the agent presenting the statement to the user.)
select one or more metrics and a testing strategy based on the first dataset, (Rogers – [pdf page 3] user selects a 60-40 split and logistic regression with penalty l2 and solver, i.e., a metric and test strategy on the dataset)

wherein the one or more metrics comprise the user-suggested metric (Rogers – [pdf page 3] Ava asks the user, "do you want to split your data into train and test"; Ava is suggesting to split the data and apply a metric. The user requests a 60-40 split and logistic regression. Ava splits the data 60-40 and runs logistic regression with penalty l2 and solver, as shown in the Fig. 2 timeline.)

Rogers does not explicitly teach: one or more additional metrics and at least one additional metric recommended by a processor; one or more additional metrics based on the first dataset and prior learned behavior;

However, Simard teaches: and at least one additional metric recommended by a processor; (Simard – [0041] The method involves a subsequent step 510 of displaying metrics for a set of documents that have been classified using the classification model. The method further entails a step 520 of displaying a user feedback guide that presents recommended actions to refine the classification model. The recommended actions (which take the form of recommendations, suggestions, tips, etc.) may be specific to one of the metrics or applicable to two or more metrics. Examiner note: suggestions/recommendations/tips as to one of the metrics for the user to select.)

present a second natural language statement to the user recommending one or more additional metrics based on the first dataset and prior learned behavior; (Simard – [0041] The method involves a subsequent step 510 of displaying metrics for a set of documents that have been classified using the classification model. The method further entails a step 520 of displaying a user feedback guide that presents recommended actions to refine the classification model.
The recommended actions (which take the form of recommendations, suggestions, tips, etc.) may be specific to one of the metrics or applicable to two or more metrics. Examiner note: suggestions/recommendations/tips as to one of the metrics for the user to select.)

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers and Simard to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success. The motivation to combine provides the benefit of improving model accuracy by identifying errors and providing recommendations to improve model accuracy.

Rogers does not explicitly teach: and an indication to provide one or more visual elements that identify a location of each error within the first dataset; determine one or more actions addressing the one or more errors to the first dataset or the first cognitive algorithm, comprising combining two or more labels or refining text within a label;

However, Michalak teaches: one or more errors, and a contribution of each error to the one or more metrics, (Michalak − [Col. 16 l. 60 – Col. 17 l. 27] identify an error resulting from a statistical language model trained using training data; the error may be an error that occurred when predictively annotating certain text data to have a particular value or label, "an error with a particular value or label in the data".)

and an indication to provide one or more visual elements that identify a location of each error within the first dataset; (Michalak − [Col. 17 ll. 10-15] The error may be an error that occurred when predictively annotating certain text data to have a particular value or label, for example a categorization error from named entity recognition models. [Col. 17 ll. 23-25] a change log to record errors and changes to values or labels.)
predict one or more questions corresponding to the one or more errors based on the result of testing the cognitive model; ([Col. 3 ll. 45-48] As used herein, an "agent" may refer to an autonomous program module configured to perform specific tasks on behalf of a host and without requiring the interaction of a user. [Col. 17 ll. 5-20] The annotations may be semantic annotations to text data, for creating annotated messages by generating, at least in part by a trained statistical language model, predictive labels that correspond to part-of-speech, syntactic role, sentiment, and/or other language patterns associated with the text data.)

determine, without receiving a third natural language user instruction, one or more actions addressing the one or more errors to the first dataset or the first cognitive algorithm, comprising combining two or more labels or refining text within a label; (Michalak − [Col. 3 ll. 45-48] As used herein, an "agent" may refer to an autonomous program module configured to perform specific tasks on behalf of a host and without requiring the interaction of a user. [Col. 17 ll. 5-20] cause a model training process 504 to correct the error and produce corrected, updated training data 520; generating, at least in part by a trained statistical language model, predictive labels that correspond to part-of-speech; the part-of-speech is two or more labels combined.)

Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Simard, and Michalak to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success, since Rogers, Simard, and Michalak are in the same field of endeavor of using natural language interfaces for machine learning and data analysis for tuning models.
The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Rogers does not explicitly teach: wherein the historical experiments are selected based on similarity in label characteristics to the first dataset; However, Kil teaches: wherein the historical experiments are selected based on similarity in label characteristics to the first dataset; (Kil − [0092] the data exploration dialog (750) may also include a data file history that provides a history of algorithms performed on a particular data file (740), including parameters. Such a data file history may facilitate evaluating the performance of composite algorithms. The data exploration dialog (750) may further include the ability to select between several and various algorithms. After retrieving the history of algorithms (the historical experiments), the data exploration dialog (750), which is based on the data file (740), provides the ability to select one or more algorithms. Data file (740) parameters are labels.) Examiner notes: The algorithms are a part of the historical experiments used to train a model. Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Simard, and Kil to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success. The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 3-5, 10-11, 13-15, and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over “Ava: From Data to Insights Through Conversation” (2017, hereinafter “Rogers”) in view of Michalak (US PAT 9535902 B1, Filed Date: Aug. 28, 2015), in view of Brennan (US PGPUB: US 20180075368 A1, Filed Date: Sep. 12, 2016), in view of Kil (US PGPUB: US 20020138492 A1, Filed Date: Nov. 16, 2001), and in view of Simard (US PGPUB: US 20140122486 A1, Filed Date: Oct. 31, 2012). Regarding independent claim 1, Rogers teaches: An apparatus comprising: a memory including program code; (Rogers – [pdf page 7] Run on a MacBook with a dual-core Intel i7 processor and 8GB of memory. Ava was configured to generate code using the Pandas, Matplotlib, and scikit-learn libraries.) and a processor configured to access the memory and to execute the program code to: (Rogers – [pdf page 7] Run on a MacBook with a dual-core Intel i7 processor and 8GB of memory. Ava was configured to generate code using the Pandas, Matplotlib, and scikit-learn libraries.) provide a list of datasets including a first dataset; (Rogers – [pdf page 2] Data Loading and Model Training.
datasets from CSV files or databases) process a first natural language user instruction from the user specifying the first dataset; (Rogers – [pdf page 2] User statement: load data from train_sample.csv.) present a first natural language statement to the user, suggesting a first cognitive algorithm for the user by processing a dataset comprising historical experiments, (Rogers – [pdf page 3] Ava statement: “choose among mean, median and most frequent to fill in missing values”; the mean, median, and most frequent are algorithms to use against the dataset, suggested by Ava. Ava is the agent presenting the statement to the user.) process a second natural language user instruction from the user specifying the first cognitive algorithm and a user-suggested metric; (Rogers – [pdf page 3] User instructs Ava to use the “mean” metric; run logistic regression… is the algorithm) select one or more metrics and a testing strategy based on the first dataset, (Rogers – [pdf page 3] user selects a 60-40 split and logistic regression with penalty l2 and solver, i.e., a metric and a test strategy on the dataset) wherein the one or more metrics comprises the user-suggested metric (Rogers – [pdf page 3] Ava asks the user: do you want to split your data into train and test? Ava is suggesting to split the data and apply a metric. The user requests a 60-40 split and logistic regression. Ava splits the data 60-40 and runs logistic regression with penalty l2 and solver, as shown in the Fig. 2 timeline.) generate a cognitive model based on a portion of the first dataset and the first cognitive algorithm; (Rogers – [pdf page 3] Fig.
2 Run logistic regression… Ava reports back that the accuracy after cross-validation is 0.7730. The model (cognitive model) is generated.) test the cognitive model against a portion of the first dataset based on the testing strategy; (Rogers – [pdf page 3] Fig. 2; Ava asks the question: Do you want to run your model on test data? The user response is YES; the test strategy is a 60-40 split.) present a third natural language statement to the user indicating a result of testing the cognitive model, (Rogers – [pdf page 3] Fig. 2; Ava statement to the user: “the testing accuracy after cross validation is 0.7730.”) wherein the second natural language statement comprises the selected one or more metrics indicating an accuracy of the cognitive model, (Rogers – [pdf page 3] Fig. 2; Ava statement to the user: “there are 2 values for column target.” The supported binary classification algorithms are decision trees and logistic regression. These algorithms are in addition to being used with the “mean” metric.) Rogers does not explicitly teach: one or more errors, and a contribution of each error to the metric; the user requesting a suggestion about the result. However, Michalak teaches: one or more errors, and a contribution of each error to the one or more metrics (Michalak − [Col. 16-17, ll. 60-67, 1-27] identify an error resulting from a statistical language model trained using training data; the error may be an error that occurred when predictively annotating certain text data to have a particular value or label, “an error with a particular value or label in the data”.) and an indication to provide one or more visual elements that identify a location of each error within the first dataset; (Michalak − [Col. 17 ll.
10-15] The error may be an error that occurred when predictively annotating certain text data to have a particular value or label, for example a categorization error from named entity recognition models. [Col. 17, ll. 23-25] change log to record errors and changes to values or labels. The change log with annotation is one or more visual elements that identify a location of each error.) based on a third natural language user instruction from the user requesting a suggestion about the result, (Michalak − [Col. 17, ll. 30-35] can be implemented through the use of an autonomous trainer agent that performs internal training,) determine one or more actions addressing the one or more errors to the first dataset or the first cognitive algorithm, comprising combining two or more labels or refining text within a label; (Michalak − [Col. 17, ll. 5-20] cause a model training process 504 to correct the error and produce corrected, updated training data 520; generating, at least in part by a trained statistical language model, predictive labels that correspond to part-of-speech; the part-of-speech is two or more labels combined.) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers and Michalak to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success, since Rogers and Michalak are in the same field of endeavor of using natural language interfaces for machine learning and data analysis for tuning models. The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations.
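For context on the Rogers mapping, the workflow the examiner attributes to Ava (load a CSV, fill missing values with the mean, take a 60-40 train-test split, fit l2-penalized logistic regression, report accuracy) can be sketched with scikit-learn. This is an illustrative reconstruction on synthetic data, not Ava's actual generated code; the variable names and the synthetic dataset are hypothetical.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a CSV such as Rogers' train_sample.csv
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # binary target column
X[rng.random(X.shape) < 0.1] = np.nan          # simulate missing values

# "mean" strategy to fill in missing values, as Ava suggests
X = SimpleImputer(strategy="mean").fit_transform(X)

# 60-40 train-test split requested by the user
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

# Logistic regression with l2 penalty, scored by test accuracy
model = LogisticRegression(penalty="l2", solver="lbfgs").fit(X_tr, y_tr)
print(round(accuracy_score(y_te, model.predict(X_te)), 4))
```

The 0.7730 accuracy the Office Action quotes is specific to Rogers' own data and run; this sketch only mirrors the shape of the pipeline.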
Rogers does not explicitly teach: present a fourth natural language statement to the user recommending the one or more actions; However, Brennan teaches: present a fourth natural language statement to the user recommending the one or more actions; (Brennan − [0042] first computing system 14, or other NLP question answering system having a ground truth verification engine 16. [0030] present cluster reclassification recommendations for SME review. Reclassification of ground truth data is one action.) and based on a fourth natural language user instruction from the user specifying a change based on the one or more recommended actions, make the change to the cognitive model. (Brennan − [0053] At step 411, the ground truth verification method updates and retrains the model based on the SME verification or correction input. The processing at step 411 may be performed at a cognitive system, such as the QA system 101, first computing system 14, or other NLP question answering system.) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, and Brennan to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success. The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Rogers teaches using historical logs for making recommendations but does not explicitly teach: wherein the historical experiments are selected based on similarity in label characteristics to the first dataset; However, Kil teaches: wherein the historical experiments are selected based on similarity in label characteristics to the first dataset; (Kil − [0092] the data exploration dialog (750) may also include a data file history that provides a history of algorithms performed on a particular data file (740), including parameters.
Such a data file history may facilitate evaluating the performance of composite algorithms. The data exploration dialog (750) may further include the ability to select between several and various algorithms. After retrieving the history of algorithms (the historical experiments), the data exploration dialog (750), which is based on the data file (740), provides the ability to select one or more algorithms. Data file (740) parameters are labels.) Examiner notes: The algorithms are a part of the historical experiments used to train a model. Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Brennan, and Kil to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success. The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Rogers does not explicitly teach: one or more additional metrics and at least one additional metric recommended by a processor; However, Simard teaches: and at least one additional metric recommended by a processor; (Simard – [0041] The method involves a subsequent step 510 of displaying metrics for a set of documents that have been classified using the classification model. The method further entails a step 520 of displaying a user feedback guide that presents recommended actions to refine the classification model. The recommended actions (which take the form of recommendations, suggestions, tips, etc.) may be specific to one of the metrics or applicable to two or more metrics.
Examiner note: suggestions/recommendations/tips to one of the metrics for the user to select) present a second natural language statement to the user recommending one or more additional metrics based on the first dataset and prior learned behavior; (Simard – [0041] The method involves a subsequent step 510 of displaying metrics for a set of documents that have been classified using the classification model. The method further entails a step 520 of displaying a user feedback guide that presents recommended actions to refine the classification model. The recommended actions (which take the form of recommendations, suggestions, tips, etc.) may be specific to one of the metrics or applicable to two or more metrics. Examiner note: suggestions/recommendations/tips to one of the metrics for the user to select) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Brennan, Kil, and Simard to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success. The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Regarding dependent claim 3, which depends on claim 1, Rogers does not explicitly teach: wherein the processor is further configured to store the cognitive model in the memory. However, Brennan teaches: wherein the processor is further configured to store the cognitive model in the memory. (Brennan − the training of the first classifier model at step 403, which is stored in the memory/database storage 20.) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Brennan, and Kil to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success.
The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Regarding dependent claim 4, which depends on claim 1, Rogers teaches: wherein the program code includes an application programming interface and a user interface. (Rogers – [pdf page 3] Fig. 3; Ava, an intelligent chatbot that is aimed at simplifying this search process. It uses a natural language chat-based interface. [pdf page 5] template code invokes APIs in the machine learning platform) Regarding dependent claim 5, which depends on claim 1, Rogers does not explicitly teach: wherein the second natural language statement to the user recommending the change includes a recommended dataset. However, Brennan teaches: wherein the second natural language statement to the user recommending the change includes a recommended dataset. (Brennan − [0042] first computing system 14, or other NLP question answering system having a ground truth verification engine 16. [0030] present cluster reclassification recommendations for SME review. Reclassification of ground truth data is one action.) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Brennan, and Kil to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success. The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Regarding dependent claim 10, which depends on claim 1, Rogers teaches: wherein the fourth natural language user instruction designates a recommended metric. (Rogers − [pdf page 2] Fig. 2; the Ava chatbot asks: do you want to run the test? The user confirms by saying yes.)
Regarding dependent claim 11, which depends on claim 1, Rogers teaches: wherein the processor is further configured to execute the program code to run a plurality of experiments to generate a plurality of cognitive models. (Rogers – [pdf pages 2-3] Fig. 2 and Fig. 3) Regarding dependent claim 13, which depends on claim 1, Rogers teaches: wherein the processor is further configured to retrieve data known to be accurate to use in the cognitive model. (Rogers – [pdf pages 2-3] Fig. 2 and Fig. 3) Regarding dependent claim 14, which depends on claim 1, Rogers teaches: wherein the processor is further configured to create data known to be accurate to use in the cognitive model. (Rogers – [pdf page 2] Fig. 2; encode categorical features of the dataset to be used in the model) Independent claim 15 is directed to a method. Claim 15 has similar/same technical features/limitations as claim 1, and the claims are rejected under the same rationale. Regarding dependent claim 19, which depends on claim 1, Rogers teaches: comprising a fourth natural language statement reporting a metric to the user. (Rogers − [pdf page 2] Fig. 2; the Ava chatbot asks: do you want to run the test? The user confirms by saying yes.) Independent claim 20 is directed to a method. Claim 20 has similar/same technical features/limitations as claim 1, and the claims are rejected under the same rationale. Regarding dependent claim 21, which depends on claim 1, Rogers teaches: wherein the metric and the testing strategy are selected by the processor or by processing a fifth natural language user instruction from the user. (Rogers – [pdf page 3] Fig. 2; the user requests a test strategy by stating: do a 60-40 split… also called a train-test split. The user requests a metric by stating: Show me a ROC plot; a ROC plot is a classification error metric of the train_sample.csv dataset. Examiner Notes: a ROC plot is also known as a ROC curve.
The 60-40 split is the test strategy) Regarding dependent claim 22, which depends on claim 15, Rogers teaches: wherein the metric and the testing strategy are selected by a processor or by processing a fifth natural language user instruction from the user. (Rogers – [pdf page 3] Fig. 2; the user requests a test strategy by stating: do a 60-40 split… also called a train-test split. The user requests a metric by stating: Show me a ROC plot; a ROC plot is a classification error metric of the train_sample.csv dataset. Examiner Notes: a ROC plot is also known as a ROC curve. The 60-40 split is the test strategy) Regarding dependent claim 23, which depends on claim 20, Rogers teaches: wherein the program product comprises an application programming interface and a user interface. (Rogers – [pdf page 3] Fig. 3; Ava, an intelligent chatbot that is aimed at simplifying this search process. It uses a natural language chat-based interface. [pdf page 5] template code invokes APIs in the machine learning platform) Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Rogers, Michalak, Brennan, Kil, and Simard as applied to claim 1 above, and further in view of Charlap (US PGPUB: 20170300648, hereinafter “Charlap”). Regarding dependent claim 2, which depends on claim 1, Rogers does not explicitly teach: wherein the processor is further configured to translate natural language user instructions into a machine representation system language. However, Charlap teaches: wherein the processor is further configured to translate the first, second, third and fourth natural language user instructions into a machine representation system language. (Charlap − [0044] when the action to be taken involves a verbal response by the VA, the response is provided to the natural language generation (NLG) module 130 to convert the machine response into a natural language response.
The NLG 130 generates an output indicating a realized action, e.g., having the VA speak and/or present on the screen forms or menus for the user's response.) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Brennan, Kil, Simard, and Charlap to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success. The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Claims 24-27 are rejected under 35 U.S.C. 103 as being unpatentable over Rogers, Michalak, Brennan, Kil, and Simard as applied to claims 1 and 15 above, and further in view of Mohajer (US PGPUB: 20190043493 A1, Filed Date: Aug. 7, 2017). Regarding dependent claim 24, which depends on claim 1, Rogers does not explicitly teach: wherein, to suggest the first cognitive algorithm for the user based on characteristics of the first dataset, the processor is configured to execute the program code to suggest a classification algorithm in response to determining that the first dataset has binary labels. However, Mohajer teaches: wherein, to suggest the first cognitive algorithm for the user based on characteristics of the first dataset, the processor is configured to execute the program code to suggest a classification algorithm in response to determining that the first dataset has binary labels. (Mohajer − [0113] Recommendation engine 118 performs regression for parameters having enumerable values. Enumerable values are binary labels.) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Brennan, Kil, Simard, and Mohajer to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success.
The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Regarding dependent claim 25, which depends on claim 1, Rogers does not explicitly teach: wherein, to suggest the first cognitive algorithm for the user based on characteristics of the first dataset, the processor is configured to execute the program code to suggest a regression algorithm in response to determining that the first dataset has continuous labels. However, Mohajer teaches: wherein, to suggest the first cognitive algorithm for the user based on characteristics of the first dataset, the processor is configured to execute the program code to suggest a regression algorithm in response to determining that the first dataset has continuous labels. (Mohajer − [0113] Recommendation engine 118 performs regression for parameters having continuous values. Continuous values are continuous labels.) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Brennan, Kil, Simard, and Mohajer to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success. The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Regarding dependent claim 26, which depends on claim 15, Rogers does not explicitly teach: wherein suggesting the first cognitive algorithm for the user based on characteristics of the first dataset comprises suggesting a classification algorithm in response to determining that the first dataset has binary labels.
However, Mohajer teaches: wherein suggesting the first cognitive algorithm for the user based on characteristics of the first dataset comprises suggesting a classification algorithm in response to determining that the first dataset has binary labels. (Mohajer − [0113] Recommendation engine 118 performs regression for parameters having enumerable values. Enumerable values are binary labels.) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Brennan, Kil, Simard, and Mohajer to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success. The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations. Regarding dependent claim 27, which depends on claim 15, Rogers does not explicitly teach: wherein suggesting the first cognitive algorithm for the user based on characteristics of the first dataset comprises suggesting a regression algorithm in response to determining that the first dataset has continuous labels. However, Mohajer teaches: wherein suggesting the first cognitive algorithm for the user based on characteristics of the first dataset comprises suggesting a regression algorithm in response to determining that the first dataset has continuous labels. (Mohajer − [0113] Recommendation engine 118 performs regression for parameters having continuous values. Continuous values are continuous labels.) Accordingly, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have combined Rogers, Michalak, Brennan, Kil, Simard, and Mohajer to provide a system for training model(s) and testing the trained model(s), with a reasonable expectation of success.
The motivation to combine provides the benefit of improved model accuracy by identifying errors and providing recommendations.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARL E BARNES JR, whose telephone number is (571) 270-3395. The examiner can normally be reached Monday-Friday, 9am-6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CARL E BARNES JR/Examiner, Art Unit 2178 /STEPHEN S HONG/Supervisory Patent Examiner, Art Unit 2178
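Editor's note on claims 24-27 above: the dispositive limitation is a heuristic that suggests a classification algorithm when the first dataset has binary labels and a regression algorithm when it has continuous labels. A minimal sketch of that heuristic, assuming NumPy; the function name and the returned suggestion strings are hypothetical, for illustration only.

```python
import numpy as np

def suggest_algorithm(labels: np.ndarray) -> str:
    """Suggest an algorithm family from label characteristics (illustrative)."""
    unique = np.unique(labels)
    if unique.size == 2:
        # Binary labels -> classification, per the claim 24/26 limitation
        return "classification (e.g., logistic regression, decision tree)"
    if np.issubdtype(labels.dtype, np.floating):
        # Continuous labels -> regression, per the claim 25/27 limitation
        return "regression"
    return "classification (multi-class)"

print(suggest_algorithm(np.array([0, 1, 1, 0])))      # binary labels
print(suggest_algorithm(np.array([0.2, 1.7, 3.14])))  # continuous labels
```

Whether Mohajer's "enumerable values" actually map onto binary labels, as the examiner asserts, is a separate claim-construction question this sketch does not address.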

Prosecution Timeline

Jul 19, 2018
Application Filed
Mar 23, 2022
Non-Final Rejection — §103, §112, §DP
Jun 24, 2022
Response Filed
Oct 12, 2022
Final Rejection — §103, §112, §DP
Dec 14, 2022
Applicant Interview (Telephonic)
Dec 14, 2022
Examiner Interview Summary
Dec 19, 2022
Response after Non-Final Action
Jan 19, 2023
Examiner Interview (Telephonic)
Jan 19, 2023
Response after Non-Final Action
Feb 23, 2023
Request for Continued Examination
Feb 25, 2023
Response after Non-Final Action
Aug 04, 2023
Non-Final Rejection — §103, §112, §DP
Nov 03, 2023
Applicant Interview (Telephonic)
Nov 06, 2023
Examiner Interview Summary
Nov 09, 2023
Response Filed
Feb 12, 2024
Final Rejection — §103, §112, §DP
Mar 18, 2024
Interview Requested
Apr 11, 2024
Applicant Interview (Telephonic)
Apr 16, 2024
Response after Non-Final Action
Apr 16, 2024
Examiner Interview Summary
May 01, 2024
Response after Non-Final Action
May 10, 2024
Request for Continued Examination
May 16, 2024
Response after Non-Final Action
Nov 07, 2024
Non-Final Rejection — §103, §112, §DP
Jan 28, 2025
Examiner Interview Summary
Jan 28, 2025
Applicant Interview (Telephonic)
Feb 14, 2025
Response Filed
Mar 13, 2025
Final Rejection — §103, §112, §DP
May 25, 2025
Applicant Interview (Telephonic)
May 27, 2025
Response after Non-Final Action
May 29, 2025
Examiner Interview Summary
Jun 25, 2025
Request for Continued Examination
Jul 01, 2025
Response after Non-Final Action
Sep 15, 2025
Non-Final Rejection — §103, §112, §DP
Dec 16, 2025
Applicant Interview (Telephonic)
Dec 17, 2025
Examiner Interview Summary
Dec 18, 2025
Response Filed
Feb 21, 2026
Final Rejection — §103, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12584932
SLIDE IMAGING APPARATUS AND A METHOD FOR IMAGING A SLIDE
2y 5m to grant Granted Mar 24, 2026
Patent 12541640
COMPUTING DEVICE FOR MULTIPLE CELL LINKING
2y 5m to grant Granted Feb 03, 2026
Patent 12536464
SYSTEM FOR CONSTRUCTING EFFECTIVE MACHINE-LEARNING PIPELINES WITH OPTIMIZED OUTCOMES
2y 5m to grant Granted Jan 27, 2026
Patent 12530765
SYSTEMS AND METHODS FOR CALCIUM-FREE COMPUTED TOMOGRAPHY ANGIOGRAPHY
2y 5m to grant Granted Jan 20, 2026
Patent 12530523
METHOD, APPARATUS, SYSTEM, AND COMPUTER PROGRAM FOR CORRECTING TABLE COORDINATE INFORMATION
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
32%
Grant Probability
57%
With Interview (+25.2%)
4y 4m
Median Time to Grant
High
PTA Risk
Based on 202 resolved cases by this examiner. Grant probability derived from career allow rate.
