Prosecution Insights
Last updated: April 19, 2026
Application No. 17/273,762

DATA ANALYZER

Status: Final Rejection (§103)
Filed: Mar 05, 2021
Examiner: KHAN, SHAHID K
Art Unit: 2146
Tech Center: 2100 — Computer Architecture & Software
Assignee: Shimadzu Corporation
OA Round: 4 (Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 11m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 74% (287 granted / 389 resolved); +18.8% vs TC avg (above average)
Interview Lift: +15.7% for resolved cases with interview
Typical Timeline: 2y 11m average prosecution; 31 applications currently pending
Career History: 420 total applications across all art units

Statute-Specific Performance

§101: 10.0% (-30.0% vs TC avg)
§103: 55.7% (+15.7% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 389 resolved cases
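The per-statute deltas can be cross-checked against a single Tech Center baseline: subtracting each "vs TC avg" delta from the examiner's rate recovers the same estimated TC average for every statute. A minimal sketch, using only the numbers shown above:

```python
# Cross-check: examiner allowance rate minus its "vs TC avg" delta should
# recover the same Tech Center baseline estimate for every statute.
rates = {            # statute: (examiner allowance %, delta vs TC avg %)
    "101": (10.0, -30.0),
    "103": (55.7, +15.7),
    "102": (16.5, -23.5),
    "112": (15.2, -24.8),
}

baselines = {s: rate - delta for s, (rate, delta) in rates.items()}
print(baselines)  # every statute implies a TC average of about 40.0%
```

All four rows imply the same ~40% baseline, which suggests the deltas were computed against one Tech Center-wide average rather than per-statute averages.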

Office Action

§103
DETAILED ACTION

This communication is in response to the amendment filed 11/25/25, in which claims 1-3 and 6-16 were amended and claim 17 was newly presented. Claims 1-3 and 6-17 are pending. Claims 4-5 were previously canceled.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to claim 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1, 7, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Yu (US 9,607,272 B1; patented Mar. 28, 2017) in view of Xu (US 2020/0027157 A1; published Jan. 23, 2020).

Regarding claim 1, Yu discloses [a] data analysis device that constructs a machine learning model based on pieces of labeled teacher data for a plurality of sample data and identifies and labels an unknown sample data using the machine learning model, the data analysis device comprising: (Yu 3:42-57 (“The training documents may then be used to train a classification model for the predictive coding system. Once the classification model has been trained (e.g., to generate a first trained classification model), the effectiveness of the predictive coding system can be determined for a set of validation documents selected from the corpus of electronic discovery documents. The effectiveness of the predictive coding system can be based on the quality of the trained classification model once the classification model has been trained, and can be determined by comparing a predictive coding system classification for each validation document and a user classification for each validation document.
Therefore, the quality of the training documents is crucial to the quality and effectiveness of the trained classification model and the effectiveness of the predictive coding system that uses the trained classification model.”))

a memory; and a processor, (Yu 16:10-18 (“The exemplary computer system 500 includes a processing device (processor) 502, a main memory 504 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), double data rate (DDR SDRAM), or DRAM (RDRAM), etc.), a static memory 506 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 518, which communicate with each other via a bus 530.”))

wherein the memory stores pieces of teacher data to which a correct label is labeled, the pieces of teacher data including first model construction data, second model construction data, and model verification data, the model verification data being labeled with a first label, (Yu 4:42-54 (“The data in the electronic discovery documents data repository 120 can include a corpus of electronic discovery documents that need to be reviewed and classified. Examples of electronic discovery documents can include, and are not limited to, electronic discovery documents which have been divided into a set of training documents that have been selected by an administrator (document reviewer, etc.), a set of validation documents that have been selected by an administrator (document reviewer, etc.), an unlabeled remainder of electronic discovery documents that need to be classified or labeled, and any other electronically stored information that can be associated with electronic discovery documents, etc.”))

wherein the processor performs: a first process of constructing the machine learning model using the first model construction data; (Yu 5:7-36 (“During operation of system 100, a predictive coding system 110 can train a (untrained) classification model 140, to generate a (first) trained classification model 145. To train the classification model 140, an initial training set of documents is needed by the predictive coding system 110. To generate the initial training set, the predictive coding system defines a set of query/search terms based on a topic of interest. The topic of interest can be provided by a user or administrator of the predictive coding system 110. The predictive coding system 110 can perform a search (keyword search and/or concept search) with the (stemmed) terms on the electronic discovery documents in electronic discovery documents data repository 120 and can return documents based on the search. In one embodiment, the predictive coding system 110 selects all documents returned by the search as training documents. In an alternate embodiment, the predictive coding system 110 selects a predetermined number of documents returned by the search as the training documents. For example, the predictive coding system 110 can select 1000 random documents from the documents returned by the search as training documents. The predictive coding system 110 can cause a user interface to be presented to an administrator or reviewer via client device 102A-102N. The user interface can present the training documents to the administrator or reviewer and request one or more inputs from the administrator or reviewer on the client device 102A-102N over network 104, such as a label or a classification for each training document (e.g., confidential, not confidential, relevant, not relevant, privileged, not privileged, responsive, not responsive, etc.).”))

a second process of applying the machine learning model to the model verification data to label an estimated label; (Yu 5:45-52 (“The validation documents 160 can include a set of validation documents used to validate the trained classification model 145 in the predictive coding system 110. The trained classification model 145 can classify the set of validation documents in the predictive coding system 110. The predictive coding system 110 can further present the set of validation documents to an administrator or reviewer via client device 102A-102N over network 104.”))

a third process of retraining the machine learning model using the second model construction data; and (Yu 5:66-6:4 (“In one embodiment, the predictive coding system 110 includes a training data generation module 130. The training data generation module 130 can incrementally enhance the training set for predictive coding in one or more iterations, and retrain the classification model with the enhanced training set in each iteration.”); Yu 6:58-62 (“In one embodiment, the predictive coding system 110 retrains the classification model 140 based on only the set of updated training documents in training documents 150, and may not be based on any previous training of the classification model 140. For example, if the classification model was previously trained using documents A1, . . . , A1000, and the updated set of training documents contains documents A1, . . . , A1000 and B1, . . . , B100, the classification model is retrained using documents A1, . . . , A1000 and B1, . . . , B100, without the use of any previous version of the trained classification model that was built using documents A1, . . . , A1000.”))

a fourth process of repeating the second process after the third process, wherein a combination of the third process and the fourth process is performed at least once, (Yu 17-36 (“If the training data generation module 130 determines that the classification model 140 should be retrained, the training data generation module 130 can generate training data from a subset of the unlabeled documents in unlabeled documents 170 and provide the training data to the predictive coding system 110 to cause the classification model 140 to be retrained by generating a new trained classification model 145 (e.g., a second trained classification model) that has an improved effectiveness than the previous trained classification model 145 (e.g., first trained classification model). In each iteration, the training data generation module 130 can generate the training data by selecting a predetermined number of additional documents from unlabeled documents 170 as training data. The training data generation module 130 can select each of the additional documents by randomly choosing a group of unlabeled documents from unlabeled documents 170, calculating a score for each document in the chosen group of unlabeled documents, and selecting the document in the chosen group of unlabeled documents with the lowest score.”))

wherein the processor counts a number of misidentifications in which the estimated label does not coincide with the first label, after the combination of the third process and the fourth process is performed at least once, and (Yu 6:4-8 (“The training data generation module 130 can determine an effectiveness of the trained classification model 145 and determine whether the classification model 140 (e.g., untrained classification model) should be retrained.”); Yu 10:40-43 (“For example, the effectiveness measure can be a precision of the trained classification model, a recall of the trained classification model, an F-measure of the trained classification model, etc.”); Yu 10:56-63 (“The precision for the trained classification model can be defined as: precision=TP/(TP+FP), where TP is the number of true positives in the set of validation documents, and FP is the number of false positives in the set of validation documents.”)).

Yu does not expressly disclose wherein the processor determines that one of the plurality of the sample data is in a mislabeled state when the number of misidentifications, or a misidentification rate derived therefrom, is equal to or higher than a threshold (but see Xu ¶ 64 (“If the lead management system 102 determines that the scoring split of the original dataset meets the threshold (e.g., has an effective/accurate scoring split), the lead management system 102 can use the original dataset 304 for training/updating the lead scoring model 300. Meeting the split threshold can indicate that the reject rate of the original dataset is small with a small mislabel rate. For example, a small reject rate indicates that a small number of unengaged rejected leads are incorrectly labeled. Conversely, a large reject rate indicates that a large number of unengaged leads are incorrectly labeled due to the dataset containing a greater number of rejected leads.”)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu to incorporate the teachings of Xu to determine that training documents are incorrectly labeled if the number or rate of false positive (and/or false negative) classifications is greater than a threshold, at least because doing so would reduce bias in the trained model. See Xu ¶ 3.

Regarding claim 7, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu further discloses wherein the processor uses a support vector machine as a machine learning technique (Yu 5:37-44 (“The predictive coding system 110 can add each of the documents labeled by the user to a set of training documents, such as training documents 150 in the electronic discovery documents data repository 120. The predictive coding system 110 can train an untrained classification model, such as classification model 140 (e.g., an SVM model) using the set of training documents in training documents 150 to generate a trained classification model 145.”)).

Regarding claim 9, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu further discloses wherein the processor uses a linear discrimination method as a machine learning technique (Yu 5:37-44 (“The predictive coding system 110 can add each of the documents labeled by the user to a set of training documents, such as training documents 150 in the electronic discovery documents data repository 120.
The predictive coding system 110 can train an untrained classification model, such as classification model 140 (e.g., an SVM model) using the set of training documents in training documents 150 to generate a trained classification model 145.”)).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yu and Xu as applied to claim 1 above, and further in view of Nicholson, Bryce, et al., "Label noise correction methods," 2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA), IEEE, 2015 (“Nicholson”).

Regarding claim 6, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein the processor uses random forest as a machine learning technique (but see Nicholson § I, Introduction (“Another group of methods that implicitly handle label noise, using classification methods that are robust to mislabeled data, are bagging, boosting, and random forests [8], Bayesian approaches—e.g., [9]—among many others”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu to incorporate the teachings of Nicholson to use random forests as the classification model, at least because random forests are robust to mislabeled data.

Claims 8 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Yu and Xu as applied to claim 1 above, and further in view of Ben-Hur (US 2010/0205124 A1; published Aug. 12, 2010).

Regarding claim 8, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein the processor uses a neural network as a machine learning technique (but see Ben-Hur ¶ 4 (“Machine-learning approaches, which include neural networks, hidden Markov models, belief networks and kernel-based classifiers such as support vector machines, are ideally suited for domains characterized by the existence of large amounts of data, noisy patterns and the absence of general theories.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu to incorporate the teachings of Ben-Hur to use a neural network as the classification model, at least because a neural network is a type of classification model like SVM.

Regarding claim 10, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein the processor uses a non-linear discrimination method as a machine learning technique (but see Ben-Hur ¶ 4 (“Machine-learning approaches, which include neural networks, hidden Markov models, belief networks and kernel-based classifiers such as support vector machines, are ideally suited for domains characterized by the existence of large amounts of data, noisy patterns and the absence of general theories.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu to incorporate the teachings of Ben-Hur to use a non-linear discrimination method, such as a neural network, as the classification model, at least because a neural network is a non-linear classification model of the same character as SVM.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Yu and Xu as applied to claim 1 above, and further in view of Karlov (US 2003/0065535 A1; published Apr. 3, 2003).

Regarding claim 2, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein the processor removes the one of the plurality of sample data determined to be in the mislabeled state from the pieces of teacher data to generate an updated pieces of teacher data, and performs the combination of the third process and the fourth process using the updated pieces of teacher data (but see Karlov ¶ 10 (“In another aspect of the invention, a method is provided for identifying a patient disease diagnosis that appears to be mislabeled. Each patient who has contributed a data record to the clinical data, including one or more test results, will be associated with a clinical disease diagnosis. The possibility of a mislabeling is identified when a data analysis such as described above is performed and a set of probability density functions (pdf) are produced that can provide a hypothesized disease diagnosis for each patient, as well as for new patients. This analysis can identify a patient to whom one or more of the tests was administered, but for whom the disease diagnosis predicted by the inventive method is different from the clinical diagnosis assigned to that patient. If a clinical diagnosis is determined to be mislabeled, that patient's data record can be removed from consideration in performing a future iteration of the estimation technique in accordance with the invention.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu and Xu to incorporate the teachings of Karlov to remove documents from the training data set if it is determined that the documents were mislabeled, at least because doing so would improve the accuracy of the retrained model.

Claims 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Yu and Xu as applied to claim 1 above, and further in view of Miranda, André L. B., et al., "Use of classification algorithms in noise detection and elimination," Hybrid Artificial Intelligence Systems: 4th International Conference, HAIS 2009, Salamanca, Spain, June 10-12, 2009, Proceedings 4, Springer Berlin Heidelberg, 2009 (“Miranda”).

Regarding claim 11, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein the processor determines that one of the pieces of teacher data having a highest misidentification rate is incorrect (but see Miranda pp. 317-319 (an ensemble method for noise elimination in classification problems in which an instance is removed from a training set if it cannot be classified correctly by all (i.e., 100%), or the majority of, the classifiers built on parts of the training set)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu and Xu to incorporate the teachings of Miranda to identify mislabeled instances based on the false positive (or false negative) rate, at least because doing so would most improve the accuracy of the classification model.

Regarding claim 12, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein the processor determines that pieces of the teacher data as many as a number specified by a user in descending order of a misidentification rate are incorrect (but see Miranda pp. 317-319 (an ensemble method for noise elimination in classification problems in which an instance is removed from a training set if it cannot be classified correctly by all (i.e., 100%), or the majority of, the classifiers built on parts of the training set)). Yu is combinable with Miranda for the same reasons as set forth above.

Regarding claim 13, Yu, in view of Xu, discloses the invention of claim 1 as discussed above.
Yu does not expressly disclose wherein the processor determines that one of the pieces of teacher data having a misidentification rate of 100% is incorrect (but see Miranda pp. 317-319 (an ensemble method for noise elimination in classification problems in which an instance is removed from a training set if it cannot be classified correctly by all (i.e., 100%), or the majority of, the classifiers built on parts of the training set)). Yu is combinable with Miranda for the same reasons as set forth above.

Regarding claim 14, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein the processor determines that one of the pieces of teacher data whose misidentification rate is equal to or higher than a threshold set by a user is incorrect (but see Miranda pp. 317-319 (an ensemble method for noise elimination in classification problems in which an instance is removed from a training set if it cannot be classified correctly by all (i.e., 100%), or the majority of, the classifiers built on parts of the training set)). Yu is combinable with Miranda for the same reasons as set forth above.

Regarding claim 15, Yu, in view of Xu and Miranda, discloses the invention of claim 2 as discussed above. Yu does not expressly disclose wherein generating the updated teacher data is repeated until a misidentification rate becomes equal to or lower than a predetermined threshold (but see Miranda p. 420 (“New classifiers are then trained using the new training data sets, and their accuracies are again evaluated using the validation folds. If the new performance recorded is better than that obtained previously, the preprocessing cycle is repeated. Pre-processing stops when a performance degradation occurs.”)). Yu is combinable with Miranda for the same reasons as set forth above.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yu and Xu as applied to claim 1 above, and further in view of Zhang, Y., Noise Tolerant Data Mining (2008) (Ph.D. dissertation, The University of Vermont), available at https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=56948dc4ff87bdc4605d5eb4d06085be86aba12b (“Zhang”).

Regarding claim 3, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein the first model construction data, the second model construction data, and the model verification data are obtained by dividing the pieces of teacher data in random (but see Zhang pp. 16-17 (a well-known method for handling noise is to detect and discard the instances which are subject to noise according to certain evaluation methods: “The essential idea is using m learning algorithms to filter out the instances that are prone to labeling errors. The method first splits the training data into n parts, like an n-fold cross-validation. For each of the n parts, the m filtering algorithms are trained on the other n−1 parts. The m resulting classifiers are then used to predict on instances in the excluded part, and finally decide whether the instances are correctly labeled or not. All of the instances identified as mislabeled are removed, and the filtered set of training instances is provided as the input to the final learning algorithm.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu to incorporate the teachings of Zhang to prune the mislabeled data and retrain/retest the model using the pruned data, at least because doing so would improve the accuracy of the classifier.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Yu and Xu as applied to claim 1 above, and further in view of Drucker (US 2011/0307422 A1; published Dec. 15, 2011).
Regarding claim 16, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein the processor creates a table or a graph based on an identification result and displays the table or graph on a display (but see Drucker ¶ 48 (“FIG. 6 illustrates a graph 600 plotting incorrectness versus entropy using a plurality of data points. It can be seen from FIG. 6 that some of the data falls within the canonical area 510, some data falls within the unsure area 520, and some data falls within the confused area 530.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu to incorporate the teachings of Drucker to create a graph plotting incorrectness versus entropy of the data points, at least because doing so would enable a user to obtain information from the graph based on where the data falls on the graph.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Yu and Xu as applied to claim 1 above, and further in view of Lam (US 2002/0107712 A1; published Aug. 8, 2002).

Regarding claim 17, Yu, in view of Xu, discloses the invention of claim 1 as discussed above. Yu does not expressly disclose wherein upon determining that any one of the plurality of sample data is mislabeled, the processor either (i) removes corresponding model verification data from the pieces of teacher data or (ii) relabels the first label (but see Lam ¶ 117 (“When validation shows that results don't meet expectations, there are three immediate options for correction. The simple option, but possibly adequate, is to get more data. Another option is to fix mislabeled data on hand, retaining the existing categorization scheme. The option that takes more cognitive effort is to rethink the categorization scheme in the light of the technology, keeping in mind the feasibility of relabeling the data.”)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu and Xu to incorporate the teachings of Lam to fix the mislabeled data, at least because doing so would ensure the adequacy of the classification model in light of the latest training data. See Lam ¶ 32.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHID KHAN whose telephone number is (571)270-0419. The examiner can normally be reached M-F, 9-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed, can be reached at (571)272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHAHID K KHAN/
Primary Examiner, Art Unit 2146
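The train/verify/retrain loop that the rejection maps onto Yu and Xu can be sketched in a few lines. This is a minimal illustration of the claimed processes only; the 1-nearest-neighbour "model" and all data below are hypothetical stand-ins, not Yu's SVM pipeline or Xu's lead-scoring system:

```python
# Sketch of the claim 1 loop: construct a model from first construction data
# (first process), label the verification data with it (second process),
# retrain with added second construction data (third process), label again
# (fourth process), then flag any verification sample whose estimated label
# disagrees with its assigned "first label" at or above a threshold rate.

def train(rows):
    # For 1-NN, "training" is just memorizing (feature_vector, label) pairs.
    return list(rows)

def predict(model, x):
    # Label of the closest memorized point (squared Euclidean distance).
    return min(model, key=lambda r: sum((a - b) ** 2 for a, b in zip(r[0], x)))[1]

def count_misses(model, data):
    # 1 where the estimated label does not coincide with the first label.
    return [int(predict(model, x) != label) for x, label in data]

first_construction  = [((0.0, 0.0), "neg"), ((1.0, 1.0), "pos")]
second_construction = [((0.1, 0.0), "neg"), ((0.9, 1.0), "pos")]
# Verification data with its first label; the third sample is deliberately
# mislabeled ("neg" despite pos-like features).
verification = [((0.0, 0.1), "neg"), ((1.0, 0.9), "pos"), ((0.95, 1.0), "neg")]

model = train(first_construction)                        # first process
misses = count_misses(model, verification)               # second process
model = train(first_construction + second_construction)  # third process: retrain
misses = [a + b for a, b in zip(misses, count_misses(model, verification))]  # fourth

rounds, threshold = 2, 1.0
mislabeled = [i for i, m in enumerate(misses) if m / rounds >= threshold]
print(mislabeled)  # → [2]: only the deliberately mislabeled sample is flagged
```

The per-sample miss counts play the role of the claimed "number of misidentifications"; Yu's precision formula (precision = TP/(TP+FP)) aggregates the same disagreements over the whole validation set rather than per sample, which is the gap the examiner fills with Xu's threshold test.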

Prosecution Timeline

Mar 05, 2021: Application Filed
May 17, 2024: Non-Final Rejection — §103
Aug 13, 2024: Response Filed
Nov 06, 2024: Final Rejection — §103
May 16, 2025: Response after Non-Final Action
Jun 23, 2025: Request for Continued Examination
Jul 16, 2025: Response after Non-Final Action
Jul 26, 2025: Non-Final Rejection — §103
Nov 25, 2025: Response Filed
Mar 13, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591768: DEEP LEARNING ACCELERATION WITH MIXED PRECISION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12579516: System and Method for Organizing and Designing Comment (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566813: SYSTEMS AND METHODS FOR RENDERING INTERACTIVE WEB PAGES (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547298: Display Method and Electronic Device (granted Feb 10, 2026; 2y 5m to grant)
Patent 12530916: MULTIMODAL MULTITASK MACHINE LEARNING SYSTEM FOR DOCUMENT INTELLIGENCE TASKS (granted Jan 20, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 74%
With Interview (+15.7%): 90%
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 389 resolved cases by this examiner. Grant probability derived from career allow rate.
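The headline projections follow directly from the career numbers cited in the note above. A minimal check, assuming the tool rounds the allow rate first and then treats the interview lift as additive percentage points (that combination rule is an inference, not something the page states):

```python
# Reproduce the projection figures from the underlying career data.
granted, resolved = 287, 389
allow_rate = 100 * granted / resolved    # 73.77...%
headline = round(allow_rate)
print(headline)                          # → 74, the displayed grant probability

interview_lift = 15.7                    # percentage points, per the examiner card
with_interview = round(headline + interview_lift)
print(with_interview)                    # → 90, the "With Interview" projection
```

Note the raw (unrounded) rate plus the lift gives 89.5%, so the displayed 90% only falls out if rounding happens before the lift is applied.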
