Prosecution Insights
Last updated: April 19, 2026
Application No. 17/077,114

METHOD OF MACHINE-LEARNING BY COLLECTING FEATURES OF DATA AND APPARATUS THEREOF

Status: Final Rejection (§103), OA Round 6 (Final)
Filed: Oct 22, 2020
Examiner: LEY, SALLY THI
Art Unit: 2147
Tech Center: 2100 — Computer Architecture & Software
Assignee: LUNIT INC.
Grant Probability: 15% (At Risk)
Expected OA Rounds: 7-8
Estimated Time to Grant: 3y 10m
Grant Probability With Interview: 44%

Examiner Intelligence

Career Allow Rate: 15% (5 granted / 33 resolved; -39.8% vs TC avg)
Interview Lift: +28.8% allowance with vs. without an interview, among resolved cases with an interview
Typical Timeline: 3y 10m average prosecution; 35 applications currently pending
Career History: 68 total applications across all art units
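As a sanity check, the headline examiner metrics above can be recomputed from the raw counts reported in this section. The Tech Center average used below is an assumption, back-solved from the printed -39.8% delta rather than taken from any official source.

```python
# Recompute the examiner metrics shown above from the raw counts.
# NOTE: tc_avg is an assumption, inferred from the printed -39.8% delta.

granted, resolved = 5, 33
allow_rate = granted / resolved            # career allow rate

with_interview = 0.44                      # allow rate for resolved cases with an interview
interview_lift = with_interview - allow_rate

tc_avg = 0.55                              # assumed Tech Center average (inferred)

print(f"career allow rate: {allow_rate:.1%}")           # 15.2%, shown rounded to 15%
print(f"interview lift:    {interview_lift:+.1%}")      # +28.8%
print(f"vs TC average:     {allow_rate - tc_avg:+.1%}") # -39.8%
```

The 15% headline figure is 5/33 rounded down; the +28.8% lift is simply the with-interview allowance rate minus the career rate.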

Statute-Specific Performance

§101: 29.2% (-10.8% vs TC avg)
§103: 50.2% (+10.2% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 33 resolved cases
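The per-statute deltas are plain differences from the Tech Center average estimate; notably, all four printed deltas back out the same estimate of 40.0%:

```python
# Per-statute overcome rates from the table above (percent).
rates = {"§101": 29.2, "§103": 50.2, "§102": 10.8, "§112": 9.8}
TC_AVG = 40.0  # Tech Center average estimate implied by every printed delta

for statute, rate in rates.items():
    print(f"{statute}: {rate:.1f}% ({rate - TC_AVG:+.1f}% vs TC avg)")
```

So the only statute where this examiner beats the Tech Center average is §103, the very basis of the current final rejection.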

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office Action is in response to the communication filed on 24 Nov 2025. Claims 21-22, 24, 26, 28-30, 32, 34, 36, 38, 40, and 43-44 are being considered on the merits.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21-22, 24, 26, 28-30, 32, 34, 36, 38, 40, and 43-44 are rejected under 35 U.S.C. 103 as being unpatentable over Szeto et al. (US 2018/0018590 A1; hereinafter, “Szeto”), in view of Holtham, Elliot Mark (US 2018/0247227 A1; hereinafter, “Holtham”).

Regarding claims 21 and 29, Szeto teaches: A method for operating an artificial intelligence (AI) model by a computing device, (Szeto, para.
0017: “For the purposes of this application, it is understood that the term “machine learning” refers to artificial intelligence systems configured to learn from data without being explicitly programmed. Such systems are understood to be necessarily rooted in computer technology, and in fact, cannot be implemented or even exist in the absence of computing technology.”)

the method comprising: obtaining a plurality of sub-AI models (Szeto, para. 0127: “The modeling engine can receive local training data in the form of original images along with other training information (e.g., annotations, classifications, scene descriptions, locations, time, settings, camera orientations, etc.) as defined according to the modeling instructions. The modeling engine can then create local trained actual models from the original images and training information.”) that correspond to a plurality of medical institutes, respectively, wherein the plurality of sub-AI models respectively are trained using medical data stored at a corresponding medical institute; (Szeto, para. 0043 and Fig. 1: “For example, if a medical institute has multiple locations and/or affiliations, e.g., a main hospital, physician offices, clinics, a secondary hospital, a hospital affiliation, each of these entities could have their own private data 122, private data server 124 and modeling engine 126, which may all be visible to each other, but not to a different entity.”)

generating an integrated AI model by using at least one sub-AI models selected from among the plurality of sub-AI models; (Szeto, para. 0019: “a central server or global modeling engine, for example, that can integrate the salient private data features with other data sets to create an aggregated model.”)

configuring the integrated AI model as a plurality of first AI models that correspond to the plurality of medical institutes, respectively, each first AI model configured to analyze medical data stored at a corresponding medical institute; (Szeto, para. 0053 and Fig. 2: “FIG. 2 is an illustration of an example architecture including private data server 224 within an entity 220 with respect to its machine learning activities. The example presented in FIG. 2 illustrates the inventive concepts from the perspective of how private data server 224 interacts with a remote computing device and private data 222. In more preferred embodiments, private data 222 comprises local private healthcare data, or more specifically includes patient-specific data (e.g., name, SSN, normal WGS, tumor WGS, genomic diff objects, a patient identifier, etc.). Entity 220 typically is an institution having private local raw data and subject to restrictions as discussed above. Example entities include hospitals, labs, clinics, pharmacies, insurance companies, oncologist offices, or other entities having locally stored data.”)

obtaining feature data (Szeto, para. 0074: “Proxy data 260 can be considered synthetic data randomly generated, in some cases deterministically generated, that retains the learnable salient features (i.e., knowledge) of the training data while eliminating the references to real information stored in private data 222.”) from the plurality of first AI models (Szeto, para. 0099: “Operation 510 begins by configuring a private data server operating as a modeling engine to receive model instructions (e.g., from a private data server 124 or from central/global server 130) to create a trained actual model 240 from at least some local private data and according to an implementation of at least one machine learning algorithm.”)

is de-identified through lossy compression performed by the at least one layer of the plurality of layers, and (Szeto, para. 0076: “In some aspects, the trained machine learning model and proxy data generation can be considered as a form of lossy compression. Similar to lossy compression, transformation of original data into proxy data preserves key characteristics of the data, but does not retain granularity with regard to individual patients.”)

is generated based on at least one sub-feature data obtained from the plurality of sub-AI models; (Szeto, para. 0019: “From the private data distributions, the machine learning engine can identify or otherwise calculate one or more salient private data features that describe the nature of the private data distributions. Depending upon the type of distribution, example features could include sample data, a mean, a mode, an average, a width, a half-life, a slope, a moment, a histogram, higher order moments, or other types of features. In some, more specific embodiments, the salient private data features could include proxy data. Once the salient features are available, the machine learning engine transmits the salient private data features over a network to a non-private computing device; a central server or global modeling engine, for example, that can integrate the salient private data features with other data sets to create an aggregated model. Thus, multiple private peers are able to share their learned knowledge without exposing their private data.”)

obtaining analysis result data for input medical data from the plurality of first AI models or the plurality of medical institutes, (Szeto, para. 0082 and 0101: “With respect to health care, private data 322 could include one or more of the following types of data, including but not limited to: genomic data, whole genome sequence data, whole exosome sequence data, proteomic data, neoepitope data, RNA data, allergy information, encounter data, treatment data, outcome data, appointment data, order data, billing code data, diagnosis code data, results data, demographic data, medication data, vital sign data, payor data, drug study data, drug response data, longitudinal study data, biometric data, financial data, proprietary data, electronic medical record data, research data, human capital data, performance data, analysis results data, event data, or other types of data.” “Operation 530 includes generating one or more private data distributions from the local private data training sets where the private data distributions represent the training set in aggregate used to create the trained actual model.” Examiner notes Szeto teaches generating data distributions for the data training sets of each of the AI models)

training a second AI model (Szeto, para. 0101: “Operation 530 includes generating one or more private data distributions from the local private data training sets where the private data distributions represent the training set in aggregate used to create the trained actual model.” Examiner notes that Szeto teaches creating a trained model) using a training data set including the feature data and the analysis result data, (Szeto, para. 0102: “The modeling engine leverages the private data distributions as probability distributions from which it is able to generate the proxy data. The modeling engine can generate new proxy data samples by randomly generating new data according to the probability distributions. The modeling engine can compare each sample to where it falls within each of the relevant probability distributions to ensure the sample adheres to the nature of the actual data.”)

wherein the second AI model (Szeto, para. 0101: “Operation 530 includes generating one or more private data distributions from the local private data training sets where the private data distributions represent the training set in aggregate used to create the trained actual model.” Examiner notes that Szeto teaches creating a trained model) is trained to receive the feature data as input (Szeto, para. 0045: “As each modeling engine 126 gains new learned information, the new knowledge is transmitted back to the researcher at non-private computing device 130 once transmission criteria have been met. The new knowledge can then be aggregated into a trained global model via global modeling engine 136. Examples of knowledge include (see, e.g., FIG. 2) but are not limited to proxy data 260, trained actual models 240, trained proxy models 270, proxy model parameters, model similarity scores, or other types of data that have been de-identified.”) and generate the analysis result data as output. (Szeto, para. 0102: “Operation 540 includes generating a set of proxy data according to one or more of the private data distributions. The modeling engine leverages the private data distributions as probability distributions from which it is able to generate the proxy data. The modeling engine can generate new proxy data samples by randomly generating new data according to the probability distributions. The modeling engine can compare each sample to where it falls within each of the relevant probability distributions to ensure the sample adheres to the nature of the actual data. Operation 540 can be conducted multiple times or iterated to ensure that the proxy data, in aggregate, generates the proper shapes in the same distribution space.” Examiner notes that the second AI model is trained to generate the same distribution space)

wherein the generating the integrated AI model comprises selecting at least two sub-AI models having a similarity being larger than or equal to a predetermined threshold from among the plurality of sub-AI models; and (Szeto, para. 0018: “The modeling engine calculates a similarity score that indicates how similar the trained actual model and the proxy model are to each other as a function of the proxy model parameters and the actual model parameters. Based on the similarity score, the modeling engine can transmit one or more pieces of information related to the trained model, possibly including the set of proxy data or information sufficient to recreate the proxy data, actual model parameters, proxy model parameters, or other features. For example, if the model similarity satisfies a similarity requirement (e.g., compared to a threshold value, etc.), the modeling engine can transmit the set of proxy data to a non-private computing device, which in turn integrates the proxy data in to an aggregated model.”)

setting one of the at least two sub-AI models as the integrated AI model. (Szeto, para. 0112 and Fig. 6: “Operations 610, 620, and 630 taken by a modeling engine in a private data server are the same as operations 510, 520, and 530 taken by the modeling engine. Method 600 substantially departs from method 500 at operation 640, while initially still focused on the activity of the modeling engine deployed within an entity's private data server.
Method 600 seeks to permit remote, non-private computing devices to create global models from the data distributions representative of the local private data from private entities.”)

Szeto does not explicitly disclose: wherein the feature data is extracted from at least one layer of a plurality of layers included in each first AI model, wherein the analysis result data is obtained from an output layer connected to a final layer among the plurality of layers or a diagnosis result made by a medical professional; and

However, Holtham teaches: wherein the feature data is extracted from at least one layer of a plurality of layers included in each first AI model, and (Holtham, para. 0063: “Systems and processes for training machine learning models to perform file matching will now be described with reference to FIGS. 11-13. A block diagram showing modules, inputs and outputs of one embodiment of the system is shown in FIG. 11. Input training documents and or files 1100 typically include documents such as scanned or digital PDF's of receipts and invoices or medical records. FIG. 11 depicts example inputs of paper documents to illustrate the key modules and components of the system, however it will be appreciated that the disclosed systems and techniques can operate on digitized paper documents or purely digital documents. The extract features module 1102 selects the important defining features of the documents, files or images/videos.”) wherein the analysis result data is obtained (Holtham, para. 0028: “Once the network parameters have been determined, they can be used by predictor 108 to process either compressed (at any compression level) or non-compressed prediction inputs 112 to produce the output prediction results 110.”) from an output layer connected to a final layer among the plurality of layers or a diagnosis result made by a medical professional; and (Holtham, para. 0024: “Artificial neural networks are used to model complex relationships between inputs and outputs or to find patterns in data, where the dependency between the inputs and the outputs cannot be easily ascertained. A neural network typically includes an input layer, one or more intermediate (“hidden”) layers, and an output layer, with each layer including a number of nodes”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Holtham into Szeto. Szeto teaches a distributed, online machine learning system including many private data servers, each having local private data; Holtham teaches systems and methods for improving the operation of computer-implemented neural networks. One of ordinary skill would have been motivated to combine the teachings of Holtham into Szeto in order to increase prediction accuracy and be used to augment and balance datasets (Holtham, para. 0007).

Regarding claims 22 and 30, Szeto as modified teaches claims 21 and 29 above. Szeto further teaches: wherein the plurality of sub-AI models respectively are trained (Szeto, para. 0045 and Fig. 1: “In the ecosystem/system presented in FIG. 1, the issues associated with privacy restrictions of private data 122 are addressed by focusing on the knowledge gained from a trained machine learning algorithm rather than the raw data itself. Rather than requesting raw data from each of entity 120, the researcher is able to define a desired machine learning model that he/she wishes to create. The researcher may interface with system 100 through the non-private computing device 130; through one of the private data servers 124, provided that the researcher has been granted access to the private data server; or through a device external to system 100 that can interface with non-private computing device 130.
The programmatic model instructions on how to create the desired model are then submitted to each relevant private data server 124, which also has a corresponding modeling engine 126 (i.e., 126A through 126N). Each local modeling engine 126 accesses its own local private data 122 and creates local trained models according to model instructions created by the researcher.”) using different medical data obtained in different environments. (Szeto, para. 0043 and Fig. 1: “For example, if a medical institute has multiple locations and/or affiliations, e.g., a main hospital, physician offices, clinics, a secondary hospital, a hospital affiliation, each of these entities could have their own private data 122, private data server 124 and modeling engine 126, which may all be visible to each other, but not to a different entity.”)

Regarding claims 24 and 32, Szeto as modified teaches claims 22 and 30 above. Szeto further teaches: wherein the different medical data (Szeto, para. 0043 and Fig. 1: “For example, if a medical institute has multiple locations and/or affiliations, e.g., a main hospital, physician offices, clinics, a secondary hospital, a hospital affiliation, each of these entities could have their own private data 122, private data server 124 and modeling engine 126, which may all be visible to each other, but not to a different entity.”) obtained in different environments are related to medical images obtained in different imaging environments. (Szeto, para. 0128: “For example, it should be appreciated that the private image collections could reside on a computer or data storage facility associated with or in a number of different physician's offices, medical imaging facilities, or clinical/pathology laboratories, typically in geographically distinct locations (e.g., different communities, cities, ZIP codes, states, etc.). In such case, the image collections would comprise of various scans (e.g., PET, SPECT, CT, fMRI, etc.)
that would be associated with specific patients and their respective diagnostic and treatment histories. Or images could comprise tissue sections (typically stained with a dye, fluorophore, or otherwise optically detectable entity) or immunohistochemically treated sections associated with relevant patient information. Yet further contemplated images will include sonographic images (e.g., 2D, 3D, doppler) or videos, or angiographic images or videos, again associated with relevant patient information.”)

Regarding claims 26 and 34, Szeto as modified teaches claims 21 and 29 above. Szeto further teaches: wherein the feature data (Szeto, para. 0074: “Proxy data 260 can be considered synthetic data randomly generated, in some cases deterministically generated, that retains the learnable salient features (i.e., knowledge) of the training data while eliminating the references to real information stored in private data 222.”) is extracted based on a Convolution Neural Network. (Szeto, para. 0069: “More specifically, machine learning algorithms 295 can include implementations of one or more of the following algorithms: a support vector machine, a decision tree, a nearest neighbor algorithm, a random forest, a ridge regression, a Lasso algorithm, a k-means clustering algorithm, a boosting algorithm, a spectral clustering algorithm, a mean shift clustering algorithm, a non-negative matrix factorization algorithm, an elastic net algorithm, a Bayesian classifier algorithm, a RANSAC algorithm, an orthogonal matching pursuit algorithm, bootstrap aggregating, temporal difference learning, backpropagation, online machine learning, Q-learning, stochastic gradient descent, least squares regression, logistic regression, ordinary least squares regression (OLSR), linear regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS) ensemble methods, clustering algorithms, centroid based algorithms, principal component analysis (PCA), singular value decomposition, independent component analysis, k nearest neighbors (kNN), learning vector quantization (LVQ), self-organizing map (SOM), locally weighted learning (LWL), apriori algorithms, eclat algorithms, regularization algorithms, ridge regression, least absolute shrinkage and selection operator (LASSO), elastic net, classification and regression tree (CART), iterative dichotomiser 3 (ID3), C4.5 and C5.0, chi-squared automatic interaction detection (CHAID), decision stump, M5, conditional decision trees, least-angle regression (LARS), naive bayes, gaussian naive bayes, multinomial naive bayes, averaged one-dependence estimators (AODE), bayesian belief network (BBN), bayesian network (BN), k-medians, expectation maximisation (EM), hierarchical clustering, perceptron back-propagation, hopfield network, radial basis function network (RBFN), deep boltzmann machine (DBM), deep belief networks (DBN), convolutional neural network (CNN), stacked auto-encoders, principal component regression (PCR), partial least squares regression (PLSR), sammon mapping, multidimensional scaling (MDS), projection pursuit, linear discriminant analysis (LDA), mixture discriminant analysis (MDA), quadratic discriminant analysis (QDA), flexible discriminant analysis (FDA), bootstrapped aggregation (bagging), adaboost, stacked generalization (blending), gradient boosting machines (GBM), gradient boosted regression trees (GBRT), random forest, or even algorithms yet to be invented.”)

Regarding claims 28 and 36, Szeto as modified teaches claims 21 and 29 above. Szeto further teaches: wherein the feature data (Szeto, para.
0074: “Proxy data 260 can be considered synthetic data randomly generated, in some cases deterministically generated, that retains the learnable salient features (i.e., knowledge) of the training data while eliminating the references to real information stored in private data 222.”) is obtained from at least one layer with same position in the plurality of layers included in each first AI model. (Szeto, para. 0094: “It should be appreciated that proxy model parameters 475 should comprise the exact same number of parameters as actual model parameters 445 considering that the trained actual model and the trained proxy model 470 are built on the same underlying implementation of the same machine learning algorithm.” Examiner notes that Szeto teaches model parameters comprising the exact same number of parameters as each first model such that each model would have a final hidden layer whose position is always the penultimate layer and from which feature data is obtained)

Regarding claims 38 and 40, Szeto as modified teaches claims 21 and 29 above. Szeto further teaches: wherein the medical data (Szeto, para. 0043: “For example, if a medical institute has multiple locations and/or affiliations, e.g., a main hospital, physician offices, clinics, a secondary hospital, a hospital affiliation, each of these entities could have their own private data 122, private data server 124 and modeling engine 126, which may all be visible to each other, but not to a different entity.”) includes personal information (Szeto, para. 0018: “Private or restricted features of the local private data include, but are not limited to, social security numbers, patient names, addresses or any other personally identifying information, especially information protected under the HIPAA Act.”), and the personal information is de-identified in the feature data. (Szeto, para. 0076: “In some aspects, the trained machine learning model and proxy data generation can be considered as a form of lossy compression. Similar to lossy compression, transformation of original data into proxy data preserves key characteristics of the data, but does not retain granularity with regard to individual patients.”)

Regarding claims 43 and 44, Szeto as modified teaches claims 21 and 29 above. Szeto further teaches: wherein the feature data is obtained by performing an operation on the at least one sub-feature data, and wherein the operation includes bitwise operators or arithmetic operations. (Szeto, para. 0095: “Similarity score 490 can be calculated through various techniques and according to the goals of a researcher as outlined in the corresponding model instructions. In some embodiments, similarity score 490 can be calculated based on the differences among the model parameters (e.g., parameters differences 480). For example, similarity score 490 could include the sum of the differences or the sum of the squares of the differences, a metric distance between parameters, a difference of covariance, differences of elements in covariance matrices, etc.” Examiner notes for examination purposes only, model parameters are interpreted as feature data).

Response to Applicant Arguments/Remarks - 35 U.S.C. 103

Starting towards the bottom of page 7 of applicant’s remarks, applicant argues that Szeto in view of Holtham does not teach the newly amended independent claim 21. However, the claim limitations are indeed taught by Szeto in view of Holtham, as set forth above. In particular, applicant argues that Szeto teaches a private data server creating the trained actual model but does not teach the particular limitations as newly amended.
Szeto teaches a plurality of private institutes each with their own private models, as set forth above in Szeto paragraph 0043: “For example, if a medical institute has multiple locations and/or affiliations, e.g., a main hospital, physician offices, clinics, a secondary hospital, a hospital affiliation, each of these entities could have their own private data 122, private data server 124 and modeling engine 126, which may all be visible to each other, but not to a different entity.” Also see Szeto paragraph 0127: “The modeling engine can then create local trained actual models from the original images and training information.”

Szeto additionally teaches selecting a private model, i.e., a “sub-AI model,” from one of the private institutions, to create a global model, as set forth above. Szeto further teaches the modeling engine of the private institution performing similarity comparisons vis-à-vis thresholds. As a result, applicant’s newly amended claim 21 does not traverse the prior art of record. Claim 29 and dependent claims remain similarly rejected for at least the same reasons.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sally T. Ley, whose telephone number is (571) 272-3406. The examiner can normally be reached Monday - Thursday, 10:00am - 6:00pm ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STL/ Examiner, Art Unit 2147
/VIKER A LAMARDO/ Supervisory Patent Examiner, Art Unit 2147
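For orientation, the rejected independent claim describes a federated pipeline: per-institute sub-AI models, selection of similar sub-models to form an integrated model, intermediate-layer feature data serving as a lossy, de-identified representation, and a second model trained on (feature data, analysis result) pairs. The sketch below is a hypothetical illustration of that claim language only; the model classes, cosine similarity metric, and least-squares "second model" are invented for the sketch and come from neither the application nor the cited references.

```python
import numpy as np

rng = np.random.default_rng(0)

class SubModel:
    """Hypothetical per-institute 'sub-AI model': one hidden + one output layer."""
    def __init__(self, dim, hidden):
        self.w_hidden = rng.normal(size=(dim, hidden))
        self.w_out = rng.normal(size=(hidden, 1))

    def features(self, x):
        # Intermediate-layer activations: a lossy, de-identified
        # representation of the input (cf. the 'feature data' limitation).
        return np.tanh(x @ self.w_hidden)

    def predict(self, x):
        # Output-layer result (cf. the 'analysis result data' limitation).
        return self.features(x) @ self.w_out

def similarity(a, b):
    # Toy cosine similarity between two sub-models' hidden-layer parameters.
    va, vb = a.w_hidden.ravel(), b.w_hidden.ravel()
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

# One sub-model per medical institute.
subs = [SubModel(dim=8, hidden=4) for _ in range(3)]

# Select sub-models whose pairwise similarity meets a threshold,
# then set one of them as the 'integrated' model.
THRESHOLD = -1.0  # deliberately permissive for the toy example
pairs = [(i, j) for i in range(len(subs)) for j in range(i + 1, len(subs))
         if similarity(subs[i], subs[j]) >= THRESHOLD]
integrated = subs[pairs[0][0]]

# Each institute runs a copy of the integrated model; the central side
# collects (feature data, analysis result) pairs to train a second model.
x = rng.normal(size=(16, 8))
feats = integrated.features(x)
results = integrated.predict(x)

# 'Second AI model': here just a least-squares map from features to results.
w_second, *_ = np.linalg.lstsq(feats, results, rcond=None)
print("second-model fit error:", np.linalg.norm(feats @ w_second - results))
```

The examiner's §103 theory maps the feature-data limitation onto Szeto's proxy-data "lossy compression" and the layer-extraction limitation onto Holtham's intermediate layers; a sketch like this can help check whether the claim elements and the cited passages actually line up one-to-one.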

Prosecution Timeline

Oct 22, 2020: Application Filed
Apr 22, 2022: Response after Non-Final Action
Oct 02, 2023: Non-Final Rejection — §103
Jan 03, 2024: Response Filed
Mar 13, 2024: Final Rejection — §103
May 16, 2024: Response after Non-Final Action
Jul 05, 2024: Applicant Interview (Telephonic)
Jul 05, 2024: Response after Non-Final Action
Aug 01, 2024: Request for Continued Examination
Aug 06, 2024: Response after Non-Final Action
Nov 26, 2024: Non-Final Rejection — §103
Feb 25, 2025: Response Filed
Mar 19, 2025: Final Rejection — §103
May 09, 2025: Interview Requested
Jun 03, 2025: Applicant Interview (Telephonic)
Jun 04, 2025: Examiner Interview Summary
Jun 17, 2025: Request for Continued Examination
Jun 18, 2025: Response after Non-Final Action
Jul 17, 2025: Non-Final Rejection — §103
Nov 03, 2025: Interview Requested
Nov 19, 2025: Applicant Interview (Telephonic)
Nov 19, 2025: Examiner Interview Summary
Nov 24, 2025: Response Filed
Dec 06, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12443830: COMPRESSED WEIGHT DISTRIBUTION IN NETWORKS OF NEURAL PROCESSORS (granted Oct 14, 2025; 2y 5m to grant)
Patent 12135927: EXPERT-IN-THE-LOOP AI FOR MATERIALS DISCOVERY (granted Nov 05, 2024; 2y 5m to grant)
Patent 11880776: GRAPH NEURAL NETWORK (GNN)-BASED PREDICTION SYSTEM FOR TOTAL ORGANIC CARBON (TOC) IN SHALE (granted Jan 23, 2024; 2y 5m to grant)
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 15%
With Interview: 44% (+28.8%)
Median Time to Grant: 3y 10m
PTA Risk: High
Based on 33 resolved cases by this examiner. Grant probability derived from career allow rate.
