Prosecution Insights
Last updated: April 19, 2026
Application No. 17/271,036

GENERATING METADATA FOR TRAINED MODEL

Final Rejection: §101, §102, §103, §112
Filed: Feb 24, 2021
Examiner: GERMICK, JOHNATHAN R
Art Unit: 2122
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Koninklijke Philips N V
OA Round: 4 (Final)
Grant Probability: 47% (Moderate)
OA Rounds: 5-6
To Grant: 4y 2m
With Interview: 79%

Examiner Intelligence

Career Allow Rate: 47% (grants 47% of resolved cases; 43 granted / 91 resolved; -7.7% vs TC avg)
Interview Lift: +32.1% (strong; among resolved cases with interview)
Avg Prosecution: 4y 2m (typical timeline); 28 applications currently pending
Total Applications: 119 across all art units (career history)

Statute-Specific Performance

§101: 29.0% (-11.0% vs TC avg)
§103: 38.5% (-1.5% vs TC avg)
§102: 17.3% (-22.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Comparisons are against the Tech Center average estimate • Based on career data from 91 resolved cases

Office Action

Rejections under §101, §102, §103, §112
DETAILED ACTION

This action is responsive to the Application filed on 02/09/2026. Claims 1-20 are pending in the case. Claims 1, 10 and 15 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 02/09/2026 have been fully considered but they are not persuasive.

With respect to 35 U.S.C. 101: Applicant appears to argue the claims do not recite a limitation that can be performed in the human mind. To support this argument, Applicant argues that the human mind is not equipped to “determine a numerical characteristics…” as claimed. Applicant highlights that, in light of specification paragraph 0047, the training data may be too computationally complex to calculate a descriptive numerical characteristic. Examiner disagrees. In fact, specification paragraph 0146 notes “Although the above describes the numerical characteristic to be a probability distribution, various other types of numerical characterizations may be used as well, as known per se from field of statistics”. This would suggest any sufficiently simple statistic computable in the human mind may correspond to the claimed determination of a numerical characteristic. The specification merely provides an example which may be complex; it does not appear to explain how the numerical characteristic is determined such that it cannot be performed in the mind.

Applicant appears to argue that the claims demonstrate an improvement to technology because the invention provides the improved technical function of determining conformance of input data of the trained model to the numerical characteristic. Examiner disagrees. Such an improvement is not an improvement to the functioning of a technology.
The cited improvement is plainly an improvement to the abstract idea, i.e., ensuring expected performance by determining when the input data is in conformance. This improvement is the result of the recited abstract idea alone. Therefore, the improvement is not reflected in any of the other recited additional elements. Indeed, the application may be broadly applicable as a solution to a problem of inaccurate classification; however, as evidenced by the multi-prong analysis presented in the rejection, the improvement is not reflected in the additional elements. Therefore, the rejection is maintained.

With respect to the 35 U.S.C. 112 rejection: Applicant argues specification paragraph 0044 provides support for the claimed features. Examiner disagrees. At best the cited section provides support for “training data on which the trained model is trained; and a processor subsystem configured to: apply the trained model to the training data to obtain intermediate output of the trained model”. The cited section makes no mention of “the training data to which the trained model is applied” being the same data as “the training data that was used to train the trained”. As a concrete example, the description in the specification may very well be describing training data of two partitions, A and B. The A partition is used as “the training data to which the trained model is applied”, while the second partition, B, is used as “the training data that was used to train the trained”. In this way, the specification describes two separate processes which use training data, but the training data is not “the same” in each process. The claim is therefore understood to describe two processes which use training data, but not necessarily the very same training data for both processes.

With respect to the art rejections: Applicant argues that the cited art does not teach “apply the trained model to the training data”, but rather describes the system's performance at test time on different input images.
Examiner disagrees. Training data as claimed broadly includes the training data set composed of labeled training data for training, as well as validation data and test data for assessing the training performance. Therefore, applying validation or test data amounts to applying the training data as claimed.

Applicant argues there is no basis for the validation data set to be from the same original dataset as the training images, noting different images are used for training, validation and testing. Applicant further highlights that different images are used for different phases and therefore the art does not teach the cited claim limitation. Applicant further notes that the claim does not merely require the same type of data or image data, but rather the same training data “to which the trained model is applied….[and] training data that was used to train the trained model”.

Examiner disagrees. The claim does not require the data all being sourced from a particular data set. Nevertheless, the cited section appears to describe a 2012 data set composed of images for training, images for validation, and images for testing. Simply having images partitioned for different uses does not suggest they do not come from the same data set. As noted previously, the claims do not disallow using different images for different phases (training and validation); it is understood that applying images during validation corresponds to applying the training data to the trained model as claimed. The validation/test data is the same as the training data insofar as it is either: 1) from the same source dataset, 2) belonging to the same categories of possible images, or 3) the same type of data, namely image data.
In summary, Applicant appears to narrowly read the “training data” as data which is identified as being used specifically for “training a model”, and further that the claimed invention is different from the art because that “same” training data is used again to obtain intermediate output of a trained model. Examiner disagrees with this interpretation. Training data is broadly data associated with a training process; even validation or test data used to assess the performance of a trained model is considered training data. That is not to say that “any image data” or “all image data” is training data, but rather that data associated with the training process, such as validation data, can also be considered “training data”. In this way, applying the validation or test data to obtain intermediate outputs corresponds to the claim limitation. For the above reasons the rejection is maintained.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C.
112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Applicant suggests support for the amendments can be found in paragraphs [0044] – [0047] and [0137] – [0146]. The disclosure makes no mention of “the training data to which the trained model is applied is same data as the training data that was used to train the trained model” (as in claim 1). The specification, paragraphs [00137]-[00138], notes that the model is “applied to both out-of-spec data and the training data T”. Additionally, paragraph [00144] describes “With further reference to the out-of-spec data: instead, or in addition to this data being generated by a GAN 310, the data may be acquired from elsewhere, e.g., in a same or similar manner as the training data T is acquired”. In contrast to the amended claims, the specification appears to describe differing data applied to the model. There does not appear to be any description in the specification as to what constitutes two data items being “same data”. At most, the specification describes that the data is the same insofar as it is acquired in a similar manner. Claims 10 and 15 are rejected for reciting the same limitations. Dependent claims are rejected by virtue of dependency.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C.
101 because the claims are directed to an abstract idea without significantly more.

Regarding Claims 1/10

Under Step 1, claim 1 is directed to “A system for processing a trained model”, which is directed to a machine, one of the statutory categories. Under Step 1, claim 10 is directed to “A computer implemented method”, which is directed to a process, one of the statutory categories.

Under Step 2A Prong 1, the claim recites the following limitations which are considered mental evaluations: “determine a numerical characteristic descriptive of the training data of the trained model based on the intermediate output of the trained model, encode the numerical characteristic as metadata… associate the metadata with the model data… and determine whether the input data is in specification to the training data of the trained model based on the encoded numerical characteristic and the further intermediate output”. Determining, encoding and associating data are all decisions about data which can be performed in the mind. For example, the text “Tuesday” can be determined to have 7 letters and thus encoded with the number “7”. The number can be associated with metadata pertaining to the length of a string for a word, thus enabling use of the data to make further decisions. The claim therefore recites an abstract idea.

Step 2A Prong Two Analysis: The judicial exception is not integrated into a practical application.
In particular, the claims recite the additional elements “apply the trained model to the training data to obtain intermediate output of the trained model, wherein the obtained intermediate output comprises activation values of a subset of hidden units of the trained model, and wherein the training data to which the trained model is applied is same data as the training data that was used to train the trained model… apply the trained model to input data to obtain further intermediate output of the trained model”, which amount to mere instructions to apply a computer technology to an abstract idea; see the MPEP 2106.05(f) consideration. In addition, the claim recites the additional element “a data interface configured to access;”, which amounts to adding insignificant extra-solution activity to the judicial exception, because the limitation is mere data gathering. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Further, the additional element “a data interface configured to access” is an insignificant extra-solution activity that is considered a well-understood, routine, conventional activity. Examiner notes that accessing data amounts to receiving or transmitting data over a network (MPEP 2106.05(d)(II)(i)). According to MPEP 2106.05(d)(II)(i), “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner”. As such, the insignificant extra-solution activities are considered well-understood, routine, conventional activities. Therefore, the claim is not patent eligible.
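The examiner's “Tuesday” illustration of the determine/encode/associate chain can be sketched in a few lines of code. This is an editorial sketch only, not part of the record or the claimed invention; the function and key names are illustrative.

```python
def characterize(text: str) -> int:
    """Determine a numerical characteristic of the data (here, its length)."""
    return len(text)

data = "Tuesday"
characteristic = characterize(data)            # determine: "Tuesday" has 7 letters
metadata = {"string_length": characteristic}   # encode the characteristic as metadata
record = {"data": data, "metadata": metadata}  # associate the metadata with the data

# A further decision can now be made from the metadata alone.
is_long_word = record["metadata"]["string_length"] > 5
```

The point of the example is that each step is a simple judgment about data, which is why the examiner treats the limitations as mental evaluations.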
Regarding Claim 2

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, the claim does not recite any additional abstract ideas beyond those described in the parent claim.

Step 2A Prong Two and 2B Analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element “wherein the trained model is a trained neural network”, which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h). Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, nor do they amount to significantly more than the judicial exception.

Regarding Claim 3

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, the claim recites the following limitation which is considered a mental evaluation: “determine the numerical characteristic as a probability distribution of the multiple sets of activation values”. The claim therefore recites an abstract idea.

Step 2A Prong Two and 2B Analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element “wherein the training data comprises multiple training data objects, and wherein the processor subsystem is configured to: apply the trained model to individual ones of the multiple training data objects to obtain multiple sets of activation values;”, which amounts to mere instructions to apply a computer technology to an abstract idea; see the MPEP 2106.05(f) consideration.
Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, nor do they amount to significantly more than the judicial exception.

Regarding Claim 4

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, the claim recites the following limitation which is considered a mental evaluation: “select the subset of hidden units to establish a difference, or to increase or maximize the difference, between a) the probability distribution of the multiple sets of activation values and b) a probability distribution of the further multiple sets of activation values”. Selecting certain entities can be performed in the mind. The claim therefore recites an abstract idea.

Step 2A Prong Two Analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element “apply the trained neural network to individual ones of the multiple out-of-spec data objects to obtain further multiple sets of activation values; and”, which amounts to mere instructions to apply a computer technology to an abstract idea; see the MPEP 2106.05(f) consideration. In addition, the claim recites the additional element “obtain out-of-spec data comprising multiple out-of-spec data objects which have characteristics that are out-of-specification from the multiple training data objects;”, which amounts to adding insignificant extra-solution activity to the judicial exception, because the limitation is mere data gathering. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Further, the additional elements previously identified as insignificant extra-solution activities are considered well-understood, routine, conventional activities. Examiner notes that obtaining certain data amounts to receiving or transmitting data over a network (MPEP 2106.05(d)(II)(i)). According to MPEP 2106.05(d)(II)(i), “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner”. As such, the insignificant extra-solution activities are considered well-understood, routine, conventional activities. Therefore, the claim is not patent eligible.

Regarding Claim 5

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, the claim recites the following limitation which is considered a mental evaluation: “to select the subset of hidden units by a combinatorial optimization method which optimizes the difference between a) the probability distribution of the multiple sets of activation values and b) the probability distribution of the further multiple sets of activation values, as a function of selected hidden units”. Selecting certain entities can be performed in the mind. The claim therefore recites an abstract idea. There are no additional elements recited, so the claim does not provide a practical application and is not considered to be significantly more.

Regarding Claim 6

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, the claim recites the following limitation which is considered a mental evaluation: “express the difference as or based on at least one of the group of: a Kullback-Leibler divergence measure, a cross entropy measure, and a mutual information measure”. Choosing to express a difference as a measure of mutual information is a step performed in the mind.
The claim therefore recites an abstract idea. There are no additional elements recited, so the claim does not provide a practical application and is not considered to be significantly more.

Regarding Claim 7

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, the claim recites the following limitations which are considered mental evaluations: “to generate negative samples based on the training data; generate the out-of-spec data from the negative samples”. Without additional detail, generating samples amounts to selecting new samples from a set of data; such selection can be performed in the mind. The claim therefore recites an abstract idea.

Step 2A Prong Two and 2B Analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element “use a generator part of a generative adversarial network to generate negative samples;”, which amounts to mere instructions to apply a computer technology to an abstract idea; see the MPEP 2106.05(f) consideration. Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, nor do they amount to significantly more than the judicial exception.

Regarding Claim 8

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, the claim does not recite any additional abstract ideas beyond those described in the parent claim. Step 2A Prong Two and 2B Analysis: The judicial exception is not integrated into a practical application.
In particular, the claim recites the additional element “generate the model data by training a model using the training data to obtain the trained model;”, which amounts to mere instructions to apply a computer technology to an abstract idea; see MPEP 2106.05(f). Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, nor do they amount to significantly more than the judicial exception.

Regarding Claim 9

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, the claim does not recite any additional abstract ideas beyond those described in the parent claim. Step 2A Prong Two and 2B Analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element “wherein the training data comprises multiple images, and wherein the trained model is configured for image classification or image segmentation.”, which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h). Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, nor do they amount to significantly more than the judicial exception.

Regarding Claim 11

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, these limitations do not describe additional abstract ideas beyond those described in the parent claim. Step 2A Prong Two and 2B Analysis: The judicial exception is not integrated into a practical application.
In particular, the claim recites the additional element “if the input data is determined not to be in-specification generate an output signal”, which amounts to mere instructions to apply a computer technology to an abstract idea; see the MPEP 2106.05(f) consideration. Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, nor do they amount to significantly more than the judicial exception.

Regarding Claim 12

Under Step 1, the claim is directed to a machine. Under Step 2A Prong 1, the claim does not recite any additional abstract ideas beyond those described in the parent claim. Step 2A Prong Two and 2B Analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element “an output interface for outputting the output signal to a rendering device for rendering the output signal to a user.”, which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h). Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, nor do they amount to significantly more than the judicial exception.

Regarding Claim 13

Under Step 1, the claim is directed to a machine.
Under Step 2A Prong 1, the claim recites the following limitations which are considered mental evaluations: “wherein the numerical characteristic is a probability distribution obtained from multiple sets of activation values of a subset of hidden units of the trained neural network… determine a probability of the further set of activation values based on the probability distribution; and determine whether the input data is in-specification as a function of the probability”. Each of these is a determination about data made in the human mind. The claim therefore recites an abstract idea.

Step 2A Prong Two and 2B Analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element “wherein the multiple sets of activation values are obtained by applying the trained model to the training data… and wherein the processor subsystem is configured to:”, which amounts to mere instructions to apply a computer technology to an abstract idea; see the MPEP 2106.05(f) consideration. Additionally, the claim recites the additional element “wherein the trained model is a trained neural network… wherein the multiple sets of activation values are obtained by applying the trained model to the training data, wherein the further intermediate output of the trained model comprises a further set of activation values of the subset of hidden units”, which generally links the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h).
Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, nor do they amount to significantly more than the judicial exception.

Regarding Claim 14

Under Step 1, the claim is directed to a method. Under Step 2A Prong 1, these limitations do not describe additional abstract ideas beyond those described in the parent claim. Step 2A Prong Two and 2B Analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional element “if the input data is determined to be in-specification, generating an output signal”, which amounts to mere instructions to apply a computer technology to an abstract idea; see the MPEP 2106.05(f) consideration. Accordingly, the recited additional elements, when taken alone or in combination, do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, nor do they amount to significantly more than the judicial exception.

Regarding Claim 15

Under Step 1, claim 15 is directed to “A non-transitory computer-readable medium”, which is directed to a product of manufacture, one of the statutory categories.
Under Step 2A Prong 1, the claim recites the following limitations which are considered mental evaluations: “determine a numerical characteristic descriptive of the training data of the trained model based on the intermediate output of the trained model; encode the numerical characteristic as metadata; associate the metadata with the model data… determine whether the input data is in-specification to the training data of the trained model based on the encoded numerical characteristic and the further intermediate output”. Determining features about data is a step which can be performed in the human mind. The claim therefore recites an abstract idea.

Step 2A Prong Two Analysis: The judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements “having stored instructions that, when executed by one or more processors, cause the one or more processors to… apply the trained model to the training data to obtain intermediate output of the trained model, wherein the obtained intermediate output comprises activation values of a subset of hidden units of the trained model, and wherein the training data to which the trained model is applied is same data as the training data that was used to train the trained model… apply the trained model to the input data to obtain a further intermediate output of the trained model… and if the input data is determined not to be in-specification, generate an output signal indicative of the input data not being in-specification”, which amount to mere instructions to apply a computer technology to an abstract idea; see the MPEP 2106.05(f) consideration.
In addition, the claim recites the additional element “access model data representing a trained model trained for image classification or image segmentation, wherein the trained model was trained using training data”, which amounts to adding insignificant extra-solution activity to the judicial exception, because the limitation is mere data gathering. See MPEP 2106.05(g). Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Step 2B Analysis: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Further, the additional elements identified above as insignificant extra-solution activities are considered well-understood, routine, conventional activities. Examiner notes that accessing or outputting data amounts to receiving or transmitting data over a network (MPEP 2106.05(d)(II)(i)). According to MPEP 2106.05(d)(II)(i), “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner”. As such, the insignificant extra-solution activities are considered well-understood, routine, conventional activities. Therefore, the claim is not patent eligible.
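For reference, the difference measures recited in claim 6 above (Kullback-Leibler divergence, cross entropy, mutual information) are standard statistical quantities. A minimal sketch of the first of these, comparing two hypothetical histograms of binned activation values, is shown below; the distributions are invented for illustration and are not taken from the application or the cited art.

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for two discrete distributions given as aligned probability lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical histograms over binned activation values:
p = [0.7, 0.2, 0.1]   # distribution observed on training data
q = [0.4, 0.4, 0.2]   # distribution observed on out-of-spec data

difference = kl_divergence(p, q)   # larger value = distributions differ more
```

A zero result indicates identical distributions; the claim's selection step would favor hidden-unit subsets for which this difference is large.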
Regarding Claim 16: Claim 16 is rejected for the reasons set forth in the rejection of claim 2, in connection with claim 15. Regarding Claim 17: Claim 17 is rejected for the reasons set forth in the rejection of claim 3, in connection with claim 15. Regarding Claim 18: Claim 18 is rejected for the reasons set forth in the rejection of claim 4, in connection with claim 15. Regarding Claim 19: Claim 19 is rejected for the reasons set forth in the rejection of claim 5, in connection with claim 15. Regarding Claim 20: Claim 20 is rejected for the reasons set forth in the rejection of claim 7, in connection with claim 15.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-5, 8-19 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bendale et al.
“Towards Open Set Deep Networks”.

Regarding Claim 1
Bendale teaches, A system for processing a trained model, the system comprising: a data interface configured to access model data representing a trained model trained for image classification or image segmentation, and training data on which the trained model is trained (pg 2, Figure 1 caption: “The OpenMax algorithm measures distance between an activation vector (AV) for an input and the model vector for the top few classes, adjusting scores and providing an estimate of probability of being unknown. The left side shows activation vectors (AV) for different images, with different AVs separated by black lines. Each input image becomes an AV, displayed as 10x450 color pixels… we show an AV for 4 types of images: the model, a real image, a fooling image and an open set image.” The system shows the training data images on which the model is trained, and their activations or model data, via an interface for visual inspection.)

and a processor subsystem configured to apply the trained model to the training data to obtain intermediate output of the trained model, wherein the obtained intermediate output comprises activation values of a subset of hidden units of the trained model (pg 2, para 01: “We use the scores from the penultimate layer of deep networks (the fully connected layer before SoftMax, e.g., FC8) to estimate if the input is “far” from known training data. We call scores in that layer the activation vector (AV). This information is incorporated in our OpenMax model and used to characterize failure of recognition system” The activation vector of the penultimate layer is the intermediate output of the trained deep network or trained model, and thus “of a subset of hidden units” as claimed. Examiner notes that during application of the various training data the network is at various stages of training and is thus a “trained model” as claimed.)
and wherein the training data to which the trained model is applied is the same data as the training data that was used to train the trained model (Section 3, pg 7-8: “Our evaluation is based on ImageNet Large Scale Visual Recognition Competition (ILSVRC) 2012 dataset with 1K visual categories. The dataset contains around 1.3M images for training (with approximately 1K to 1.3K images per category), 50K images for validation and 150K images for testing. Since test labels for ILSVRC 2012 are not publicly available, like others have done we report performance on validation set… During the testing phase, we test the system with all the 1000 categories from ILSVRC 2012 validation set, fooling categories and previously unseen categories… We use fooling images provide” The system is tested on data from the validation set, which is data from the same original dataset as the training images. Further, the system is tested on other fooling and unseen categories, which are the same “type” of data insofar as all the data is image data.)

determine a numerical characteristic descriptive of the training data of the trained model based on the intermediate output of the trained model (pg 3, Section 2.1, para 2: “Each class is represented as a point, a mean activation vector (MAV) with the mean computed over only the correctly classified training examples (line 2 of Alg. 1)” The mean is a numerical characteristic of the activation vector, which is the intermediate output noted above.)

encode the numerical characteristic as metadata and (pg 3, Section 2.1, para 03: “Given the MAV and an input image, we measure distance between them” The distance between the input image and the MAV is considered the metadata; this is also shown in the steps of Algorithm 1, pg 4. The BRI of metadata is simply information or data about other data.)
associate the metadata with the model data; apply the trained model to input data to obtain further intermediate output of the trained model (pg 4, para 01-02: “we seek a per class metarecognition model. In particular, on line 3 of Alg. 1 we use the libMR…FitHigh function to do Weibull fitting on the largest of the distances between all correct positive training instances and the associated μi. This results in a parameter ρi, which is used to estimate the probability of an input being an outlier with respect to class I… Given ρi, a simple rejection model would be for the user to define a threshold that decides if an input should be rejected” The parameter ρi is used to associate the distance, or metadata, with the probability with respect to a class. Given such a parameter, it can be determined by an entity whether data applied to the model should be rejected.)

determine whether the input data is in specification to the training data of the trained model based on the encoded numerical characteristic and the further intermediate output (pg 4: “In particular, on line 3 of Alg.
1 we use the libMR [22] FitHigh function to do Weibull fitting on the largest of the distances between all correct positive training instances and the associated µ…Given ρi, a simple rejection model would be for the user to define a threshold that decides if an input should be rejected” The algorithm checks that the input data of the training data is in specification by computing a fitness, which is based on the encoded input items and intermediate output.)

Regarding Claim 2
Bendale teaches claim 1. Bendale teaches, wherein the trained model is a trained neural network (pg 7, Section 3, para 1: “We use a pre-trained AlexNet (BVLC AlexNet) deep neural network provided by the Caffe software package”)

Regarding Claim 3
Bendale teaches claim 2. Bendale teaches, wherein the training data comprises multiple training data objects, and wherein the processor subsystem is configured to: (pg 4, Algorithm 1. [Algorithm 1 of Bendale is reproduced as an image in the original action.] The entity x includes subscripts i and j to indicate multiple training data objects in an array or vector. Additionally, pg 7, Figure 3 caption notes “The test uses 80,000 images, with 50,000 validation images from ILSVRC 2012, 15,000 fooling images and 15,000 “unknown” images” The test set is part of the training data objects used to characterize the model.)

apply the trained model to individual ones of the multiple training data objects to obtain multiple sets of activation values; and (Figure 1 caption: “Each input image becomes an AV, displayed as 10x450 color pixels” This also describes the activation value computed for “each input”, thus multiple training objects.)

determine the numerical characteristic as a probability distribution of the multiple sets of activation values (pg 4, para 1: “Weibull fitting on the largest of the distances between all correct positive training instances and the associated μi.
This results in a parameter ρi, which is used to estimate the probability of an input being an outlier with respect to class i” pg 4, Algorithm 1, included above. The mean is captured as a single point for multiple sets of activation values v. The set of all means across j classes is used to estimate a distribution of probabilities across the classes.)

Regarding Claim 4
Bendale teaches claim 3. Bendale teaches, wherein the processor subsystem is configured to: obtain out-of-spec data comprising multiple out-of-spec data objects which have characteristics that are out-of-specification from the multiple training data objects (pg 3, para 1: “In Fig. 1, we show examples of activation patterns for our model, input images, fooling images, adversarial images (that the system can reject) and open set images… Experimental analysis of the effectiveness of open set deep networks at rejecting unknown classes, fooling images and obvious errors from adversarial images” Open set images are described as unknown images whose true labels belong to categories outside of the expected classification, thus corresponding to out-of-spec data objects.)

apply the trained neural network to individual ones of the multiple out-of-spec data objects to obtain further multiple sets of activation values (pg 2, Figure 1 caption: “Each example shows the SoftMax (SM) and OpenMax (OM) scores for the real image, the fooling and open set image that produced the AV shown on the left” The multiple open set images (i.e., out-of-spec) are applied to the network shown in the figure and caption.)

select the subset of hidden units to establish a difference, or to increase or maximize the difference, between a) the probability distribution of the multiple sets of activation values and b) a probability distribution of the further multiple sets of activation values. (Examiner notes the claim is in alternative form; only the underlined portion is required by the claim.
pg 4, para 01: “This results in a parameter ρi, which is used to estimate the probability of an input being an outlier with respect to class i” As noted before, a probability distribution over classes is established for each of the multiple sets of activation values. Section 2.1, pg 3: “Furthermore, a direct EVT fitting on the set of class post recognition scores (SoftMax layer) is not meaningful with deep networks, because the final SoftMax layer is intentionally renormalized to follow a logistic distribution. Thus, we analyze the penultimate layer, which is generally viewed as a per-class estimation. This per-class estimation is converted by SoftMax function into the final output probabilities.” The penultimate layer is the selected subset of hidden units used to establish a difference between distributions of activation values and further values which belong to out-of-set samples. Figure 3 caption: “Figure 3. OpenMax and SoftMax-w/threshold performance shown as F-measure as a function of threshold on output probabilities.” The figure shows that the OpenMax method improves the correct rejection rate; thus the difference between the “in spec” and “out of spec” data has been established via selection of the penultimate layer activations.)

Regarding Claim 5
Bendale teaches claim 4. Bendale teaches, wherein the processor subsystem is configured to select the subset of hidden units by a combinatorial optimization method which optimizes the difference between a) the probability distribution of the multiple sets of activation values and b) the probability distribution of the further multiple sets of activation values, as a function of selected hidden units. (Figure 3 caption: “Figure 3.
OpenMax and SoftMax-w/threshold performance shown as F-measure as a function of threshold on output probabilities.” The figure shows that the OpenMax method improves, or optimizes, the correct rejection rate; thus the difference between the “in spec” and “out of spec” distributions has been optimized via selection of a combination of activation values in the penultimate layer.)

Regarding Claim 8
Bendale teaches claim 1. Bendale teaches, configured to generate the model data by training a model using the training data to obtain the trained model (pg 8: “We apply 1-vs-set open set algorithm [20] to the FC8 data. We used liblinear to train a linear SVM on the training samples from the 1000 classes. We also trained a 1-vs-set machine using the liblinear extension cited in [1], refining it on the training data for the 1000 classes. The 1-Vs-Set algorithm achieves an overall F-measure of only .407, which is much lower than the .595 of the OpenMax approach.” A model is trained to generate model data using training data of 1000 classes to obtain the trained model.)

Regarding Claim 9
Bendale teaches claim 1. Bendale teaches, wherein the training data comprises multiple images, and wherein the trained model is configured for image classification or image segmentation (pg 6, Section 3: “Our evaluation is based on ImageNet Large Scale Visual Recognition Competition (ILSVRC) 2012 dataset with 1K visual categories.
The dataset contains around 1.3M images” pg 7, Training phase: “The MAV vector is computed for each class by considering the training examples that deep networks training classified correctly for the respective class” The model performs classification of the image training examples.)

Regarding Claim 10
Bendale teaches, A computer-implemented method of processing a trained model, comprising (pg 2, Figure 1 caption: “The OpenMax algorithm measures distance between an activation vector (AV) for an input and the model vector for the top few classes, adjusting scores and providing an estimate of probability of being unknown. The left side shows activation vectors (AV) for different images, with different AVs separated by black lines. Each input image becomes an AV, displayed as 10x450 color pixels… we show an AV for 4 types of images: the model, a real image, a fooling image and an open set image.” The algorithm is a method for processing images with a trained model.) The remaining limitations are rejected for the reasons set forth in the rejection of claim 1.

Regarding Claim 11
Bendale teaches claim 1. Bendale teaches, if the input data is determined not to be in-specification, generate an output signal (Figure 1 caption, pg 2: “The red OM scores implies the OM algorithm classified the image as unknown” A red score is a visual symbol that the input data image is not in-spec and thus corresponds to generating an output signal.)

Regarding Claim 12
Bendale teaches claim 11. Bendale teaches, an output interface for outputting the output signal to a rendering device for rendering the output signal in a sensory perceptible manner to a user (Figure 1, pg 2: the figure shows that an interface outputs images and their scores visually perceptible to a user.)
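As an editorial illustration of the per-class mean activation vector (MAV) that the action repeatedly maps to the claimed "numerical characteristic" (Bendale, Alg. 1, line 2), the computation can be sketched as below. The function name, array shapes, and toy data are assumptions for illustration; this is not code from the reference or the application.

```python
import numpy as np

def mean_activation_vectors(activations, labels, predictions):
    """Per-class mean activation vector (MAV), averaged over only the
    correctly classified training examples, per Bendale Alg. 1, line 2."""
    mavs = {}
    for c in np.unique(labels):
        # keep only the examples of class c that the network classified correctly
        correct = activations[(labels == c) & (predictions == c)]
        mavs[c] = correct.mean(axis=0)
    return mavs

# toy penultimate-layer activations for 2 classes, 3 examples each
acts = np.array([[1.0, 0.0], [3.0, 0.0], [2.0, 0.0],
                 [0.0, 2.0], [0.0, 4.0], [0.0, 3.0]])
labels = np.array([0, 0, 0, 1, 1, 1])
preds  = np.array([0, 0, 0, 1, 1, 1])  # all correctly classified here
mavs = mean_activation_vectors(acts, labels, preds)
print(mavs[0])  # -> [2. 0.]
```

Each class is thus reduced to a single point in activation space, against which distances of new inputs can later be measured.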
Regarding Claim 13
Bendale teaches claim 11. Bendale teaches, wherein the trained model is a trained neural network (pg 7, Section 3, para 1: “We use a pre-trained AlexNet (BVLC AlexNet) deep neural network provided by the Caffe software package”)

wherein the numerical characteristic is a probability distribution obtained from multiple sets of activation values of a subset of hidden units of the trained neural network, wherein the multiple sets of activation values are obtained by applying the trained model to the training data (pg 4, para 1: “Weibull fitting on the largest of the distances between all correct positive training instances and the associated μi. This results in a parameter ρi, which is used to estimate the probability of an input being an outlier with respect to class i” pg 4, Algorithm 1, included above. The mean is captured as a single point for multiple sets of activation values v. The set of all means across j classes is used to estimate a distribution of probabilities across the classes. The activation values are a result of applying training data to the model.)

wherein the further intermediate output of the trained model comprises a further set of activation values of the subset of hidden units (pg 2, Figure 1 caption: “Each example shows the SoftMax (SM) and OpenMax (OM) scores for the real image, the fooling and open set image that produced the AV shown on the left” The multiple open set images (i.e., out-of-spec) are applied to the network shown in the figure and caption.)

determine a probability of the further set of activation values based on the probability distribution; and determine whether the input data is in-specification as a function of the probability (pg 4, para 01: “This results in a parameter ρi, which is used to estimate the probability of an input being an outlier with respect to class i” The probability is based on the distribution over a set of classes. Determining an outlier is a determination of in-specification.)
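The Weibull tail fit and outlier probability that the action maps to claim 13 can likewise be sketched. The libMR FitHigh call quoted from Bendale is approximated here with `scipy.stats.weibull_min`; the tail size, toy distances, and rejection threshold are assumptions for illustration only, not libMR's actual behavior.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_tail(distances, tail_size=50):
    # fit a Weibull to the largest distances between correctly classified
    # training activations and the class MAV (a scipy stand-in for libMR FitHigh)
    tail = np.sort(np.asarray(distances))[-tail_size:]
    shape, loc, scale = weibull_min.fit(tail, floc=0)
    return shape, loc, scale

def outlier_probability(dist, shape, loc, scale):
    # the CDF value serves as the probability of the input being an outlier
    # with respect to the class; a user-defined threshold then rejects it
    return weibull_min.cdf(dist, shape, loc=loc, scale=scale)

rng = np.random.default_rng(0)
train_dists = rng.gamma(2.0, 1.0, size=500)  # toy distances to a class MAV
params = fit_tail(train_dists)
p_out = outlier_probability(30.0, *params)   # an input far from the MAV
reject = p_out > 0.95                        # threshold-based rejection
```

An input whose distance to the MAV lies far in the fitted tail receives an outlier probability near 1 and is rejected, which is the "simple rejection model" quoted from Bendale pg 4.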
Regarding Claim 14
Bendale teaches claim 10. The limitations are rejected for the reasons set forth in the rejection of claim 11 in connection with claim 10.

Regarding Claim 15
Bendale teaches, A non-transitory computer-readable medium having stored instructions that, when executed by one or more processors, cause the one or more processors to (pg 2, Figure 1 caption: “The OpenMax algorithm measures distance between an activation vector (AV) for an input and the model vector for the top few classes, adjusting scores and providing an estimate of probability of being unknown. The left side shows activation vectors (AV) for different images, with different AVs separated by black lines. Each input image becomes an AV, displayed as 10x450 color pixels… we show an AV for 4 types of images: the model, a real image, a fooling image and an open set image.” The algorithm processes images with a trained model on a computer. pg 7: “We use a pre-trained AlexNet (BVLC AlexNet) deep neural network provided by the Caffe software package” Software is for use on a computer with a computer-readable medium.)

access model data representing a trained model trained for image classification or image segmentation, wherein the trained model was trained using training data (pg 2, Figure 1 caption: “The OpenMax algorithm measures distance between an activation vector (AV) for an input and the model vector for the top few classes, adjusting scores and providing an estimate of probability of being unknown. The left side shows activation vectors (AV) for different images, with different AVs separated by black lines.
Each input image becomes an AV, displayed as 10x450 color pixels… we show an AV for 4 types of images: the model, a real image, a fooling image and an open set image… The red OM scores implies the OM algorithm classified the image as unknown” The system shows the training data images on which the model is trained, and their activations or model data, via an interface for visual inspection. The scores amount to metadata comprising a numerical characteristic.)

apply the trained model to the training data to obtain intermediate output of the trained model, wherein the obtained intermediate output comprises activation values of a subset of hidden units of the trained model (pg 2, para 01: “We use the scores from the penultimate layer of deep networks (the fully connected layer before SoftMax, e.g., FC8) to estimate if the input is “far” from known training data. We call scores in that layer the activation vector (AV). This information is incorporated in our OpenMax model and used to characterize failure of recognition system” The activation vector of the penultimate layer is the intermediate output of the trained deep network or trained model. Examiner notes that during application of the various training data the network is at various stages of training and is thus a “trained model” as claimed.)

and wherein the training data to which the trained model is applied is the same data as the training data that was used to train the trained model (Section 3, pg 7-8: “Our evaluation is based on ImageNet Large Scale Visual Recognition Competition (ILSVRC) 2012 dataset with 1K visual categories. The dataset contains around 1.3M images for training (with approximately 1K to 1.3K images per category), 50K images for validation and 150K images for testing.
Since test labels for ILSVRC 2012 are not publicly available, like others have done we report performance on validation set… During the testing phase, we test the system with all the 1000 categories from ILSVRC 2012 validation set, fooling categories and previously unseen categories… We use fooling images provide” The system is tested on data from the validation set, which is data from the same original dataset as the training images. Further, the system is tested on other fooling and unseen categories, which are the same “type” of data insofar as all the data is image data.)

determine a numerical characteristic descriptive of the training data of the trained model based on the intermediate output of the trained model (pg 3, Section 2.1, para 2: “Each class is represented as a point, a mean activation vector (MAV) with the mean computed over only the correctly classified training examples (line 2 of Alg. 1)” The mean is a numerical characteristic of the activation vector, which is the intermediate output noted above.)

encode the numerical characteristic as metadata and (pg 3, Section 2.1, para 03: “Given the MAV and an input image, we measure distance between them” The distance between the input image and the MAV is considered the metadata; this is also shown in the steps of Algorithm 1, pg 4. The BRI of metadata is simply information or data about other data.)

associate the metadata with the model data; apply the trained model to input data to obtain further intermediate output of the trained model (pg 4, para 01-02: “we seek a per class metarecognition model. In particular, on line 3 of Alg. 1 we use the libMR…FitHigh function to do Weibull fitting on the largest of the distances between all correct positive training instances and the associated μi.
This results in a parameter ρi, which is used to estimate the probability of an input being an outlier with respect to class I… Given ρi, a simple rejection model would be for the user to define a threshold that decides if an input should be rejected” The parameter ρi is used to associate the distance, or metadata, with the probability with respect to a class. Given such a parameter, it can be determined by an entity whether data applied to the model should be rejected. Examiner notes that the underlined portion is considered intended use of the metadata, which is however described by the art.)

determine whether the input data is in specification to the training data of the trained model based on the encoded numerical characteristic and the further intermediate output (pg 4: “In particular, on line 3 of Alg. 1 we use the libMR [22] FitHigh function to do Weibull fitting on the largest of the distances between all correct positive training instances and the associated µ…Given ρi, a simple rejection model would be for the user to define a threshold that decides if an input should be rejected” The algorithm checks that the input data of the training data is in specification by computing a fitness, which is based on the encoded input items and intermediate output.)

if the input data is determined not to be in-specification, generate an output signal indicative of the input data not being in-specification (Figure 1 caption, pg 2: “The red OM scores implies the OM algorithm classified the image as unknown” A red score is a visual symbol that the input data image is not in specification and thus corresponds to generating an output signal indicative of not in specification.)
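The overall claim 15 pipeline as mapped above (metadata holding the numerical characteristic, a conformance check on the further intermediate output, and an output signal when the input is not in-specification) can be sketched as follows. Every name, the distance metric, and the cutoff values are hypothetical; this is a schematic of the claimed flow, not the applicant's or Bendale's implementation.

```python
import numpy as np

def check_in_spec(activation, metadata):
    """Compare a further intermediate output (activation vector) against
    the numerical characteristic stored as metadata: a per-class MAV plus
    a distance cutoff. Emit an output signal when nothing matches."""
    in_spec = False
    for cls, meta in metadata.items():
        dist = np.linalg.norm(activation - meta["mav"])
        if dist <= meta["cutoff"]:
            in_spec = True
    if not in_spec:
        # the claimed output signal indicating input not in-specification
        print("output signal: input not in-specification")
    return in_spec

# metadata associated with the model data (hypothetical values)
metadata = {0: {"mav": np.array([2.0, 0.0]), "cutoff": 1.5},
            1: {"mav": np.array([0.0, 3.0]), "cutoff": 1.5}}
check_in_spec(np.array([2.2, 0.1]), metadata)    # near class 0 MAV: in spec
check_in_spec(np.array([10.0, 10.0]), metadata)  # far from both: signal emitted
```

The design point the claims turn on is that the MAVs and cutoffs travel with the model as metadata, so the conformance check needs only the stored characteristic and the new activation, not the original training data.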
Regarding Claim 16
Claim 16 is rejected for the reasons set forth in the rejection of claim 2 in connection with claim 15.

Regarding Claim 17
Claim 17 is rejected for the reasons set forth in the rejection of claim 3 in connection with claim 15.

Regarding Claim 18
Claim 18 is rejected for the reasons set forth in the rejection of claim 4 in connection with claim 15.

Regarding Claim 19
Claim 19 is rejected for the reasons set forth in the rejection of claim 5 in connection with claim 15.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 6, 7 and 20 are rejected under 35 U.S.C.
§ 103 as being unpatentable over Bendale, further in view of Kardan et al., “Fitted Learning: Models with Awareness of their Limits”.

Regarding Claim 6
Bendale teaches claim 5. Bendale does not explicitly teach, wherein the processor subsystem is configured to express the difference as or based on at least one of the group of: a Kullback-Leibler divergence measure, a cross entropy measure, and a mutual information measure.

Kardan, however, when addressing optimization of open set data rejection, teaches, wherein the processor subsystem is configured to express the difference as or based on at least one of the group of: a Kullback-Leibler divergence measure, a cross entropy measure, and a mutual information measure (pg 3, The Competitive Overcomplete Output Layer (COOL): “In particular, each output unit is replaced by an internally-competitive aggregate… During the training phase, all the member units of the same neuron aggregate are trained with the same value, i.e. zero or 1/ω, depending on the desired activation of the corresponding aggregate… a cross-entropy cost function is then applied straightforwardly to train such a network…. It is important to highlight the role of the two components of a COOL layer… in effect preventing output units from overgeneralizing into regions far away from the training instances. These experiments will suggest that internally-competitive aggregates are able to learn an appropriate probability distribution of their assigned concepts” The cross-entropy, as noted in the specification para. 0140, serves to train the model to express a difference between in-spec and out-of-spec data. Preventing overgeneralization amounts to increasing a difference between in-spec and out-of-spec data.)
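For reference, the Kullback-Leibler divergence and cross-entropy measures recited in claim 6 can be computed for two discrete activation distributions as below. The toy distributions are assumptions, and this generic sketch is not the COOL training objective of Kardan.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) between two discrete distributions (eps avoids log 0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = H(p) + D_KL(p || q)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(-np.sum(p * np.log(q)))

in_spec  = [0.7, 0.2, 0.1]  # toy distribution of in-spec activations
out_spec = [0.1, 0.2, 0.7]  # toy distribution of out-of-spec activations
diff = kl_divergence(in_spec, out_spec)  # larger value = larger difference
```

Either quantity is a scalar "difference" between the in-spec and out-of-spec distributions in the sense the claim recites, which is why they are grouped in the alternative.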
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the neural network system which rejects out-of-spec data described by Bendale with the system described by Kardan, which is able to separate out-of-spec data via a cross-entropy function. One would have been motivated to make such a combination because both Bendale and Kardan are concerned with the Open Set problem of rejecting out-of-spec data. Kardan notes that “the COOL mechanism can effectively prevent overgeneralization, suggesting the ability of these types of network to capture an implicit understanding of the data generation process” and “these results suggest that COOL can significantly improve the inhibition ability (rejection rate) of learning models, which in turn leads to a more accurate representation of the knowledge embedded in the dataset and robustness of the learned concepts.” (Kardan pg 5 and 11, respectively)

Regarding Claim 7
Bendale teaches claim 4. Bendale does not explicitly teach, use a generator part of a generative adversarial network to generate negative samples based on the training data; generate the out-of-spec data from the negative samples.

Kardan, however, when addressing the use of fooling images generated to train the model via a generator network, teaches, use a generator part of a generative adversarial network to generate negative samples based on the training data; generate the out-of-spec data from the negative samples (pg 8-9, Generating Fooling Instances: “A random input instance x is fed into a new trainable neural network g, called the fooling generator network (FGN), whose output is passed to the actual model f. In other words, g(x) is the fooling input to the network instead of x. Gradient descent can train network g such that g(x) generates a good fooling (false positive) image…. During this procedure, x and parameters of f are fixed and only g is being trained….
for a given input z…and target label” The generator, or generator part of a generative adversarial network to fool the target model, generates false images, or negative samples, based on target labels in the training data. The trained network generates out-of-spec data from these samples.)

Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the neural network system which rejects out-of-spec data described by Bendale with the system described by Kardan, which is able to separate out-of-spec data using an adversarial network. One would have been motivated to make such a combination because both Bendale and Kardan are concerned with the Open Set problem of rejecting out-of-spec data. Kardan notes that “This approach has several advantages:… a good choice of architecture can indirectly impose desirable constraints on the generated fooling (false positive) examples, e.g. when dealing with images, a convolutional FGN g imposes some natural image properties on the generated fooling…” (Kardan pg 9)

Regarding Claim 20
Claim 20 is rejected for the reasons set forth in the rejection of claim 7 in connection with claim 15.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHNATHAN R GERMICK, whose telephone number is (571) 272-8363. The examiner can normally be reached M-F 7:30-4:30.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kakali Chaki, can be reached at 571-272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.R.G./ Examiner, Art Unit 2122
/KAKALI CHAKI/ Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

Feb 24, 2021
Application Filed
Jan 17, 2025
Non-Final Rejection — §101, §102, §103
Apr 28, 2025
Response Filed
May 23, 2025
Final Rejection — §101, §102, §103
Aug 18, 2025
Request for Continued Examination
Aug 26, 2025
Response after Non-Final Action
Nov 06, 2025
Non-Final Rejection — §101, §102, §103
Feb 09, 2026
Response Filed
Mar 10, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566962
DITHERED QUANTIZATION OF PARAMETERS DURING TRAINING WITH A MACHINE LEARNING TOOL
2y 5m to grant Granted Mar 03, 2026
Patent 12566983
MACHINE LEARNING CLASSIFIERS PREDICTION CONFIDENCE AND EXPLANATION
2y 5m to grant Granted Mar 03, 2026
Patent 12554977
DEEP NEURAL NETWORK FOR MATCHING ENTITIES IN SEMI-STRUCTURED DATA
2y 5m to grant Granted Feb 17, 2026
Patent 12443829
NEURAL NETWORK PROCESSING METHOD AND APPARATUS BASED ON NESTED BIT REPRESENTATION
2y 5m to grant Granted Oct 14, 2025
Patent 12443868
QUANTUM ERROR MITIGATION USING HARDWARE-FRIENDLY PROBABILISTIC ERROR CORRECTION
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
47%
Grant Probability
79%
With Interview (+32.1%)
4y 2m
Median Time to Grant
High
PTA Risk
Based on 91 resolved cases by this examiner. Grant probability derived from career allow rate.
