Prosecution Insights
Last updated: April 19, 2026
Application No. 18/303,394

SYSTEM AND METHOD FOR DETECTING UNHANDLED APPLICATIONS IN CONTRASTIVE SIAMESE NETWORK TRAINING

Non-Final OA (§101, §103)
Filed: Apr 19, 2023
Examiner: HUTCHESON, CODY DOUGLAS
Art Unit: 2659
Tech Center: 2600 (Communications)
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 62% (Moderate)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 15 granted / 24 resolved; +0.5% vs TC avg)
Interview Lift: +47.1% (strong lift for resolved cases with interview)
Typical Timeline: 2y 10m avg prosecution
Currently Pending: 34
Career History: 58 total applications across all art units

Statute-Specific Performance

§101: 33.9% (-6.1% vs TC avg)
§103: 40.9% (+0.9% vs TC avg)
§102: 14.8% (-25.2% vs TC avg)
§112: 7.5% (-32.5% vs TC avg)
Note: comparisons use a Tech Center average estimate; based on career data from 24 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/10/2025 has been entered.

Response to Arguments

1. Regarding the rejection of claims 1-20 under 35 U.S.C. § 101, Applicant's arguments filed 10/10/2025 have been fully considered but they are not persuasive. Applicant argues that claims 1-20 are patent eligible as the claims are not directed to an abstract idea; specifically, that the claims recite an improvement to pre-trained LLMs for classifying unhandled utterances (pg. 14, 1st para.; pg. 15, 1st para. of Remarks), and that the claims might involve a judicial exception but are not directed to an abstract idea (pg. 15, 3rd para.). The Examiner respectfully disagrees. The claims as currently recited are directed to an abstract idea.
Specifically, limitations reciting “receiving an input utterance”, “determining that the input utterance corresponds to an unhandled utterance and generating…an utterance embedding vector based on the input utterance, the utterance embedding vector representing the input utterance relative to the plurality of classes”, and “obtaining…a predicted class within the plurality of classes for the input utterance, the predicted class identified based on a similarity of the predicted class to an expected class for a similar utterance…”, as currently recited, fall under the abstract idea grouping of mental processes under Step 2A Prong 1, as they recite processes which can be performed in the human mind using pen and paper. A person can look at an input utterance (e.g., a sentence) and can write down information about the sentence in the form of an embedding (e.g., a numerical vector). A person can further determine that the utterance is unhandled (e.g., can read and determine that the sentence is not directed towards a particular class/topic). A person can further make a determination for a predicted class by selecting the class with the highest similarity (e.g., select class 2 as the predicted class if the class 2 similarity is highest). Furthermore, the steps of “the similarity…based on a spatial parameter representing a distance of the utterance embedding vector for the predicted class from the target embedding vector associated with the expected class” and “passing the similarity between the predicted class and the expected class to a loss function and, using the at least one processing device, updating parameters for the pre-trained large language model mapping the input utterance to the plurality of classes, including the predicted class and the expected class” both describe mathematical calculations using words, and thus fall under the abstract idea grouping of mathematical concepts. Therefore, claim 1 recites abstract ideas.
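The "select the class with the highest similarity" operation described above can be illustrated as a nearest-target lookup. This is a minimal sketch only: the class names and 2-D vectors are hypothetical stand-ins, not material from the application or the cited references, and real utterance embeddings would be high-dimensional model outputs.

```python
import math

# Hypothetical target embedding vectors, one per class (illustrative only).
target_embeddings = {
    "class_1": [0.9, 0.1],
    "class_2": [0.1, 0.9],
}

def predict_class(utterance_embedding, targets):
    # Highest similarity is modeled here as the smallest Euclidean
    # distance to a class's target embedding vector, matching the
    # "spatial parameter representing a distance" framing above.
    return min(targets, key=lambda c: math.dist(utterance_embedding, targets[c]))
```

For example, `predict_class([0.8, 0.2], target_embeddings)` selects `"class_1"`, the class whose target embedding is closest.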
Under the Step 2A Prong 2 analysis, the claims are evaluated to determine if additional elements integrate the abstract idea into a practical application. The additional elements in claim 1 are “using a pre-trained large language model operating on at least one processing device of an electronic device…”, “generating, using the pre-trained large language model operating on the at least one processing device, an utterance embedding vector”, and “using the at least one processing device, updating parameters of the pre-trained large language model…”. These limitations amount to mere instructions to implement the judicial exception using a generic computer. The additional elements as currently recited do not integrate the judicial exception into a practical application as they do not provide any meaningful limits on practicing the abstract idea, and instead merely perform the mental processes “using the pre-trained large language model”. Furthermore, claim 1 as currently written does not reflect an improvement to classifying unhandled utterances using an LLM, as the claim does not apply the updated LLM model and use the output of the updated model to classify utterances. Therefore, claim 1 does not integrate the judicial exception into a practical application, and is directed to an abstract idea. Hence, Applicant’s arguments are not persuasive.

2. Regarding the rejections under 35 U.S.C. § 103, Applicant’s arguments with respect to claim 1 have been considered but they are not persuasive. Applicant argues that the Pan reference does not disclose or suggest an “input utterance corresponding to an unhandled utterance” (pg. 17, 2nd para. of Remarks) and that Pan does not describe predicting a cluster for an input feature vector that falls outside all cluster boundaries based on similarity to one of the existing clusters (pg. 17, 3rd para. of Remarks). The Examiner respectfully disagrees.
Pan discloses a step in which a determination is made as to whether an input utterance corresponds to an unhandled utterance, based on determining whether a feature vector corresponding to the input utterance falls within or outside a cluster boundary (para. 0187 “At decision block 1620, the classifier model 324 makes a decision based on comparing the input feature vector to the cluster boundaries. If the input feature vector does not fall inside any cluster boundary, and thus falls outside all the cluster boundaries 1010, the method 1600 proceeds to block 1625.”), the cluster boundaries having been established via prior training (Fig. 14, 1480). Determining that this input utterance falls outside all cluster boundaries, as disclosed in para. 0187, reads on an “input utterance corresponding to an unhandled utterance”. Furthermore, determining whether or not the input feature vector falls within one of the cluster boundaries (whether the input feature vector is similar enough/close enough in the feature space to the centroid of the cluster, see Fig. 17), and further determining that the input feature vector falls outside every cluster (Fig. 16, ‘No’ branch), amounts to predicting a cluster for an input feature vector that falls outside all cluster boundaries based on similarity to one of the existing clusters. Hence, Applicant's arguments are not persuasive.

Claim Rejections - 35 USC § 101

3. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Regarding claim 1, “A method” is recited, which is directed to one of the four statutory categories of invention (process) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite mental processes or mathematical concepts which fall into the category of abstract idea (Step 2A Prong 1: YES).
The following limitations, under their broadest reasonable interpretation, recite mental processes or mathematical concepts:

“receiving an input utterance”: a person reads an utterance (e.g., a sentence).

“in response to receiving the input utterance …in which each class of a plurality of classes …has a target embedding, determining that the input utterance corresponds to an unhandled utterance”: a person determines that an utterance is unhandled (e.g., not associated with a particular class/topic of a plurality of predefined classes having target embeddings).

“generating…an utterance embedding vector … based on the input utterance, the utterance embedding vector representing the input utterance relative to the plurality of classes”: a person reads an input utterance, creates a vector reflecting an expected class the input belongs to, and then writes the vector down using pen and paper.

“obtaining…a predicted class within the plurality of classes for the input utterance, the predicted class identified based on a similarity of the predicted class to an expected class for a similar utterance, the expected class within the plurality of classes, the similarity of the predicted class to the expected class determined based on a spatial parameter representing a distance of the utterance embedding vector for the predicted class from the target embedding vector associated with the expected class”: a person obtains a predicted class for the input by using a similarity of the predicted class to the expected class, and then selects the class with the highest similarity. Determining the spatial parameter representing a distance of the utterance embedding vector to the target embedding vectors recites a mathematical calculation in words, and thus falls under the abstract idea grouping of mathematical concepts.
“passing the similarity between the predicted class and the expected class to a loss function and, … updating parameters of the pre-trained large language model mapping the input utterance to the plurality of classes, including the predicted class and the expected class”: using a loss function to update parameters of a pre-trained large language model recites a mathematical calculation in words, and thus falls under the abstract idea grouping of mathematical concepts.

Claim 1 does not contain any additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional limitations are “using a pre-trained language model operating on at least one processing device of an electronic device”, “generating, using the pre-trained large language model operating on the at least one processing device, an utterance embedding”, and “updating parameters of the pre-trained large language model…”. These limitations are recited at a high level of generality, and amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to integrate the judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. Accordingly, claim 1 is directed to an abstract idea.

Claim 1 does not contain any additional elements which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above with respect to integration into a practical application, the additional limitations amount to mere instructions to implement the judicial exception using a generic computer.
Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to allow the claim to amount to significantly more than the judicial exception, as they do not provide an inventive concept. Therefore, claim 1 is not patent eligible.

Regarding dependent claims 2-8, “The method” is recited, which is directed to one of the four statutory categories of invention (process) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite mental processes which fall into the category of abstract idea (Step 2A Prong 1: YES). Claims 2-8 contain the mental processes of claim 1 due to their dependence on claim 1. Additionally, the following limitations, under their broadest reasonable interpretation, recite mental processes or mathematical concepts, both of which fall under the category of abstract idea:

Claim 2: “obtaining training data comprising a plurality of historical embedding vectors representing historical utterances labeled with one or more classes; and for each class of the plurality of classes, (i) determining a mean or a median of embedding vectors in that class and (ii) identifying one of the historical embedding vectors closest to the mean or the median as the target embedding vector for that class”: a person obtains training data of embedding vectors for one or more classes, calculates a mean of the embedding vectors for each class using pen and paper, and then selects a particular vector as the target vector using a distance metric to find the closest vector to the mean.
Claim 3: “a distance of the utterance embedding vector to the spatial parameter of a specified one of the plurality of classes comprises a distance of the utterance embedding vector to a threshold boundary of the specified one of the plurality of classes; a positive value of the distance corresponds to the utterance embedding vector being inside the threshold boundary; and a negative value of the distance corresponds to the utterance embedding vector being outside the threshold boundary.”: a person calculates a distance (e.g., Euclidean distance) using pen and paper between a vector and a threshold boundary, with the value being made positive if it is inside the boundary and negative if it is outside the boundary.

Claim 4: “the distance of the utterance embedding vector to the spatial parameter of a specified one of the plurality of classes comprises a distance of the utterance embedding vector to a class target of the specified one of the plurality of classes; and the distance of an utterance embedding to an unhandled class comprises a smooth negative maximum of distances from the utterance embedding vector to the class targets of the plurality of classes.”: a person calculates a distance (e.g., Euclidean distance) using pen and paper between a vector and a class target vector for a class. Calculating a smooth negative maximum of distances from the utterance embedding vector to the class targets is a mathematical concept.

Claim 5: “wherein the smooth negative maximum of distances is calculated using a trainable vector.”: using a trainable vector to calculate the smooth negative maximum is a mathematical concept.
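The "smooth negative maximum of distances" recited in claims 4-5 is not defined in this action. One common smooth approximation of a maximum is log-sum-exp, so a hedged sketch might look as follows; the per-class weights standing in for the claimed trainable vector, and the log-sum-exp form itself, are assumptions rather than the claimed formulation.

```python
import math

def smooth_neg_max(distances, weights=None, beta=1.0):
    # Negated log-sum-exp approximation of max(distances); as beta grows
    # this approaches -max(distances). The optional per-class weights are
    # a hypothetical stand-in for the claimed trainable vector.
    if weights is None:
        weights = [1.0] * len(distances)
    return -(1.0 / beta) * math.log(
        sum(w * math.exp(beta * d) for w, d in zip(weights, distances))
    )
```

With a large `beta`, `smooth_neg_max([1.0, 3.0, 2.0], beta=20.0)` is close to `-3.0`, i.e., the negative of the largest distance.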
Claim 6: “the utterance embedding vector for the input utterance is mapped to a number of dimensions equal to the number of classes, each dimension representing a single class; a positive value of a specified dimension indicates a positive label for the corresponding class; and negative values of all dimensions representing the plurality of classes indicate an unhandled label.”: a person writes down a vector with “n” dimensions, where n is the number of classes, writes down a positive value (e.g., +1) for classes that the input is believed to fall into and a negative value (e.g., -1) for classes that the input is not believed to fall into, with all negative values meaning that the input does not fall into any class and thus is unhandled.

Claim 7: “inputting the input utterance to the … model, the input utterance comprising multiple tokens; outputting, by the … model, a token embedding vector for each of the tokens of the input utterance; and pooling the token embedding vectors to generate the utterance embedding vector.”: a person takes an input of multiple tokens (e.g., a sentence with multiple words), uses a language model as a set of rules to write down a vector corresponding to each word, and then combines the individual vectors to write down an utterance embedding vector using pen and paper.

Claim 8: “the target embedding vectors include multiple training utterances representing an unhandled class; and the predicted class associated with the input utterance is obtained based on distances of the utterance embedding vector to (i) the spatial parameters representing the plurality of classes and (ii) additional spatial parameters representing the unhandled class.”: a person includes unhandled inputs, predicts what class a vector belongs to using a distance (e.g., Euclidean distance) from a spatial parameter (e.g., the centroid of a class), and decides what class it belongs to (either one of the plurality of classes or the unhandled class) based on the shortest distance to one of the spatial parameters.

Claims 2-8 do not contain any additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional limitations are those from claim 1, as discussed above for integration of the judicial exception into a practical application for claim 1, “inputting…to the pre-trained large language model” (claim 7), and “outputting, by the pre-trained large language model” (claim 7). As discussed above, these limitations are recited at a high level of generality, and amount to mere instructions to implement the judicial exception using a generic computer. The limitations in claim 7 likewise recite mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to integrate the judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. Accordingly, claims 2-8 are directed to an abstract idea.

Claims 2-8 do not contain any additional elements which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above with respect to integration into a practical application, the additional limitations amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to allow the claims to amount to significantly more than the judicial exception, as they do not provide an inventive concept. Therefore, claims 2-8 are not patent eligible.
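The tokenize/embed/pool sequence described for claim 7 above can be sketched as follows. The `token_embedding` function is a toy deterministic stand-in for a language model's per-token output, and mean pooling is just one common pooling choice; neither is taken from the application.

```python
def token_embedding(token):
    # Toy stand-in for a language model's per-token embedding; a real
    # model would produce learned high-dimensional vectors.
    return [float(len(token)), float(ord(token[0]) % 7)]

def utterance_embedding(utterance):
    # One token embedding vector per token of the input utterance.
    vectors = [token_embedding(tok) for tok in utterance.split()]
    # Mean-pool the token embedding vectors into a single
    # utterance embedding vector.
    dims = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]
```

With the toy embedding above, `utterance_embedding("hi there")` averages the two per-token vectors into one utterance-level vector.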
Regarding claim 9, “An electronic device” is recited, which is directed to one of the four statutory categories of invention (machine) (Step 1: YES). However, the claim limitations, under their broadest reasonable interpretation, recite limitations similar to those recited in claim 1, and thus also recite mental processes (Step 2A Prong 1: YES; see explanation above for claim 1).

Claim 9 does not contain any additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional limitations are those discussed with regards to claim 1, and “An electronic device comprising: at least one processing device configured to:”. These limitations are recited at a high level of generality, and amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to integrate the judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. Accordingly, claim 9 is directed to an abstract idea.

Claim 9 does not contain any additional elements which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above with respect to integration into a practical application, the additional limitations amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to allow the claim to amount to significantly more than the judicial exception, as they do not provide an inventive concept. Therefore, claim 9 is not patent eligible.

Regarding dependent claims 10-16, “The electronic device” is recited, which is directed to one of the four statutory categories of invention (machine) (Step 1: YES).
However, the claim limitations, under their broadest reasonable interpretation, recite limitations similar to claims 2-8 respectively, and thus also recite mental processes and mathematical concepts (Step 2A Prong 1: YES; see explanation above for claims 2-8).

Claims 10-16 do not contain any additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional limitations are those from claim 9, as discussed above for integration of the judicial exception into a practical application for claim 9. As discussed above, these limitations are recited at a high level of generality, and amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to integrate the judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. Accordingly, claims 10-16 are directed to an abstract idea.

Claims 10-16 do not contain any additional elements which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above with respect to integration into a practical application, the additional limitations amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to allow the claims to amount to significantly more than the judicial exception, as they do not provide an inventive concept. Therefore, claims 10-16 are not patent eligible.

Regarding claim 17, “A non-transitory machine-readable medium” is recited, which is directed to one of the four statutory categories of invention (article of manufacture) (Step 1: YES).
However, the claim limitations, under their broadest reasonable interpretation, recite limitations similar to those in claim 1, and thus also recite mental processes which fall into the category of abstract idea (Step 2A Prong 1: YES; see explanation above for claim 1).

Claim 17 does not contain any additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional limitations are those discussed with respect to claim 1 and “A non-transitory machine-readable medium containing instructions that when executed cause at least one processor of an electronic device to:”. These limitations are recited at a high level of generality, and amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to integrate the judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. Accordingly, claim 17 is directed to an abstract idea.

Claim 17 does not contain any additional elements which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above with respect to integration into a practical application, the additional limitations amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to allow the claim to amount to significantly more than the judicial exception, as they do not provide an inventive concept. Therefore, claim 17 is not patent eligible.

Regarding dependent claims 18-20, “The non-transitory machine-readable medium” is recited, which is directed to one of the four statutory categories of invention (article of manufacture) (Step 1: YES).
However, the claim limitations, under their broadest reasonable interpretation, recite limitations similar to claims 2-4 respectively, and thus also recite mental processes or mathematical concepts (Step 2A Prong 1: YES; see explanation above for claims 2-4).

Claims 18-20 do not contain any additional elements which integrate the judicial exception into a practical application (Step 2A Prong 2: NO). The only additional limitations are those from claim 17, as discussed above for integration of the judicial exception into a practical application for claim 17. As discussed above, these limitations are recited at a high level of generality, and amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to integrate the judicial exception into a practical application, as they do not impose any meaningful limits on practicing the abstract idea. Accordingly, claims 18-20 are directed to an abstract idea.

Claims 18-20 do not contain any additional elements which amount to significantly more than the judicial exception (Step 2B: NO). As discussed above with respect to integration into a practical application, the additional limitations amount to mere instructions to implement the judicial exception using a generic computer. Even when viewed in combination, mere instructions to implement the judicial exception using a generic computer are not enough to allow the claims to amount to significantly more than the judicial exception, as they do not provide an inventive concept. Therefore, claims 18-20 are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

4. Claims 1, 7, 9, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Pan et al. (US PGPUB No. 2021/0083994, hereinafter Pan) in view of Arik et al. (US PGPUB No. 2021/0279517, hereinafter Arik) and further in view of Xu et al. (NPL Unsupervised Out-of-Domain Detection via Pre-trained Transformers, hereinafter Xu).

Regarding claim 1, Pan discloses receiving an input utterance (para. 0184 “At block 1605 of the method 1600, the master bot 114 accesses an input utterance 303 that has provided as user input 110. For instance, in some embodiments, a user may provide user input 110 in the form of speech input, and the digital assistant 106 may convert that user input 110 into a textual input utterance 303 for use by the master bot 114.”); in response to receiving the input utterance, using a … language model operating on at least one processing device of an electronic device (para. 0185 “At block 1610, the master bot 114 may generate an input feature vector from the input utterance 303 accessed at block 1605. More specifically, in some embodiments, the master bot 114 causes the classifier model 324 of the master bot 114 to generate the input feature vector from the input utterances 303. The input feature vector may describe and represent the input utterance 303.
Various techniques are known for converting a sequence of words, such as an input utterance, into a feature vector, and one or more of such techniques may be used. For instance, the training system 350 may, but need not, use a one-hot encoding or some other encoding to encode the input utterance 303 as a corresponding input feature vector. However, an embodiment of the master bot 114 uses the same technique as was used to generate training feature vectors 620 from training utterances 615 when training the classifier model 324.”) in which each class of a plurality of classes for the …language model has a target embedding vector (Fig. 15, “Centroid 1510a” and “Centroid 1510b”; para. 0158 “…the training system 350 may set (i.e., initialize) a count, which is a quantity of clusters to be generated. In some embodiments, for example, the count may initially be set to the quantity n, where n is the total number of intents”; para. 0160 “Specifically, at block 1425, the training system 3509 may determine respective centroid locations (i.e., a respective location for each centroid) for the various clusters to be generated in this iteration; the quantity of centroid locations is equal to the count determined at block 1415.”), determining that the input utterance corresponds to an unhandled utterance (para. 0187 “At decision block 1620, the classifier model 324 makes a decision based on comparing the input feature vector to the cluster boundaries. If the input feature vector does not fall inside any cluster boundary, and thus falls outside all the cluster boundaries 1010, the method 1600 proceeds to block 1625.”) and generating, using the …language model operating on the at least one processing device, an utterance embedding vector based on the input utterance (para. 0185 “At block 1610, the master bot 114 may generate an input feature vector from the input utterance 303 accessed at block 1605. 
More specifically, in some embodiments, the master bot 114 causes the classifier model 324 of the master bot 114 to generate the input feature vector from the input utterances 303. The input feature vector may describe and represent the input utterance 303.”), the utterance embedding vector representing the input utterance relative to the plurality of classes (Fig. 17, feature vector represents input utterance in feature space along with clusters); obtaining, using the at least one processing device, a predicted class within the plurality of classes for the input utterance (step 1620 compares input feature vector with plurality of class boundaries; the feature vector falling within a particular boundary (‘Yes’ branch) predicts a class associated with the cluster (step 1630), the feature vector falling outside every boundary (‘No’) corresponds to an unhandled class: para. 0187 “At decision block 1620, the classifier model 324 makes a decision based on comparing the input feature vector to the cluster boundaries. 
If the input feature vector does not fall inside any cluster boundary, and thus falls outside all the cluster boundaries 1010, the method 1600 proceeds to block 1625.”), the predicted class identified based on a similarity of the predicted class to an expected class for a similar utterance, the expected class within the plurality of classes (the step at block 1620 of determining if the input feature vector falls within the boundary amounts to a similarity between the input feature vector and each class cluster), the similarity of the predicted class to the expected class determined based on a spatial parameter representing a distance of the utterance embedding vector for the predicted class from the target embedding vector associated with the expected class (in a particular embodiment, the boundary is a hypersphere with a radius extending from a target embedding vector (centroid); determining the similarity (whether the input feature vector falls within the boundary) amounts to determining if the input feature vector is within a certain distance from the centroid: para. 0175 “In some embodiments, the boundary 1010 for a cluster is defined to center on the centroid of the cluster and to include all the training feature vectors 620 assigned to the cluster. In some embodiments, for instance, the boundary 1010 of a cluster is a hypersphere (e.g., a circle or a sphere) having its center at the centroid. In some embodiments, the radius of the boundary 1010 may be a margin value (i.e., a padding amount) plus the larger of (1) the maximum distance from the center to the training feature vector 620, in that cluster, that is farthest from the centroid, or (2) the mean of the respective distances to the centroid from the training feature vectors 620 in the cluster, plus three times the standard deviation of such distances.”; para. 0187 “At decision block 1620, the classifier model 324 makes a decision based on comparing the input feature vector to the cluster boundaries.
If the input feature vector does not fall inside any cluster boundary, and thus falls outside all the cluster boundaries 1010, the method 1600 proceeds to block 1625.”). Pan does not specifically disclose: passing the similarity between the predicted class and the expected class to a loss function and, using the at least one processing device, updating parameters of the pre-trained…language model mapping the input utterance to the plurality of classes, including the predicted class and the expected class. Arik teaches passing the similarity between the predicted class and the expected class to a loss function (para. 0024 “The classification model 210 further generates a respective query encoding 212Q, h.sub.i, for each training example in the query set of training examples 114Q and the DBLE determines a class distance measure representing a respective distance between the query encoding 212Q and the centroid value 214 determined for each respective class. …Specifically, the DBLE employs a proto-loss for classification 215 that receives the query encoding 212Q and the centroid values 214, 214a-n determined for each of the N number of respective classes to determine/calculate the respective class distance measures, and also receives the ground-truth centroid value 212G to determine/calculate the ground-truth distance between the query encoding 212Q and the ground-truth centroid value 212G.”) and, using the at least one processing device, updating parameters of the …language model mapping the input utterance to the plurality of classes, including the predicted class and the expected class (Fig. 1, 114; para. 0020 “Each training example 114 includes a corresponding ground-truth label indicating the respective class the training example 114 belongs to.”; para. 
0024 “The classification model 210 further generates a respective query encoding 212Q, h.sub.i, for each training example in the query set of training examples 114Q and the DBLE determines a class distance measure representing a respective distance between the query encoding 212Q and the centroid value 214 determined for each respective class. The DBLE also determines a ground-truth distance between the query encoding 212Q and the ground-truth centroid value 212G associated with the corresponding training example in the query set of training examples 114Q and updates parameters of the classification model 210 based on the class distance measure and the ground-truth distance.”). Pan and Arik are considered to be analogous to the claimed invention as they both are in the same field of classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pan to incorporate the teachings of Arik in order to pass a similarity between the predicted and expected classes to a loss function and to update parameters of the language model mapping the input utterance to the plurality of classes including the predicted class and the expected class. Doing so would be beneficial, as utilizing the distance-based Learning from Errors (DBLE) framework taught in Arik to update the model would lead to a well-calibrated model (Arik, para. 0019). Pan in view of Arik discloses use of a language model (see above claim mapping); however, Pan in view of Arik does not specifically disclose the use of a pre-trained large language model. Xu teaches generating an utterance embedding vector using a pre-trained large language model (pg. 3, section 3.1, “After pretraining, we can obtain a BERT/RoBERTa model f with L layers. We denote fl(x) …as the d-dimensional feature embeddings corresponding to the l-th layer for input x…”; pg. 2, section 2 “Assume that we have a collection of text inputs Dn := {xi}ni=1…”). 
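The distance-to-loss step attributed to Arik above, passing query-to-centroid distances through a classification loss, can be sketched in the style of a prototypical-network loss: softmax over negative distances, then the negative log-likelihood of the expected (ground-truth) class. A minimal sketch assuming Euclidean distance and NumPy arrays; it is an illustrative stand-in, not the reference's actual DBLE implementation.

```python
import numpy as np

def proto_loss(query_encoding, centroids, expected_class):
    """Distance-based classification loss in the spirit of a proto-loss.

    A smaller distance to a class centroid yields a higher class
    probability; the loss is the negative log-likelihood of the
    expected class. Illustrative sketch only.
    """
    d = np.linalg.norm(centroids - query_encoding, axis=1)  # distance per class
    logits = -d                                             # closer -> larger logit
    m = logits.max()                                        # stabilize log-sum-exp
    log_probs = logits - (m + np.log(np.exp(logits - m).sum()))
    return -log_probs[expected_class]                       # NLL of expected class
```

Updating the encoder's parameters would then follow from the gradient of this loss; the sketch shows only the forward computation.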
Xu further teaches updating parameters of the pre-trained large language model (pg. 4, section 3.2 “Feature fine-tuning”). Pan, Arik, and Xu are considered to be analogous to the claimed invention as they are all in the same field of classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pan in view of Arik to incorporate the teachings of Xu in order to specifically utilize a pre-trained large language model for the method taught in Pan in view of Arik. Doing so would be beneficial, as pre-trained large language models, such as BERT and its variants, contain features which can be leveraged for OOD detection (Xu, pg. 3, 1st para.). Regarding claim 7, Pan in view of Arik and further in view of Xu discloses inputting the input utterance to the pre-trained large language model (Pan, para. 0118 “As mentioned above, the classifier model 324 may use feature vectors when determining whether an input utterances if unrelated, or related, any available skill bots 116.”; Xu, pre-trained large language model), the input utterance comprising multiple tokens (Pan, para. 0057 “An utterance can be a fragment, a sentence, multiple sentences, one or more words, one or more questions, combinations of the aforementioned types, or the like.”); outputting, by the pre-trained large language model, a token embedding vector for each of the tokens of the input utterance (Pan, para. 0119 “The concept of a feature vector is based on the concept of word embeddings. Generally, word embedding is a type of language modeling wherein words are mapping to corresponding vectors. A particular word embedding may map words that semantically similar to similar regions of a vector space, such that similar words are close together in the vector space and dissimilar words are far apart. 
A simple example of a word embedding uses a “one hot” encoding, in which each word in a dictionary is mapped to a vector with a quantity of dimensions equal to the size of the dictionary, such that the vector has a value of 1 in a dimension corresponding to the word itself and a value of zero in all other dimensions. For example, the first two words of the sentence “Go intelligent bot service artificial intelligence, Oracle” could be represented using the following “one hot” encoding:”; Xu, pre-trained large language model); and pooling the token embedding vectors to generate the utterance embedding vector (para. 0119 “Feature vectors can be used to represents words, sentences, or various types of phrases. Given the above simple example of a word embedding, a corresponding feature vector might map a series of words, such as an utterance, to a feature vector that is an aggregate of the word embeddings of the words in the series. That aggregate may be, for instance, a sum, an average, or a weighted average.”). Regarding claim 9, claim 9 is an electronic device claim with limitations similar to the limitations of method claim 1 and is rejected under similar rationale. Additionally, Pan discloses An electronic device (Fig. 26, 2600) comprising: at least one processing device configured to (Fig. 26 “Processing Units 2632” and “2634”). Regarding claim 15, claim 15 contains limitations similar to claim 7 and is thus rejected for analogous reasons to claim 7. Regarding claim 17, claim 17 is a non-transitory machine-readable medium claim with limitations similar to the limitations of method claim 1 and is rejected under similar rationale. Additionally, Pan discloses A non-transitory machine-readable medium containing instructions that when executed cause at least one processor of an electronic device to (para. 0153 “The method 1400 depicted in FIG. 
14, as well as other methods described herein, may be implemented in software (e.g., as code, instructions, or programs) executed by one or more processing units (e.g., processors or processor cores), in hardware, or in combinations thereof. The software may be stored on a non-transitory storage medium, such as on a memory device.”). 5. Claims 2, 10, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Pan in view of Arik and Xu, and in further view of Attwater et al. (US PGPUB No. 2023/0244855, hereinafter Attwater). Regarding claim 2, Pan in view of Arik and further in view of Xu discloses obtaining training data comprising a plurality of historical embedding vectors representing historical utterances labeled with one or more classes (Training data comprising vectors representing utterances determined to be in a particular class are used for computing the mean; para. 0046 “Preferably, the center vector of each class is an average of feature vectors of all samples belonging to the class. As an example, an average of feature vectors of all sampling belonging to each class is taken or used as a center vector of the class.”); and for each of the plurality of classes, (i) determining a mean or a median of embedding vectors in that class (para. 0046 “Preferably, the center vector of each class is an average of feature vectors of all samples belonging to the class. As an example, an average of feature vectors of all sampling belonging to each class is taken or used as a center vector of the class.”). Pan in view of Arik and further in view of Xu does not specifically disclose: and (ii) identifying one of the historical embedding vectors closest to the mean or the median as the target embedding vector for that class. Attwater teaches and (ii) identifying one of the historical embedding vectors closest to the mean or the median as the target embedding vector for that class (Fig. 29; para. 
0160-0164 for N=1; “For example generate a summary description of a specific agent or client goal or intent from the training data for client goals 2560′, or agent intents 2560″…[0161] a. build 2902 a vector for each utterance in the group of utterances from the conversational database 2901, such as by using a deep learning model trained for generating semantic sentence vectors; [0162] b. calculate 2903 a cosine similarity of each combination of utterances in the group; [0163] c. for each utterance, calculate 2904 a mean similarity; [0164] d. select 2905 the top number N largest mean similarities, which corresponds to the utterances that better represent the cluster semantic meaning (i.e., it can be seen as the utterances closest to the centroid on the sentence vector space)”). Pan, Arik, Xu, and Attwater are all considered to be analogous to the claimed invention as Pan, Arik, and Xu are in the same field of text classification, and Attwater is in the same field of clustering text utterances. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pan in view of Arik and further in view of Xu to incorporate the teachings of Attwater in order to select a vector closest to the mean of the cluster as the target embedding vector. Doing so would be beneficial, as selecting a real example utterance as a target embedding vector would generate a target embedding vector that both captures the central trend of the data while also ensuring that the target embedding vector itself is a meaningful vector for the class in situations where the centroid does not accurately represent a real text utterance. Regarding claim 10, claim 10 has similar limitations to claim 2 and is thus rejected for analogous reasons to claim 2. Regarding claim 18, claim 18 has similar limitations to claim 2 and is thus rejected for analogous reasons to claim 2. 6. 
Claims 3, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Pan in view of Arik and Xu, and in further view of Wang et al. (US PGPUB No. 2019/0156155, hereinafter Wang). Regarding claim 3, Pan in view of Arik and further in view of Xu discloses a distance of the utterance embedding vector to the spatial parameter of a specified one of the plurality of classes comprises a distance of the utterance embedding vector to a threshold boundary of the specified one of the plurality of classes (Fig. 15, “Boundary 1010a-b”; para. 0175 “In some embodiments, the radius of the boundary 1010 may be a margin value (i.e., a padding amount) plus the larger of (1) the maximum distance from the center to the training feature vector 620, in that cluster, that is farthest from the centroid, or (2) the mean of the respective distances to the centroid from the training feature vectors 620 in the cluster, plus three times the standard deviation of such distances. In other words, the radius may be set to radius=margin+max(max(distances),mean(distances)+3σ(distances)), where distances is the set of the respective distances from the training feature vectors of the cluster to the centroid of the cluster, and where max(distances) is the maximum of that set, mean(distances) is the mean of that set, and σ(distances) is the standard deviation of that set. Further, the margin value, margin, may be a margin of error and may have a value greater than or equal to zero.”); Pan in view of Arik and further in view of Xu does not specifically disclose: a positive value of the distance corresponds to the utterance embedding vector being inside the threshold boundary; and a negative value of the distance corresponds to the utterance embedding vector being outside the threshold boundary. Wang teaches a positive value of the distance corresponds to the utterance embedding vector being inside the threshold boundary (para. 
0048 “In the formula (7), λ is a coefficient, and can be determined according to experience. As can be seen from the formula (7), the loss function L is in positive correlation with the intra-class distance and is in negative correlation with the inter-class distance.”); and a negative value of the distance corresponds to the utterance embedding vector being outside the threshold boundary (para. 0048 “In the formula (7), λ is a coefficient, and can be determined according to experience. As can be seen from the formula (7), the loss function L is in positive correlation with the intra-class distance and is in negative correlation with the inter-class distance.”). Pan, Arik, Xu, and Wang are considered to be analogous to the claimed invention as they are all in the same field of classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pan in view of Arik and further in view of Xu to incorporate the teachings of Wang in order to define a positive distance for a vector being inside the threshold boundary and a negative distance for a vector being outside the threshold boundary. Doing so would be beneficial, as it would provide a clear indication of whether a particular vector belongs to a particular class. Regarding claim 11, claim 11 contains similar limitations to claim 3, and thus is rejected for analogous reasons to claim 3. Regarding claim 19, claim 19 contains similar limitations to claim 3, and thus is rejected for analogous reasons to claim 3. 7. Claims 4-5, 12-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Pan in view of Arik and further in view of Xu, and in further view of Asadi et al. (NPL An Alternative Softmax Operator for Reinforcement Learning, hereinafter Asadi). 
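The hypersphere geometry recited from Pan (para. 0175) together with the inside-positive/outside-negative sign convention the rejection draws from Wang can be sketched in a few lines. A minimal Python sketch assuming Euclidean distance and NumPy; the function names are illustrative, not taken from either reference.

```python
import numpy as np

def boundary_radius(distances, margin=0.0):
    """Pan para. 0175: radius = margin + max(max(d), mean(d) + 3*std(d)),
    where d holds each training vector's distance to the cluster centroid."""
    d = np.asarray(distances, dtype=float)
    return margin + max(d.max(), d.mean() + 3.0 * d.std())

def signed_boundary_distance(x, centroid, radius):
    """Sign convention per the rejection's reading of Wang: positive when
    x lies inside the hypersphere boundary, negative when it lies outside."""
    gap = np.linalg.norm(np.asarray(x, float) - np.asarray(centroid, float))
    return radius - float(gap)

def predict_or_none(x, centroids, radii):
    """Pan's decision block 1620 in miniature: return the index of the
    nearest cluster whose boundary contains x, or None (the 'none' /
    unhandled class) when x falls outside every cluster boundary."""
    d = np.linalg.norm(np.asarray(centroids, float) - np.asarray(x, float), axis=1)
    inside = np.where(d <= np.asarray(radii, float))[0]
    return int(inside[np.argmin(d[inside])]) if inside.size else None
```

With radii computed per cluster from its own training distances, `predict_or_none` reproduces the 'Yes'/'No' branching of Fig. 16 described above.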
Regarding claim 4, Pan in view of Arik and further in view of Xu discloses a distance of the utterance embedding vector to the spatial parameter of a specified one of the plurality of classes comprises a distance of the utterance embedding vector to a class target of the specified one of the plurality of classes (in a particular embodiment, boundary is a hypersphere with a radius extending from a target embedding vector (centroid); determining the similarity (whether input feature vector falls within boundary) amounts to determining whether the input feature vector is within a certain distance from the centroid: para. 0175 “In some embodiments, the boundary 1010 for a cluster is defined to center on the centroid of the cluster and to include all the training feature vectors 620 assigned to the cluster. In some embodiments, for instance, the boundary 1010 of a cluster is a hypersphere (e.g., a circle or a sphere) having its center at the centroid. In some embodiments, the radius of the boundary 1010 may be a margin value (i.e., a padding amount) plus the larger of (1) the maximum distance from the center to the training feature vector 620, in that cluster, that is farthest from the centroid, or (2) the mean of the respective distances to the centroid from the training feature vectors 620 in the cluster, plus three times the standard deviation of such distances.”; para. 0187 “At decision block 1620, the classifier model 324 makes a decision based on comparing the input feature vector to the cluster boundaries. If the input feature vector does not fall inside any cluster boundary, and thus falls outside all the cluster boundaries 1010, the method 1600 proceeds to block 1625.”); and the distance of an utterance embedding to an unhandled class comprises…distances from the utterance embedding vector to the class targets of the plurality of classes (A determination of an utterance being unhandled is dependent on the distance of the data point from the centroid. 
If the distance is larger than the radius of the boundary of all class clusters, then the point is considered unhandled; Fig. 17, para. 0188 “FIG. 17 illustrates an example of executing this method 1600 in a case where an input feature vector 1710 falls outside all the cluster boundaries, according to some embodiments described herein. In some embodiments, the master bot 114 provides an input utterance 303 to the classifier model 324, thus causing the classifier model 324 to convert the input utterance 303 to an input feature vector 1710 and to compare the input feature vector to the cluster boundaries 1010. In the example of FIG. 17, five clusters 1110 are shown in the feature space 630; however, a greater or fewer number of clusters 1110 may be used. In this example, the input feature vector 1710 falls outside all the cluster boundaries 1010, and thus, the classifier model 324 outputs to the master bot 114 an indication that the input utterance 303 belongs to the none class 316.”). Pan in view of Arik and further in view of Xu does not specifically disclose [the distance of an utterance embedding to an unhandled class comprises] a smooth negative maximum [of distances]. Asadi teaches a smooth negative maximum (pg. 3, section 5: equation for “mellowmax” function MMω(X) is defined; pg. 4, section 5.2 defines maximization properties of “mellowmax”; choice of parameter “ω” < 0 in the equation makes the function negative). Pan, Arik, Xu, and Asadi are considered to be analogous to the claimed invention as Pan, Arik, and Xu are in the same field of classification, and Asadi is in the same field of maximization functions. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pan in view of Arik and further in view of Xu to incorporate the teachings of Asadi in order to specifically compute a smooth negative maximum of the distances. 
Doing so would be beneficial, as a maximum distance of a point from in-domain data would provide a useful indication of how “out of domain” a data point is, and using the smooth negative maximum defined in Asadi to compute the distance has the benefit of being differentiable and smooth, facilitating analysis via gradient-based optimizations (NPL Kim et al., Adaptive Temperature Tuning for Mellowmax in Deep Reinforcement Learning, pg. 1, section 1, first para.). Regarding claim 5, Pan in view of Arik and Xu, and further in view of Asadi discloses wherein the smooth negative maximum of distances is calculated using a trainable vector (Pan, distances are computed via trainable vectors (centroids); para. 0161 “…the training system 350 may compute the distance to each centroid location and may assign that training feature vector 620 to the centroid having the smallest such distance.”; para. 0163 “At block 1435 of FIG. 14, for the clusters determined at block 1430, the training system 350 recomputes the location of each cluster's centroid. For instance, in some embodiments, the centroid of each cluster is computed to be the average (e.g., the arithmetic mean) of the training feature vectors 620 assigned to that centroid and, thus, assigned to that cluster.”). Regarding claim 12, claim 12 contains limitations similar to claim 4 and is thus rejected for analogous reasons to claim 4. Regarding claim 13, claim 13 contains limitations similar to claim 5 and is thus rejected for analogous reasons to claim 5. Regarding claim 20, claim 20 contains limitations similar to claim 4 and is thus rejected for analogous reasons to claim 4. 8. Claims 6 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Pan in view of Arik and further in view of Xu, and in further view of NPL CS231n Convolutional Neural Networks for Visual Recognition: Linear Classification, hereinafter Stanford. 
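The mellowmax operator cited from Asadi above is mm_ω(X) = log((1/n)·Σᵢ exp(ω·xᵢ)) / ω: smooth and differentiable, approaching max(X) as ω → +∞ and min(X) as ω → −∞. A minimal sketch; negating a large-ω mellowmax to obtain a "smooth negative maximum" of distances is one illustrative reading of the claim language, not code from either reference.

```python
import numpy as np

def mellowmax(x, omega):
    """Asadi's mellowmax: log(mean(exp(omega * x))) / omega, computed
    with a log-sum-exp shift for numerical stability."""
    z = omega * np.asarray(x, dtype=float)
    m = z.max()                                   # shift exponent to avoid overflow
    return (m + np.log(np.exp(z - m).mean())) / omega

def smooth_negative_maximum(distances, omega=50.0):
    """Negated mellowmax with large positive omega: a smooth,
    differentiable approximation of -max(distances)."""
    return -mellowmax(distances, omega)
```

Unlike a hard max, this stays differentiable everywhere, which is what makes it usable inside a gradient-trained loss as discussed above.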
Regarding claim 6, Pan in view of Arik and further in view of Xu does not specifically disclose the utterance embedding vector for the input utterance is mapped to a number of dimensions equal to the number of classes, each dimension representing a single class; a positive value of a specified dimension indicates a positive label for the corresponding class; and negative values of all dimensions representing the plurality of classes indicate an unhandled label. Stanford teaches the utterance embedding vector for the input utterance is mapped to a number of dimensions equal to the number of classes, each dimension representing a single class (Fig. on pg. 3, input vector is mapped to C dimensions, where C equals the number of classes (C=3 in the example, with classes “cat”, “dog”, and “ship”)); a positive value of a specified dimension indicates a positive label for the corresponding class (Fig. on pg. 1, a positive value for car class (data that is to the right of the “car classifier” line) indicates a positive label for the car class; “Using the example of the car classifier (in red), the red line shows all points in the space that get a score of zero for the car class. The red arrow shows the direction of increase, so all points to the right of the red line have positive (and linearly increasing) scores…)”); and negative values of all dimensions representing the plurality of classes indicate an unhandled label (Fig. on pg. 1 “Using the example of the car classifier (in red), the red line shows all points to the right of the red line have positive (and linearly increasing) scores, and all points to the left have a negative (and linearly decreasing scores)”; Points which lie on the opposite side of all of the classifier line indicate classes of data that are unhandled by the classifier (e.g., lower left image of a cat is unhandled, as it is not classified as an “airplane”, “car”, or a “deer”)). 
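Read the way the rejection reads the Stanford notes, the unhandled check reduces to: one score dimension per class, a positive score labels its class, and all-negative scores fall through to "unhandled". A hedged sketch; the mapping that produces the scores (e.g. a trained linear layer) is assumed, not shown.

```python
import numpy as np

def label_from_scores(scores):
    """One score per class (C dimensions): a positive score in some
    dimension labels the input with that class (argmax breaks ties);
    all-negative scores indicate an unhandled input. Illustrative only."""
    scores = np.asarray(scores, dtype=float)
    if (scores < 0).all():
        return "unhandled"                 # outside every class half-space
    return int(np.argmax(scores))
```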
Pan, Arik, Xu, and Stanford are considered to be analogous to the claimed invention as they are all in the field of classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pan in view of Arik and further in view of Xu to incorporate the teachings of Stanford in order to map the utterance vector to dimensions equal to the number of classes, with a positive value indicating a positive label for the class, and all negative values indicating that a data point is unhandled. Doing so would be beneficial, as the ability of the language model to identify a sample as being unhandled prevents the model from inaccurately labeling anomalous data with one of the plurality of normal class labels, improving accuracy. Regarding claim 14, claim 14 contains limitations similar to claim 6 and is thus rejected for analogous reasons to claim 6. 9. Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Pan in view of Arik and further in view of Xu, and in further view of Min (US PGPUB No. 2019/0050395). Regarding claim 8, Pan in view of Arik and further in view of Xu discloses the predicted class associated with the input utterance is obtained based on distances of the utterance embedding vector to (i) the spatial parameters representing the plurality of classes (in a particular embodiment, boundary is a hypersphere with a radius extending from a target embedding vector (centroid); determining the predicted class is based on distances of the input feature vector to each spatial parameter (centroid) of the plurality of classes: para. 0175 “In some embodiments, the boundary 1010 for a cluster is defined to center on the centroid of the cluster and to include all the training feature vectors 620 assigned to the cluster. 
In some embodiments, for instance, the boundary 1010 of a cluster is a hypersphere (e.g., a circle or a sphere) having its center at the centroid. In some embodiments, the radius of the boundary 1010 may be a margin value (i.e., a padding amount) plus the larger of (1) the maximum distance from the center to the training feature vector 620, in that cluster, that is farthest from the centroid, or (2) the mean of the respective distances to the centroid from the training feature vectors 620 in the cluster, plus three times the standard deviation of such distances.”; para. 0187 “At decision block 1620, the classifier model 324 makes a decision based on comparing the input feature vector to the cluster boundaries. If the input feature vector does not fall inside any cluster boundary, and thus falls outside all the cluster boundaries 1010, the method 1600 proceeds to block 1625.”). Pan in view of Arik and further in view of Xu does not specifically disclose that the target embedding vectors include multiple training utterances representing an unhandled class and that [the predicted class associated with the input utterance is obtained based on distances of the utterance embedding vector to…(ii) additional spatial parameters representing the unhandled class. Min teaches the target embedding vectors include multiple training utterances representing an unhandled class (para. 0087 “…a domain determination training apparatus applies, to an autoencoder 801, first training features 804 indicating in-domain sentences 802. The domain determination training apparatus applies, to the autoencoder 801, second training features 805 indicating out-of-domain sentences 803.”) and obtained based on distances of the utterance embedding vector to…(ii) additional spatial parameters representing the unhandled class (Fig. 3, para. 
0067 “In an example, an autoencoder is trained such that embedded features generated from features indicating out-of-domain sentences are closer to the location 303. The location 303 may be determined variously based on design intention, and defined as, for example, an original point in the embedding space.”; para. 0068 “When a distance between a location 301 of an embedded feature and the location 303 is less than a threshold distance, the domain determining apparatus determines that the input sentence is the out-of-domain sentence between the in-domain sentence and the out-of-domain sentence. When the distance between the location 301 of the embedded feature and the location 303 is greater than the threshold distance, the domain determining apparatus determines that the input sentence is the in-domain sentence between the in-domain sentence and the out-of-domain sentence. The threshold distance may be defined in advance, or obtained through training.”). Pan, Arik, Xu, and Min are considered to be analogous to the claimed invention as they are all in the same field of classification. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Pan in view of Arik and further in view of Xu to incorporate the teachings of Min in order to incorporate distances of an utterance embedding vector to spatial parameters representing the unhandled class. Doing so would be beneficial, as spatial parameters capturing tendencies in the unhandled data can be useful for determining if future data belongs to the unhandled class. Regarding claim 16, claim 16 contains limitations similar to claim 8 and is thus rejected for analogous reasons to claim 8. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Stergioudis (US 2022/0201008 A1): anomaly detection using Siamese networks (Fig. 5B, para. 
0129) Any inquiry concerning this communication or earlier communications from the examiner should be directed to CODY DOUGLAS HUTCHESON whose telephone number is (703)756-1601. The examiner can normally be reached M-F 8:00AM-5:00PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre-Louis Desir can be reached at (571)-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /CODY DOUGLAS HUTCHESON/Examiner, Art Unit 2659 /PIERRE LOUIS DESIR/Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Apr 19, 2023
Application Filed
Mar 21, 2025
Non-Final Rejection — §101, §103
May 21, 2025
Interview Requested
May 28, 2025
Applicant Interview (Telephonic)
May 28, 2025
Examiner Interview Summary
Jun 27, 2025
Response Filed
Jul 14, 2025
Final Rejection — §101, §103
Aug 18, 2025
Interview Requested
Aug 25, 2025
Applicant Interview (Telephonic)
Aug 25, 2025
Examiner Interview Summary
Sep 22, 2025
Response after Non-Final Action
Oct 10, 2025
Request for Continued Examination
Oct 16, 2025
Response after Non-Final Action
Jan 06, 2026
Non-Final Rejection — §101, §103
Jan 27, 2026
Interview Requested
Feb 24, 2026
Examiner Interview Summary
Feb 24, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603096
VOICE ENHANCEMENT METHODS AND SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12591750
GENERATIVE LANGUAGE MODEL UNLEARNING
2y 5m to grant Granted Mar 31, 2026
Patent 12579447
TECHNIQUES FOR TWO-STAGE ENTITY-AWARE DATA AUGMENTATION
2y 5m to grant Granted Mar 17, 2026
Patent 12537018
METHOD AND SYSTEM FOR PREDICTING A MENTAL CONDITION OF A SPEAKER
2y 5m to grant Granted Jan 27, 2026
Patent 12530529
DOMAIN-SPECIFIC NAMED ENTITY RECOGNITION VIA GRAPH NEURAL NETWORKS
2y 5m to grant Granted Jan 20, 2026
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
62%
Grant Probability
99%
With Interview (+47.1%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 24 resolved cases by this examiner. Grant probability derived from career allow rate.
