Prosecution Insights
Last updated: April 19, 2026
Application No. 17/534,922

TEXT CLASSIFICATION USING ONE OR MORE NEURAL NETWORKS

Non-Final OA — §101, §103
Filed: Nov 24, 2021
Examiner: NAULT, VICTOR ADELARD
Art Unit: 2124
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)
Grant Probability: 62% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 62% (grants 62% of resolved cases; 8 granted / 13 resolved; +6.5% vs TC avg)
Interview Lift: +83.3% (strong; allow rate of resolved cases with an interview vs without)
Typical Timeline: 3y 11m average prosecution; 30 applications currently pending
Career History: 43 total applications across all art units
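The allow-rate and interview-lift figures above are simple ratios over resolved cases. A minimal sketch of how such metrics might be derived, assuming a hypothetical per-case record; the split between interview and non-interview outcomes below is illustrative, not this examiner's actual docket:

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

def allow_rate(cases):
    """Fraction of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases):
    """Relative increase in allow rate for cases that had an
    examiner interview versus cases that did not."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) / allow_rate(without_iv) - 1.0

# Illustrative docket: 8 grants out of 13 resolved cases,
# matching the 62% career allow rate shown above.
cases = ([ResolvedCase(True, True)] * 5 + [ResolvedCase(True, False)] * 3
         + [ResolvedCase(False, True)] * 1 + [ResolvedCase(False, False)] * 4)
print(f"{allow_rate(cases):.0%}")        # 62%
print(f"{interview_lift(cases):+.1%}")   # +94.4% for this made-up split
```

The lift is sensitive to how the 13 cases split across interview/no-interview; the dashboard's +83.3% would come from the examiner's actual (undisclosed) split.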

Statute-Specific Performance

§101: 29.1% (-10.9% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)

Rates are compared against a Tech Center average estimate • Based on career data from 13 resolved cases

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/06/2026 has been entered.

Remarks

This Office Action is responsive to Applicant's Amendment filed on February 6, 2026, in which claims 1-3, 7-9, 13-15, and 25-27 have been amended. No claims have been newly cancelled or added. Claims 1-18 and 25-30 are currently pending.

Response to Arguments

Regarding the rejections of claims 1-18 and 25-30 under 35 U.S.C. 103 as unpatentable over Huffman et al. (U.S. Patent Application Publication No. 2022/0115033) in view of Hada et al., "Codewords Detection in Microblogs Focusing on Differences in Word Use Between Two Corpora", Examiner finds Applicant's arguments that the claims as amended overcome the rejections to be persuasive; however, the arguments are moot in view of a new ground of rejection, necessitated by Applicant's amendment, as presented below.

Regarding the rejections of claims 1-18 and 25-30 under 35 U.S.C. 101 as directed to abstract ideas, Applicant's arguments that the claims as amended overcome the rejections have been considered but are not found persuasive. Applicant first argues that claim 1 is eligible under 35 U.S.C. 101 at least at Step 2A, Prong One of the Subject Matter Eligibility Test, because the claim does not recite abstract ideas.
Applicant argues that MPEP 2106.04(a)(1), Example vii is analogous, which states: "Non-limiting hypothetical examples of claims that do not recite (set forth or describe) an abstract idea include: … vii. a method of training a neural network for facial detection comprising: collecting a set of digital facial images, applying one or more transformations to the digital images, creating a first training set including the modified set of digital facial images; training the neural network in a first stage using the first training set, creating a second training set including digital non-facial images that are incorrectly detected as facial images in the first stage of training; and training the neural network in a second stage using the second training set."

Examiner respectfully disagrees. Claim 1 of the instant application has no analogous step of applying transformations to images, nor does its training take place in two distinct stages with the second stage depending on the results of the first. Examiner considers Example 47, Claim 2 from the July 2024 Subject Matter Eligibility Examples to be a better analogy. Steps "(d) detecting one or more anomalies in a data set using the trained ANN;" and "(e) analyzing the one or more detected anomalies using the trained ANN to generate anomaly data;", which are analogous, respectively, to the step of comparing keywords or keyphrases and the step of adding to a list of keywords or keyphrases based on a comparison, both within claim 1 of the instant application, are found to encompass evaluations, which are mental processes.

Applicant further argues that claim 1 is eligible at least at Step 2A, Prong Two of the Subject Matter Eligibility Test, because it integrates any recited abstract ideas into a practical application of "quickly rout[ing] information that may be critical for performance". Examiner respectfully disagrees. MPEP 2106.04(d).III recites: "Because a judicial exception alone is not eligible subject matter, if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application". MPEP 2106.04(d).I recites: "The courts have also identified limitations that did not integrate a judicial exception into a practical application: • Merely reciting the words 'apply it' (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f);".

Claim 1 recites only abstract ideas, i.e., judicial exceptions, and instructions to apply the judicial exceptions with circuits and neural networks, or to update neural networks. For those reasons it cannot be considered to integrate the recited judicial exceptions into a practical application. To be eligible at least by integrating any recited judicial exceptions into a practical application, the claim must be amended to recite elements that are neither judicial exceptions themselves, nor mere instructions to apply them, insignificant extra-solution activity, or a general link to a technological environment or field of use.

Applicant further cites Ex parte Desjardins, stating on page 14 of the Remarks: "Like the claims at issue in Desjardins, Applicant's claim recites features that results in improving the process of classifying feedback automatically without the need for human annotators". Examiner respectfully disagrees that Ex parte Desjardins is analogous to the instant application.
Page 2 of "Advance notice of change to the MPEP in light of Ex Parte Desjardins" states: "In Step 2A Prong Two, the ARP then determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of 'catastrophic forgetting' encountered in continual learning systems". The claims of the instant application do not recite improvements to the operation of a machine learning model itself or to the training algorithm of a machine learning model, but rather to the feedback data used for updating a machine learning model.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18 and 25-30 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more.

Regarding claim 1:

Step 1 - "Is the claim to a process, machine, manufacture or composition of matter?": Yes, the claim is directed towards a machine.

Step 2A, Prong 1 - "Is the claim directed to a law of nature, a natural phenomenon (product of nature) or an abstract idea?": The limitation of comparing a keyword or a keyphrase in unannotated user feedback with a previously provided set of keywords or keyphrases recites a judgment of the keyword or keyphrase and the set of keywords or keyphrases, which is a mental process and therefore an abstract idea, regardless of whether it is performed on a generic computer.
The limitation of adding the keyword or the keyphrase in the unannotated user feedback to the set of keywords or keyphrases according to a result of the comparison recites an evaluation of the keyword or keyphrase, the set of keywords or keyphrases, and the comparison, which is a mental process and therefore an abstract idea, regardless of whether it is performed on a generic computer. The limitation "to predict a label to include in the information generated using the one or more neural networks" recites a judgment of a label, which is a mental process and therefore an abstract idea, regardless of whether it is performed on a generic computer.

Step 2A, Prong 2 - "Does the claim recite additional elements that integrate the judicial exception into a practical application?": The limitation "one or more circuits to use one or more neural networks to generate information about a computer program based, at least in part on:" recites mere instructions to apply judicial exceptions with generic computer components and generic machine learning models, MPEP 2106.05(d) and 2106.05(f). The limitation "updating the one or more neural networks using the set of keywords or key phrases, including the added keyword or keyphrase" recites mere instructions to apply judicial exceptions to update a neural network, MPEP 2106.05(d) and 2106.05(f).

Step 2B - "Does the claim recite additional elements that amount to significantly more than the judicial exception?": The limitation "one or more circuits to use one or more neural networks to generate information about a computer program based, at least in part on:" recites mere instructions to apply judicial exceptions with generic computer components and generic machine learning models, MPEP 2106.05(f). The limitation "updating the one or more neural networks using the set of keywords or key phrases, including the added keyword or keyphrase" recites mere instructions to apply judicial exceptions to update a neural network, MPEP 2106.05(f).
Therefore, claim 1 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claim 2: Claim 2 adds the following limitations to claim 1. The limitation of converting the keyword or keyphrase into one or more embeddings recites mere instructions to apply conversion into embeddings to keywords or keyphrases in user feedback, which neither integrates any recited judicial exceptions into a practical application nor amounts to significantly more than the recited judicial exceptions, MPEP 2106.05(d) and 2106.05(f). The limitation of calculating a similarity score between the one or more embeddings and initial embeddings for the set of keywords or keyphrases recites a mathematical calculation of a similarity score, which is a mathematical concept and therefore an abstract idea. Therefore, claim 2 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claim 3: Claim 3 adds the following limitation to claim 2. The limitation wherein the one or more circuits are further to select the label for one embedding of the one or more embeddings corresponding to the keyword or keyphrase having a highest similarity score, above at least a minimum similarity threshold, with respect to the one embedding recites a judgment of a label, which is a mental process and therefore an abstract idea, regardless of whether it is performed on a generic computer. Therefore, claim 3 is found to be ineligible subject matter under 35 U.S.C. 101.
Regarding claim 4: Claim 4 adds the following limitation to claim 1. The limitation wherein the one or more neural networks include a classifier network trained using the set of keywords or keyphrases for each of a set of categories recites mere additional details on the neural networks used to apply the abstract ideas within claim 1, without changing that abstract ideas performed on neural networks are equivalent to abstract ideas performed on generic computers; it neither integrates the abstract ideas into a practical application nor amounts to significantly more than the recited judicial exceptions, MPEP 2106.05(d) and 2106.05(f). Therefore, claim 4 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claim 5: Claim 5 adds the following limitations to claim 4. The limitation wherein the one or more circuits are further to process the unannotated user feedback using the classifier network to identify a classification for the unannotated user feedback recites an evaluation of the unannotated user feedback to identify a classification, which is a mental process and therefore an abstract idea, regardless of whether it is performed on a generic computer. The limitation wherein the information about the computer program is based at least in part upon the classification recites an evaluation of the information, which is a mental process and therefore an abstract idea, regardless of whether it is performed on a generic computer. Therefore, claim 5 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claim 6: Claim 6 adds the following limitation to claim 5. The limitation wherein the one or more circuits are further to retrain the classifier network using the identified classification for the unannotated user feedback recites mere instructions to apply retraining to the classifier network, which neither integrates any recited judicial exceptions into a practical application nor amounts to significantly more than the recited judicial exceptions, MPEP 2106.05(d) and 2106.05(f).
Therefore, claim 6 is found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claims 7-12: Claims 7-12 disclose a system with at least one processor that implements the function of the processor of claims 1-6, respectively, with substantially the same limitations. Therefore the same analysis and rejection applied to claims 1-6 applies to claims 7-12, and claims 7-12 are found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claims 13-18: Claims 13-18 disclose a method that implements the function of the processor of claims 1-6, respectively, with substantially the same limitations. Therefore the same analysis and rejection applied to claims 1-6 applies to claims 13-18, and claims 13-18 are found to be ineligible subject matter under 35 U.S.C. 101.

Regarding claims 25-30: Claims 25-30 disclose an information generation system with at least one processor that implements the function of the processor of claims 1-6, respectively, with substantially the same limitations. Therefore the same analysis and rejection applied to claims 1-6 applies to claims 25-30, and claims 25-30 are found to be ineligible subject matter under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-18 and 25-30 are rejected under 35 U.S.C. 103 as being unpatentable over Huffman et al. (U.S. Patent Application Publication No. 2022/0115033), hereinafter Huffman, in view of Hada et al., "Codewords Detection in Microblogs Focusing on Differences in Word Use Between Two Corpora", hereinafter Hada, further in view of Manchanda et al. (U.S. Patent Application Publication No. 2021/0182709), hereinafter Manchanda.

Regarding claim 1: Huffman teaches "A processor, comprising: one or more circuits to use one or more neural networks to generate information about a computer program based, at least in part on:" ((Huffman [0043]) "In various embodiments, the stage is a logical or abstract entity defined by its interface: it has an input (some speech) and two outputs (filtered speech and discarded speech) (however, it may or may not have additional inputs—such as session context, or additional outputs—such as speaker age estimates), and it receives feedback from later stages (and may also provide feedback to earlier stages). These stages are, of course, physically implemented—so they're typically software/code (individual programs, implementing logic such as Digital Signal Processing, Neural Networks, etc.—or combinations of these), running on hardware such as general purposes computers (CPU, or GPU)"; hardware such as CPUs corresponds to processors and circuits) "… to predict a label" ((Huffman [0049]) "For example, the first stage may receive 60-seconds of speech, and the first stage may be configured to analyze the speech in 20-second intervals. For example, the first 20-second chunk may not have a likelihood of being toxic and may be discarded. The second 20-second chunk may meet a threshold likelihood of being toxic, and therefore, may be forwarded to the subsequent stage"; Examiner notes: toxic or not toxic is a label) "to include in the information" ((Huffman [0122]) "At step 354 the training data for the various stages 115 are updated. Specifically, the training data for the first stage 115 is updated using the positive determinations of toxicity and the negative determinations of toxicity from the second stage 115") "generated using the one or more neural networks" ((Huffman [0095]) "In the present example, the first stage 115 runs machine learning 215 (e.g., a neural network on the speaker device 120)").

Hada teaches the following further limitations that Huffman does not teach: "comparing a keyword or a keyphrase in unannotated user feedback with a previously provided set of keywords or keyphrases;" (Hada Pg. 3, Fig. 4 shows an outline of a system for detecting keywords or keyphrases that are euphemisms or codewords for keywords or keyphrases in a list, or are themselves in the list, with the list being generated from words within a "bad" corpus of semi-structured user-generated microblogs (tweets); (Hada Pg. 3) "We hypothesized that words used in illegal negotiations are used in the same sense as their analogs. Therefore, we speculated that unknown codewords might appear as words similar to known codewords in a codeword corpus. Using data obtained from Twitter, we prepared a set of tweets focused on illegal trading purposes and divided them into two groups of tweets used for malicious purposes: the Bad Corpus is a collection of tweets containing one or more words in the word list") "adding the keyword or the keyphrase in the unannotated user feedback to the set of keywords or keyphrases according to a result of the comparison;" ((Hada Pgs. 3-4) "d) If the word, W, does not match any in the codeword list, it is possible that the word is not registered as a codeword; therefore, similar words up to the N/2 highest cosine similarity to the word W are considered. If the score is greater than or equal to the threshold, points are added to X (X=X+1)…3) If the Bad point total exceeds the threshold, it is identified as a codeword"; Hada Pg. 4, Algorithm 1 (Main) shows that a word is appended to a codeword list if identified as a codeword, and a high cosine similarity is a result of a comparison).

At the time of filing, one of ordinary skill in the art would have had motivation to combine Huffman and Hada by taking the processor for generating information about a computer program using neural networks, based on user feedback, taught by Huffman, and having the user feedback be a keyword that is added to a set of keywords used to determine labels, with the addition to the set done according to a comparison between the keyword and the set of keywords, as taught by Hada. Doing so allows the collection of information to adapt to changing patterns in language, allowing the processor to be applied effectively over a longer period of time and in different circumstances, decreasing the need for costly human intervention in replacing or modifying the processor. Such a combination would be obvious.
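The Hada-style step relied on above grows a keyword list by embedding similarity: a word is appended when its embedding is close enough to a known keyword. Hada's actual algorithm tallies point totals over two corpora; the following is a simplified single-threshold sketch, with the function name and the 0.7 threshold being illustrative assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def maybe_add_keyword(word, embedding, known, threshold=0.7):
    """Append `word` to the known-keyword dict when its embedding is
    sufficiently similar to any existing keyword's embedding.
    (Hypothetical helper; threshold is illustrative.)"""
    if word in known:
        return False
    best = max((cosine(embedding, v) for v in known.values()), default=0.0)
    if best >= threshold:
        known[word] = embedding
        return True
    return False

# Toy 3-dimensional "embeddings"; real systems would use Word2vec or similar.
known = {"lsd": [1.0, 0.1, 0.0]}
maybe_add_keyword("paper", [0.9, 0.2, 0.1], known)    # similar -> added
maybe_add_keyword("weather", [0.0, 0.1, 1.0], known)  # dissimilar -> ignored
print(sorted(known))  # ['lsd', 'paper']
```

This mirrors the claimed "adding the keyword ... according to a result of the comparison", with cosine similarity as the comparison.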
Manchanda teaches the following further limitation that neither Huffman nor Hada teaches: "and updating the one or more neural networks using the set of keywords or key phrases, including the added keyword or keyphrase, …" ((Manchanda [0041]) "The deep learning neural network model is trained from historical interactions to find required key words from the user queries, if any keyword is not made available by the user then system forms natural language questions for the user to facilitate required information to fill gaps"; training a neural network is a method of updating a neural network).

At the time of filing, one of ordinary skill in the art would have had motivation to combine Huffman, Hada, and Manchanda by taking the processor for generating information about a computer program using neural networks, based on unannotated user feedback, including keywords added to a set of keywords based on a comparison in order to determine labels, jointly taught by Huffman and Hada, and having the neural networks be updated using the additional keywords, as taught by Manchanda. Manchanda teaches ((Manchanda [0042]) "Deep Neural Networks improve results with more training data"), and updating the neural network via training with additional keywords from a user, which constitutes more training data, thus causes the neural network to provide improved results. Such a combination would be obvious.

Regarding claim 2: Huffman and Hada jointly teach "The processor of claim 1, wherein the comparing the keyword or keyphrase in the unannotated user feedback comprises:". Hada further teaches: "converting the keyword or keyphrase into one or more embeddings;" ((Hada Pg. 4) "4) Word embedding: After split writing, word distribution was performed using Word2vec using parameters shown in Table II"; Hada Pg. 3, Fig. 4 shows that word embeddings of user microblogs (tweets), including keywords or keyphrases within them, are created) "and calculating a similarity score between the one or more embeddings and initial embeddings for the set of keywords or keyphrases" ((Hada Pg. 3) "We then performed word distribution analysis on each corpus [3] and calculated the cosine similarity using gensim [18]. We defined a word with a high cosine similarity as W and referred to a set of such words as 'Similar words' of W. For example, the word 'paper' is a codeword for 'LSD' as a type of methamphetamine"). At the time of filing, one of ordinary skill in the art would have had motivation to combine the processor jointly taught by Huffman and Hada for claim 1, the parent claim of claim 2. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.

Regarding claim 3: Huffman and Hada jointly teach "The processor of claim 2,". Hada further teaches: "wherein the one or more circuits are further to select the label for one embedding of the one or more embeddings corresponding to the keyword or keyphrase having a highest similarity score, above at least a minimum similarity threshold, with respect to the one embedding" ((Hada Pgs. 3-4) "1) For each word in the word list, calculate the score for each of the two corpora (Function SIMILAR). a) For each word, similar words up to the top N of the cosine similarity are searched using the pre-constructed word distribution expression model (Good Corpus, Bad Corpus) (Get similar words). In this experiment, we set N to 20. b) Match the retrieved N similar words individually against the matching list (Codeword List)…2) Calculate the difference (Diff) between the calculated Good (Cnt Good) and Bad (Cnt Bad) point totals. 3) If the Bad point total exceeds the threshold, it is identified as a codeword. If the Diff is above a certain level, the threshold value for the Bad point total decreases"; Hada Fig. 4 shows that the words are embedded before they are processed further, including for comparison). At the time of filing, one of ordinary skill in the art would have had motivation to combine the processor jointly taught by Huffman and Hada for claim 2, the parent claim of claim 3. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.

Regarding claim 4: Huffman and Hada jointly teach "The processor of claim 1,". Huffman further teaches: "wherein the one or more neural networks include a classifier network trained using the set of keywords or keyphrases for each of a set of categories" ((Huffman [0011]) "a multistage content analysis system includes a first stage trained using a database having training data with positive and/or negative examples of training content for the first stage. The first stage is configured to receive speech content, and to analyze the speech content to categorize the speech content as having first-stage positive speech content and/or first-stage negative speech content"; Examiner notes: Huffman discloses a multistage content analysis system (one or more neural networks) trained using a database of positive and negative examples of toxic/non-toxic language (trained using a set of keywords/phrases in a set of categories) wherein the system includes a classifier network (stages)). At the time of filing, one of ordinary skill in the art would have had motivation to combine the processor jointly taught by Huffman and Hada for claim 1, the parent claim of claim 4. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.
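The claim 3 limitation at issue, selecting the label whose embedding has the highest similarity score above a minimum threshold, can be sketched generically. Function and parameter names here are illustrative, not drawn from the application or the cited art:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def select_label(query, labeled_embeddings, min_similarity=0.5):
    """Return the label whose reference embedding is most similar to
    `query`, or None if no score clears the minimum threshold."""
    best_label, best_score = None, -1.0
    for label, vec in labeled_embeddings.items():
        score = cosine(query, vec)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= min_similarity else None

# Toy 2-dimensional reference embeddings for two feedback labels.
labels = {"bug": [1.0, 0.0], "feature-request": [0.0, 1.0]}
print(select_label([0.9, 0.2], labels))                      # bug
print(select_label([0.5, 0.5], labels, min_similarity=0.9))  # None
```

The minimum-similarity guard is what distinguishes this from plain nearest-neighbor selection: an ambiguous keyword yields no label rather than a weak one.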
Regarding claim 5: Huffman and Hada jointly teach "The processor of claim 4,". Huffman further teaches: "wherein the one or more circuits are further to process the unannotated user feedback using the classifier network to identify a classification for the unannotated user feedback," ((Huffman [0049]) "For example, the first stage may receive 60-seconds of speech, and the first stage may be configured to analyze the speech in 20-second intervals. For example, the first 20-second chunk may not have a likelihood of being toxic and may be discarded. The second 20-second chunk may meet a threshold likelihood of being toxic, and therefore, may be forwarded to the subsequent stage"; Examiner notes: Huffman further discloses processing unannotated user feedback (speech) in a classifier network (stage) to determine information about a program (whether or not a moderation policy has been violated) which is based on the user feedback (speech)) "wherein the information about the computer program is based at least in part upon the classification" ((Huffman [0053]) "The system may make an automated decision regarding speech toxicity after the final stage (i.e., whether the speech is toxic or not, and what action, if necessary, is appropriate)"). At the time of filing, one of ordinary skill in the art would have had motivation to combine the processor jointly taught by Huffman and Hada for claim 4, the parent claim of claim 5. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.
Regarding claim 6: Huffman and Hada jointly teach "The processor of claim 5,". Huffman further teaches: "wherein the one or more circuits are further to retrain the classifier network" ((Huffman [0057]) "Similarly, a subsequent stage (e.g., the fourth stage) may provide feedback to a previous stage (e.g., the third stage) regarding whether the previous stage accurately determined speech to be toxic") "using the identified classification for the unannotated user feedback" ((Huffman [0172]) "Although the term 'accurately' is used, it should be understood by those of skill in the art that accuracy here relates to the probability of speech being toxic as determined by the stage, not necessarily a true accuracy. Of course, the system is configured to train to become more and more truly accurate in accordance with the toxicity policy… In illustrative embodiments, the updated training provides decreases in false positives from the first stage"; Examiner notes: the feedback on whether a stage accurately determined speech to be toxic is an identified classification used in retraining that stage). At the time of filing, one of ordinary skill in the art would have had motivation to combine the processor jointly taught by Huffman and Hada for claim 5, the parent claim of claim 6. No new embodiments are introduced, so the reason to combine is the same as for the parent claim.

Regarding claims 7-12: Claims 7-12 recite a system comprising at least one processor for performing the function of the processor of claims 1-6, respectively. Specifically, claim 7 recites: "A system comprising: one or more processors to [perform the function of the processor of claim 1]."
Huffman teaches ((Huffman [0043]) "These stages are, of course, physically implemented—so they're typically software/code (individual programs, implementing logic such as Digital Signal Processing, Neural Networks, etc.—or combinations of these), running on hardware such as general purposes computers (CPU, or GPU)"). All other limitations in claims 7-12 are substantially the same as those in claims 1-6, or broader; therefore the same rationale for rejection applies.

Regarding claims 13-18: Claims 13-18 recite a method for performing the function of the processor of claims 1-6, respectively. All limitations in claims 13-18 are substantially the same as those in claims 1-6, or broader; therefore the same rationale for rejection applies.

Regarding claims 25-30: Claims 25-30 recite an information generation system comprising at least one processor for performing the function of the processor of claims 1-6, respectively. Specifically, claim 25 recites: "An information generation system, comprising: one or more processors to [perform the function of the processor of claim 1]." Huffman teaches ((Huffman [0043]) "The system 100 includes a plurality of stages 112-118 each configured to determine whether the speech 110, or a representation thereof, is likely to be considered toxic…These stages are, of course, physically implemented—so they're typically software/code (individual programs, implementing logic such as Digital Signal Processing, Neural Networks, etc.—or combinations of these), running on hardware such as general purposes computers (CPU, or GPU)"). All other limitations in claims 25-30 are substantially the same as those in claims 1-6, or broader; therefore the same rationale for rejection applies.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Song et al. (U.S. Patent Application Publication No. 2020/0057923) discloses relevant information similar to Huffman regarding embeddings, similarities, and responses. Bennani-Smires et al., "Simple Unsupervised Keyphrase Extraction using Sentence Embeddings", discloses relevant information regarding keyphrase comparison and labeling. Zhu et al., "Self-Supervised Euphemism Detection and Identification for Content Moderation", discloses relevant information regarding keyphrase comparison and labeling. Sacchi et al., "Open-Vocabulary Keyword Spotting With Audio And Text Embeddings", discloses relevant information about keyword/phrase classification using audio as well as text. Wroczynski et al. (U.S. Patent Application Publication No. 2019/0272317) discloses relevant information on keyword/keyphrase selection for detection of online violence.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTOR A NAULT, whose telephone number is (703) 756-5745. The examiner can normally be reached M-F, 12-8. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Miranda Huang, can be reached at (571) 270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/V.A.N./ Examiner, Art Unit 2124
/Kevin W Figueroa/ Primary Examiner, Art Unit 2124

Prosecution Timeline

Nov 24, 2021
Application Filed
Dec 10, 2024
Non-Final Rejection — §101, §103
Apr 16, 2025
Examiner Interview Summary
Apr 16, 2025
Applicant Interview (Telephonic)
Jun 20, 2025
Response Filed
Jul 31, 2025
Final Rejection — §101, §103
Feb 06, 2026
Request for Continued Examination
Feb 10, 2026
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579429
DEEP LEARNING BASED EMAIL CLASSIFICATION
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12566953
AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES
Granted Mar 03, 2026 • 2y 5m to grant
Patent 12561563
AUTOMATED PROCESSING OF FEEDBACK DATA TO IDENTIFY REAL-TIME CHANGES
Granted Feb 24, 2026 • 2y 5m to grant
Patent 12468939
OBJECT DISCOVERY USING AN AUTOENCODER
Granted Nov 11, 2025 • 2y 5m to grant
Patent 12446600
TWO-STAGE SAMPLING FOR ACCELERATED DEFORMULATION GENERATION
Granted Oct 21, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 62%
Grant Probability With Interview: 99% (+83.3%)
Median Time to Grant: 3y 11m
PTA Risk: High
Based on 13 resolved cases by this examiner. Grant probability derived from career allow rate.
