Prosecution Insights
Last updated: April 19, 2026
Application No. 17/436,281

METHOD AND SYSTEM FOR ASSISTING A DEVELOPER IN IMPROVING AN ACCURACY OF A CLASSIFIER

Non-Final OA: §101, §102
Filed: Sep 03, 2021
Examiner: TRAN, TAN H
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: Telepathy Labs Inc.
OA Round: 3 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 60% (184 granted / 307 resolved), +4.9% vs TC avg
Interview Lift: +31.8% for resolved cases with an interview (strong lift)
Typical Timeline: 3y 6m average prosecution (60 currently pending)
Career History: 367 total applications across all art units
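As a quick arithmetic check on the headline figures above (an editorial sketch; the exact cohort definitions behind the dashboard's numbers are assumed, not documented here):

# Career Allow Rate card: 184 granted out of 307 resolved cases.
granted, resolved = 184, 307
allow_rate = granted / resolved         # ~0.599, displayed as 60%
tc_delta = 0.049                        # reported "+4.9% vs TC avg"
implied_tc_avg = allow_rate - tc_delta  # ~0.55, assuming the delta is in percentage points
print(f"allow rate {allow_rate:.1%}, implied TC average {implied_tc_avg:.1%}")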

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 19.2% (-20.8% vs TC avg)
§112: 6.1% (-33.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 307 resolved cases.

Office Action

Rejections: §101, §102
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 8/8/2025 has been entered. Claims 1, 22, 29, and 34 have been amended. Claims 1, 6, 10-12, 14, 18, 20, 22-25, 27, and 29-35 remain pending in the application.

Information Disclosure Statement

3. The information disclosure statement (IDS(s)) submitted on 07/14/2025 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendments

4. Applicant's amendments to the claims 1, 22, and 29 have been fully considered and are not persuasive. The amendments provided to overcome the 101 rejection (abstract idea) issued in the last office action is not sufficient. The 35 U.S.C § 101 rejection is maintained. See rejection below for more details.

Response to Arguments

Applicant argues that Singaraju at [0070 and 0076 and elsewhere] may recite "user-selectable options [that] may include removing, adding, or modifying some training samples, and/or adding, deleting, or updating some intents," Singaraju makes it clear that this is entirely a manual process, where it is the developer that must discover and then "remove, add, or modify" each of the training examples.

Examiner respectfully disagrees and notes that Singaraju teaches GUI-driven workflow in which the system presents identified problematic examples and provides user-selectable options to delete/modify training samples and/or modify/add/delete intent/labels. Under BRI, "automatically modifying" is satisfied where, after the developer selects an option in the GUI, the computer system performs the actual modification of the stored training sample/label/intent, as opposed to the developer manually editing a dataset outside the system. It is further noted that this interpretation is consistent with Applicant's own specification (para. [0084]-[0086]).

Applicant argues that Singaraju fails to teach that by "removing, adding, or modifying some training samples, and/or adding, deleting, or updating some intents," the system will then locate all the existing examples in the training set with such a deficiency and automatically update them to suppress that deficiency.

Examiner respectfully disagrees and notes that the claim recites "locating existing examples" and "automatically modifying the existing examples," but does not require locating all examples in the training set that share the relationship/deficiency. Under BRI, Singaraju's disclosure of selecting one or more training sample pairs meeting the relationship criterion (e.g., highest similarity) and proving options to delete/modify those identified samples satisfies "locating existing examples" and "automatically modifying the existing examples." Further, the claim does not require the order asserted by the Applicant. This interpretation is consistent with Applicant's own specification (para. [0084]-[0086]), which discloses system-performed, automatic modification of training data following a negative input.

Claim Rejections - 35 USC § 101

5. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 6, 10-12, 14, 18, 20, 22-25, 27, and 29-35 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea without significantly more.

Step 1, the claims are directed to the statutory categories of a process, machine, and manufacture.

Claim 1:

Step 2A Prong 1, Claims 1, 22, 29 recites, in part determining at least one correlation of the one or more features with the set of classes respectively (Mathematical concepts, mathematical/statistical calculation). wherein the at least one diagnostic example requires the developer to one of validate or invalidate a correctness of the correlation produced by the at least one diagnostic example (Mental processes, human evaluation/judgment).

Step 2A Prong 2, this judicial exception is not integrated into a practical application. The additional elements:
a computing device, one or more processors (mere instructions to apply the exception using a generic computer component).
providing the classification model including a plurality of features corresponding with a set of classes, wherein one or more features of the plurality of features correspond with at least one class of the set of classes (mere data gathering and output recited at a high level of generality, and thus are insignificant extra-solution activity).
selecting the one or more features of the plurality of features (mere data gathering recited at a high level of generality, and thus are insignificant extra-solution activity).
extracting one or more values for the one or more features selected (mere data gathering recited at a high level of generality, and thus are insignificant extra-solution activity).
when the developer invalidates the correctness of the at least one correlation of the one or more features with the set of classes respectively produced by the at least one diagnostic example: locating existing examples in the training set having the at least one correlation of the at least one correlation of the one or more features with the set of classes respectively (mere data gathering recited at a high level of generality, and thus are insignificant extra-solution activity).
automatically modifying the existing examples in the training set having the at least one correlation in the training set to suppress the at least one correlation (result-oriented, mere instructions to apply the abstract idea, and thus are insignificant extra-solution activity).

Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, either alone or in combination. The additional elements:
a computing device, one or more processors (mere instructions to apply the exception using a generic computer component).
providing the classification model including a plurality of features corresponding with a set of classes, wherein one or more features of the plurality of features correspond with at least one class of the set of classes (mere data gathering and output recited at a high level of generality, and thus are insignificant extra-solution activity).
selecting the one or more features of the plurality of features (mere data gathering recited at a high level of generality, and thus are insignificant extra-solution activity).
extracting one or more values for the one or more features selected (mere data gathering recited at a high level of generality, and thus are insignificant extra-solution activity).
when the developer invalidates the correctness of the at least one correlation of the one or more features with the set of classes respectively produced by the at least one diagnostic example: locating existing examples in the training set having the at least one correlation of the at least one correlation of the one or more features with the set of classes respectively (mere data gathering recited at a high level of generality, and thus are insignificant extra-solution activity).
automatically modifying the existing examples in the training set having the at least one correlation in the training set to suppress the at least one correlation (result-oriented, mere instructions to apply the abstract idea, and thus are insignificant extra-solution activity).

Claims 6, 10-12, 14, 18, 20, 23-25, 27, and 30-35 provide further limitations to the abstract idea (Mathematical concepts and/or Mental processes) as rejected in claims 1, 22, 29, however, they do not disclose any additional elements that would amount to a practical application or significantly more than an abstract idea (mere data gathering and output/insignificant extra-solution activity and/or generic computer component).

Claim Rejections - 35 USC § 102

6. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

7. Claims 1, 6, 10-12, 14, 18, 20, 22-25, 27, and 29-35 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Singaraju et al. (U.S. Patent Application Pub. No. US 20190103095 A1).

Claim 1: Singaraju teaches a computer-implemented method for assisting a developer in improving an accuracy of a classification model (i.e. This disclosure related generally to improving quality of classification models, and more particularly, to improving quality of classification models for differentiating different end user intents by improving the quality of training samples used to train the classification models; para. [0024]), the computer implemented method comprising:

providing, by a computing device, the classification model including a plurality of features corresponding with a set of classes, wherein one or more features of the plurality of features correspond with at least one class of the set of classes (i.e. a computer-implemented technique may determine a distinguishability score for each respective combination of two intents (forming a pair) within a plurality of intents that is defined by a developer of a bot system. The computer-implemented technique may then identify each pair of intents that is difficult to differentiate by a classification model trained using given training samples (e.g., user utterances) based upon the distinguishability score (e.g., F-score), such as based upon the distinguishability score being lower than a threshold; para. [0026, 0062]), the utterances and their attributes are the operative inputs used by the classifier;

selecting the one or more features of the plurality of features (i.e. pairs of intents that are difficult to differentiate based upon the given training samples (e.g., user utterances) may be identified from a plurality of intents based upon distinguishability scores (e.g., F-scores). For each of the identified pairs of intents, pairs of training samples each including a training sample associated with one intent and a training sample associated with the other intent are ranked based upon a similarity score (e.g., a Jaccard similarity score or a Levenshtein distance) between the two training samples in each pair; para. [0070, 0072]);

extracting one or more values for the one or more features selected (i.e. pairs of intents that are difficult to differentiate based upon the given training samples (e.g., user utterances) may be identified from a plurality of intents based upon distinguishability scores (e.g., F-scores). For each of the identified pairs of intents, pairs of training samples each including a training sample associated with one intent and a training sample associated with the other intent are ranked based upon a similarity score (e.g., a Jaccard similarity score or a Levenshtein distance) between the two training samples in each pair. The identified pairs of intents and the corresponding pairs of training samples having the highest similarity scores may be presented to users through a user interface, along with user-selectable options for improving the training samples; para. [0070, 0073]);

determining at least one correlation of the one or more features with the set of classes respectively (i.e. pairs of intents that are difficult to differentiate based upon the given training samples (e.g., user utterances) may be identified from a plurality of intents based upon distinguishability scores (e.g., F-scores). For each of the identified pairs of intents, pairs of training samples each including a training sample associated with one intent and a training sample associated with the other intent are ranked based upon a similarity score (e.g., a Jaccard similarity score or a Levenshtein distance) between the two training samples in each pair. The identified pairs of intents and the corresponding pairs of training samples having the highest similarity scores may be presented to users through a user interface, along with user-selectable options for improving the training samples; para. [0070, 0075]);

generating at least one diagnostic example for the correlation (i.e. pairs of intents that are difficult to differentiate based upon the given training samples (e.g., user utterances) may be identified from a plurality of intents based upon distinguishability scores (e.g., F-scores). For each of the identified pairs of intents, pairs of training samples each including a training sample associated with one intent and a training sample associated with the other intent are ranked based upon a similarity score (e.g., a Jaccard similarity score or a Levenshtein distance) between the two training samples in each pair. The identified pairs of intents and the corresponding pairs of training samples having the highest similarity scores may be presented to users through a user interface, along with user-selectable options for improving the training samples; para. [0070]);

wherein the at least one diagnostic example requires the developer to one of validate or invalidate a correctness of the correlation produced by the at least one diagnostic example (i.e. the computing system may provide, through a graphic user interface (GUI), information that may be used by the developer of the bot system to improve the intent classification. The information may include the identified pairs of intents that are determined to be difficult to distinguish, at least a portion of the corresponding pairs of training samples (e.g., pairs with the highest similarity scores) for each identified pair of intents, and user-selectable options for improving classification of utterances associated with the plurality of intents. In some embodiments, the information may be displayed on one or more GUI screens. For example, some information may be displayed on a new GUI screen when the developer selects a selectable item on a GUI screen, such as a link, a button, a user menu item, and the like. In some embodiments, the user-selectable options may include removing, adding, or modifying some training samples, and/or adding, deleting, or updating some intents; para. [0027, 0076]); and

when the developer invalidates the correctness of the at least one correlation of the one or more features with the set of classes respectively produced by the at least one diagnostic example (i.e. The developer may edit (e.g., modify or remove) any utterance by clicking an icon 722. In some embodiments, the developer may also add new utterance by clicking an icon 724. After the editing, the developer may validate the modified utterances by clicking a "Validate" button 730; para. [0089]):

locating existing examples in the training set having the at least one correlation of the at least one correlation of the one or more features with the set of classes respectively (i.e. At 1060, the computing system may select one or more pairs of training samples that have the highest similarity scores among the pairs of training samples. In some embodiments, the computing system may select one or more pairs of training samples having similarity scores greater than a certain threshold value; para. [0099]), and

automatically modifying the existing examples in the training set having the at least one correlation in the training set to suppress the at least one correlation (i.e. the developer may also add new utterance by clicking an icon 724. After the editing, the developer may validate the modified utterances by clicking a "Validate" button 730, and/or retrain the classification model for distinguishing the pair of intents by clicking a "Train" button 740; para. [0088, 0089]), "automatically modifying" can include system-perform modification of stored training samples once the developer selects an option, the modification is executed by the computer rather than manually rewriting a dataset outside the tool, improving distinguishability by editing training samples/labels so the model stops relying on the problematic similarity pattern, thereby, "suppress the correlation".

Claim 6: Singaraju teaches the computer-implemented method as claimed in claim 1.
Singaraju further teaches wherein determining the at least one correlation includes at least one of computing the at least one correlation over a set of examples or extracting the at least one correlation from the classification model (i.e. pairs of intents that are difficult to differentiate based upon the given training samples (e.g., user utterances) may be identified from a plurality of intents based upon distinguishability scores (e.g., F-scores). For each of the identified pairs of intents, pairs of training samples each including a training sample associated with one intent and a training sample associated with the other intent are ranked based upon a similarity score (e.g., a Jaccard similarity score or a Levenshtein distance) between the two training samples in each pair. The identified pairs of intents and the corresponding pairs of training samples having the highest similarity scores may be presented to users through a user interface, along with user-selectable options for improving the training samples; para. [0070, 0073, 0082, 0086]).

Claim 10: Singaraju teaches the computer-implemented method as claimed in claim 1. Singaraju further teaches wherein the at least one diagnostic example includes at least one of: a text-based question, an image-based question, an audio-based question, a video-based question, or a data-based question for the developer to at least one of validate or invalidate the correctness of the correlation produced (i.e. pairs of intents that are difficult to differentiate based upon the given training samples (e.g., user utterances) may be identified from a plurality of intents based upon distinguishability scores (e.g., F-scores). For each of the identified pairs of intents, pairs of training samples each including a training sample associated with one intent and a training sample associated with the other intent are ranked based upon a similarity score (e.g., a Jaccard similarity score or a Levenshtein distance) between the two training samples in each pair. The identified pairs of intents and the corresponding pairs of training samples having the highest similarity scores may be presented to users through a user interface, along with user-selectable options for improving the training samples; para. [0070, 0076]).

Claim 11: Singaraju teaches the computer-implemented method as claimed in claim 1. Singaraju further teaches wherein generating the at least one diagnostic example comprises: accessing a plurality of examples within a training set (i.e. a computing system may receive training samples for training one or more classifiers to distinguish inputs associated with a plurality of intents. The plurality of intents may be generated or identified by a developer of a bot system as described above. The training samples may include examples of end user utterances that users may communicate with the bot system. The training samples may also include the end user intents associated with the end user utterances. For example, the training samples may include annotations or labels indicating the end user intents associated with respective end user utterances; para. [0072]); extracting the one or more features for each example of the plurality of examples (i.e. for each pair of intents that is identified as difficult to distinguish, the computing system may rank pairs of training samples that each include a training sample associated with one intent and a training sample with the other intent in the pair of intents based upon a similarity score between the two training samples in each pair of training samples. The pairs of training samples may include any pair of training samples that may include a training sample associated with a first intent in the pair of intents and a training sample associated with a second intent in the pair of intents. In some embodiments, the similarity score may include a Jaccard similarity score (or Jaccard distance) or a Levenshtein distance as described in detail below; para. [0075, 0076]); and generating the at least one diagnostic example based upon, at least in part, the extracted features (i.e. the computing system may provide, through a graphic user interface (GUI), information that may be used by the developer of the bot system to improve the intent classification. The information may include the identified pairs of intents that are determined to be difficult to distinguish, at least a portion of the corresponding pairs of training samples (e.g., pairs with the highest similarity scores) for each identified pair of intents, and user-selectable options for improving classification of utterances associated with the plurality of intents; para. [0076]).

Claim 12: Singaraju teaches the computer-implemented method as claimed in claim 1. Singaraju further teaches wherein each of the one or more features comprise at least one of a word, a part of a word, a phrase, a sentence, a paragraph, a combination of words, a portion of an image, a portion of an audio, a portion of a video, or a portion of data (i.e. an "utterance" may refer to any sentence a customer or end user uses to communicate with a bot system. An "intent" may refer to an action that an end user intends to take or intends the bot system to take, or a goal that the end user would like to accomplish, when communicating with the bot system using one or more utterances; para. [0028, 0076]).

Claim 14: Singaraju teaches the computer-implemented method as claimed in claim 1. Singaraju further teaches comprising: receiving an input from the developer that one of validates or invalidates the correctness of the correlation in response to the at least one diagnostic example (i.e. techniques disclosed herein can be used to debug and/or optimize classification models used by a bot system to determine end user intents based upon user utterances. For example, the techniques may identify possible root causes of misclassifications by a classification model, such as identifying specific training samples that are associated with different intents but are very similar, or specific intents that may need to be better defined. Thus, a developer may only need to review and edit the identified training samples or intents. In some embodiments, only classification models associated with the updated intents or the updated training samples may be retrained. Thus, the developer can quickly verify the effectiveness of the editing for the optimization using techniques disclosed herein, without having to retrain all intent classification models for the bot system; para. [0027]); when the developer invalidates the correctness of the correlation selected, at least one of: recommending the developer provide an additional set of examples used as training data for adjusting the classification model to suppress the correlation selected between the one or more features selected and the set of classes; and receiving the additional set of examples (i.e. the information may be displayed on one or more GUI screens. For example, some information may be displayed on a new GUI screen when the developer selects a selectable item on a GUI screen, such as a link, a button, a user menu item, and the like. In some embodiments, the user-selectable options may include removing, adding, or modifying some training samples, and/or adding, deleting, or updating some intents. In some embodiments, modifying a training sample may include modifying the utterance associated with the training sample. In some embodiments, modifying a training sample may include modifying the annotation or label of the end user intent associated with the training sample. In some embodiments, adding an intent may include adding training samples associated with the intent. In some embodiments, modifying an intent may include modifying the description of the intent; para. [0076]); automatically generating an additional set of examples used as training data for adjusting the classification model to suppress the correlation selected between the one or more features selected and the set of classes; automatically generating an additional set of examples and recommending at least one of the developer revise or approve the additional set of examples such that the additional set of examples are used as training data for adjusting the classification model to suppress the correlation selected between the one or more features selected and the set of classes; or adjusting the classification model by modifying at least one parameter of the classification model (i.e. fig. 3, the information may be displayed on one or more GUI screens. For example, some information may be displayed on a new GUI screen when the developer selects a selectable item on a GUI screen, such as a link, a button, a user menu item, and the like. In some embodiments, the user-selectable options may include removing, adding, or modifying some training samples, and/or adding, deleting, or updating some intents. In some embodiments, modifying a training sample may include modifying the utterance associated with the training sample. In some embodiments, modifying a training sample may include modifying the annotation or label of the end user intent associated with the training sample. In some embodiments, adding an intent may include adding training samples associated with the intent. In some embodiments, modifying an intent may include modifying the description of the intent; para. [0070, 0075, 0076]).

Claim 18: Singaraju teaches the computer-implemented method as claimed in claim 1. Singaraju further teaches comprising: adjusting the classification model (i.e. The identified pairs of intents and the corresponding pairs of training samples having the highest similarity scores may be presented to users through a user interface, along with user-selectable options for improving the training samples. The user-selectable options may include adding, deleting, or modifying some training samples, or adding, deleting, or modifying some intents.
The updated training samples and end user intents may be used to retrain one or more classification models for differentiating the end user intents that are difficult to differentiate. The above-described processing may be performed recursively until no pair of end user intents may be identified as being difficult to differentiate; para. [0070]); and re-determining the at least one correlation of the one or more features selected upon adjusting the classification model (i.e. The identified pairs of intents and the corresponding pairs of training samples having the highest similarity scores may be presented to users through a user interface, along with user-selectable options for improving the training samples. The user-selectable options may include adding, deleting, or modifying some training samples, or adding, deleting, or modifying some intents. The updated training samples and end user intents may be used to retrain one or more classification models for differentiating the end user intents that are difficult to differentiate. The above-described processing may be performed recursively until no pair of end user intents may be identified as being difficult to differentiate; para. [0070]).

Claim 20: Singaraju teaches the computer-implemented method as claimed in claim 1. Singaraju further teaches comprising: iteratively generating another diagnostic example for the developer for another correlation selected from the at least one correlation, wherein the another diagnostic example requires the developer to one of validate or invalidate the correctness of another correlation produced by the another diagnostic example (i.e. The identified pairs of intents and the corresponding pairs of training samples having the highest similarity scores may be presented to users through a user interface, along with user-selectable options for improving the training samples. The user-selectable options may include adding, deleting, or modifying some training samples, or adding, deleting, or modifying some intents. The updated training samples and end user intents may be used to retrain one or more classification models for differentiating the end user intents that are difficult to differentiate. The above-described processing may be performed recursively until no pair of end user intents may be identified as being difficult to differentiate; para. [0027, 0070, 0076]).

Claims 22-25, 27, and 29-35 are similar in scope to Claims 1, 6, 10-12, 14, 18, 20 and are rejected under a similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Shamir (Pub. No. US 20170161640 A1), if it is determined that the benefit score 526 does not satisfy a predetermined threshold, the feature weight 426 may be reduced. In some instances, the feature weight 426 may be moderately reduced to reduce the feature's 226 influence on the machine learning model's 210 inferences.

It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN whose telephone number is (303)297-4266. The examiner can normally be reached on Monday - Thursday - 8:00 am - 5:00 pm MT.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matt Ell can be reached on 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAN H TRAN/
Primary Examiner, Art Unit 2141
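The §102 mapping above leans on Singaraju's similarity-scoring step: cross-intent pairs of training utterances are ranked by a similarity score (e.g., Jaccard similarity or Levenshtein distance), and the highest-scoring pairs are surfaced to the developer for editing. As a rough illustration of that step only, not Singaraju's actual implementation and not the claimed method, a token-level Jaccard ranking could look like the sketch below; the example utterances, tokenization, and 0.5 review threshold are invented for this sketch:

from itertools import product

def jaccard(a: str, b: str) -> float:
    # Jaccard similarity between the word sets of two utterances.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

intent_a = ["check my account balance", "what is my balance"]  # hypothetical samples for intent A
intent_b = ["check my card balance", "report a lost card"]     # hypothetical samples for intent B

# Pair every utterance of one intent with every utterance of the other, then rank by similarity.
pairs = sorted(((jaccard(u, v), u, v) for u, v in product(intent_a, intent_b)), reverse=True)

# The highest-scoring cross-intent pairs are the ones a developer would be asked to review or edit.
for score, u, v in pairs:
    if score >= 0.5:  # assumed review threshold
        print(f"{score:.2f}  {u!r}  <->  {v!r}")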

Prosecution Timeline

Sep 03, 2021
Application Filed
Nov 09, 2024
Non-Final Rejection — §101, §102
Feb 14, 2025
Response Filed
Apr 08, 2025
Final Rejection — §101, §102
Jul 11, 2025
Response after Non-Final Action
Aug 08, 2025
Request for Continued Examination
Aug 15, 2025
Response after Non-Final Action
Jan 22, 2026
Non-Final Rejection — §101, §102
Apr 07, 2026
Examiner Interview Summary
Apr 07, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by the same examiner involving similar technology

Patent 12594668
BRAIN-LIKE DECISION-MAKING AND MOTION CONTROL SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12579420
Analog Hardware Realization of Trained Neural Networks
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579421
Analog Hardware Realization of Trained Neural Networks
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572850
METHOD FOR IMPLEMENTING MODEL UPDATE AND DEVICE THEREOF
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12572326
DIGITAL ASSISTANT FOR MOVING AND COPYING GRAPHICAL ELEMENTS
Granted Mar 10, 2026 (2y 5m to grant)
Review these to see what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 60%
With Interview: 92% (+31.8%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 307 resolved cases by this examiner. Grant probability derived from career allow rate.
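The 92% with-interview projection is consistent with adding the reported interview lift, in percentage points, to the career allow rate; whether the tool actually combines the figures this way is an assumption:

base_grant_probability = 0.60   # career allow rate used as the baseline
interview_lift = 0.318          # reported +31.8 percentage points
with_interview = base_grant_probability + interview_lift
print(f"projected grant probability with an interview: {with_interview:.0%}")  # ~92%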
