Prosecution Insights
Last updated: April 19, 2026
Application No. 19/080,415

Deep Clustering Method, Apparatus, and System

Non-Final OA — §101, §103, §112
Filed: Mar 14, 2025
Examiner: LE, MICHAEL
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 66% (568 granted / 864 resolved), above average at +10.7% vs TC avg
Interview Lift: +22.1% (strong) across resolved cases with an interview
Typical Timeline: 3y 3m avg prosecution; 61 applications currently pending
Career History: 925 total applications across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 52.7% (+12.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Deltas are relative to an estimated Tech Center average (a 40.0% baseline in each row, per the displayed figures) • Based on career data from 864 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

Summary and Status of Claims

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. This Office Action is in response to Application No. 19/080,415 filed 3/14/2025 with a preliminary amendment filed 4/7/2025. Claims 1-20 are pending.

Claims 3 and 10-12 are rejected under 35 U.S.C. 112(b). Claims 12 and 18-20 are rejected under 35 U.S.C. 101. Claims 1-4, 7, 9, 10, 12-16, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Koshinaka (US Patent Pub 2012/0239400) in view of Ronen et al. ("DeepDPM: Deep Clustering with an Unknown Number of Clusters", June 2022) [1]. Claims 5, 11, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Koshinaka in view of Ronen, further in view of Patterson et al. (US Patent Pub 2007/0174267). Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Koshinaka in view of Ronen, further in view of Aggarwal et al. (US Patent 12,130,841). Claim 6 is objected to as being directed to allowable subject matter.

Priority

This application is a continuation of PCT/CN2023/102730 filed 6/27/2023, which claims foreign priority to Chinese Application CN 202211122497.5 filed 9/15/2022. All certified foreign priority documents were received on 4/9/2025 and are acknowledged.

Information Disclosure Statement

The information disclosure statement filed 4/14/2025 has been fully considered, initialed, and signed by the Examiner. A copy is attached to this Office action.

Claim Objections

Claim 7 is objected to for minor informalities: in claim 7, line 1, "each of weights" should read "each weight". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 3 and 10-12 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

Claim 3 recites "wherein the raw data comprises … a medical image …", which is a single piece of data. However, base claim 1 recites "at least two pieces of data". In interpretations where the raw data comprises a single medical image, it is unclear how the method of claim 1 is performed. Clarification is required.

Claim 10 recites "aggregating target neurons at the first output layer …". However, base claim 1 recites "wherein the one or more first neurons comprise a target neuron." In other words, regardless of whether there is one "first neuron" or a plurality of "first neurons", they can only comprise a single "target neuron." It is therefore unclear how aggregation of "target neurons" (i.e., a plurality of neurons) can be achieved in claim 10. Clarification is required.

Claim 12 recites similar limitations as claim 3 and is rejected for the same reasons. Claim 11 is rejected because it depends on a rejected claim.
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 12 and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent-eligible subject matter because the components of the deep clustering system can be interpreted as purely software components based on the specification at para. 0185, which states that "both the deep clustering apparatus and the intelligent device may be implemented by software, or may be implemented by hardware." Accordingly, the components of the system can be interpreted as software per se. For a system to be categorized as a machine or manufacture, it must include physical components. As explained above, the broadest reasonable interpretation of the system of claims 12 and 18-20 is a system comprising purely software components. Accordingly, claims 12 and 18-20 are directed to non-statutory subject matter.

To expedite a complete examination of the instant application, the claims rejected under 35 U.S.C. 101 (nonstatutory) above are further rejected as set forth below in anticipation of applicant amending these claims to overcome the rejection.

Note on Prior Art Rejections

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 7, 9, 10, 12-16, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Koshinaka (US Patent Pub 2012/0239400) in view of Ronen et al. ("DeepDPM: Deep Clustering with an Unknown Number of Clusters", June 2022) (Ronen).

Regarding claim 1, Koshinaka discloses a method comprising: providing a first clustering model having a first output layer that comprises one or more first neurons, wherein the one or more first neurons comprise a target neuron (Koshinaka at para. 0125) [2]; obtaining one or more first classes of raw data based on outputs from the one or more first neurons, wherein the one or more first classes comprise a first target class, and wherein the target neuron outputs the first target class (Koshinaka at para. 0125) [3]; and splitting the target neuron to obtain a second clustering model when a first similarity between at least two pieces of data corresponding to the first target class meets a splitting condition (Koshinaka at para. 0199) [4], wherein the second clustering model has a second output layer that comprises second neurons (Koshinaka at para. 0199), and wherein each of the second neurons outputs a second class (Koshinaka at para. 0199) [5].

Koshinaka does not expressly disclose that the clustering models are deep clustering models. Ronen discloses a non-parametric deep clustering model that utilizes a probability calculation (i.e., similarity) to determine whether to split or merge clusters. Ronen at pg. 5, section 4.2. Koshinaka and Ronen are analogous art because they are directed to the same field of endeavor of clustering models. Before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to modify Koshinaka by making the clustering models deep clustering models, as disclosed by Ronen. The motivation for doing so would have been that deep learning models can cluster large and high-dimensional datasets better than non-deep clustering methods. Ronen at pg. 1, section 1.

Regarding claim 2, Koshinaka in view of Ronen discloses the method of claim 1, wherein a number of the one or more first classes is preset. Koshinaka at para. 0125 [6].

Regarding claim 3, Koshinaka in view of Ronen discloses the method of claim 1, wherein the raw data comprises picture data, text data, voice data, a medical image, or video data. Koshinaka at para. 0045 [7].

Regarding claim 4, Koshinaka in view of Ronen discloses the method of claim 1, further comprising: determining probability distributions of the at least two pieces of data (Koshinaka at para. 0199) [8]; and obtaining, based on a second similarity between the probability distributions, the first similarity. Koshinaka at para. 0199 [9].

Regarding claim 7, Koshinaka in view of Ronen discloses the method of claim 1, wherein each of the weights of the second neurons is a superposition value of a weight of the target neuron and Gaussian noise. Ronen at pg. 3, section 3 [10].

Regarding claim 9, Koshinaka in view of Ronen discloses the method of claim 1, further comprising obtaining the second class using a trained second deep clustering model. Ronen at pg. 5, section 4.2 [11].
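For orientation, the split mechanism at issue in claims 1 and 7 (splitting a target output neuron into second neurons whose weights superpose the parent weight with Gaussian noise) can be pictured with a minimal sketch. This is hypothetical Python under assumed simplifications (a dense output layer stored as one weight row per neuron, a two-way split, and an arbitrary noise scale); it is not code from the application or the cited references.

    import numpy as np

    def split_target_neuron(output_weights, target_idx, noise_std=0.01, rng=None):
        """Split one output neuron into two child neurons.

        Each child weight vector is the parent's weight vector plus
        independent Gaussian noise, i.e., a "superposition value of a
        weight of the target neuron and Gaussian noise".
        """
        rng = np.random.default_rng() if rng is None else rng
        parent = output_weights[target_idx]
        children = parent + rng.normal(0.0, noise_std, size=(2,) + parent.shape)
        # The parent row is replaced by the two child rows, so the
        # output layer grows from K neurons to K + 1 neurons.
        return np.concatenate(
            [output_weights[:target_idx], children, output_weights[target_idx + 1:]],
            axis=0,
        )

Both children start near the parent cluster and can diverge during subsequent training, which is why the split is only performed when the similarity of data within the target class meets the splitting condition.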
Regarding claim 10, Koshinaka in view of Ronen discloses the method of claim 1, further comprising: aggregating target neurons at the first output layer to obtain a third deep clustering model when the first similarity meets the splitting condition and when a second similarity between the data corresponding to the first target class and second data corresponding to a second target class meets an aggregation condition (Koshinaka at paras. 0199-0200) [12], wherein the one or more first neurons comprise the target neurons that output the first target class and the second target class (Koshinaka at paras. 0199-0200) [13], wherein the third deep clustering model comprises a third output layer that comprises third neurons resulting from aggregating the target neurons, and wherein each of the third neurons outputs a third class. Koshinaka at para. 0200 [14].

Regarding claim 12, Koshinaka discloses a deep clustering system, comprising: an intelligent device configured to collect raw data, wherein the raw data comprises picture data, text data, voice data, a medical image, or video data (Koshinaka at paras. 0041-0042) [15]; and a deep clustering apparatus coupled to the intelligent device (Koshinaka at para. 0041) and configured to: cluster the raw data (Koshinaka at para. 0050) [16]; provide a first clustering model having a first output layer that comprises one or more first neurons, wherein the one or more first neurons comprise a target neuron (Koshinaka at para. 0125) [17]; obtain one or more first classes of the raw data based on outputs from the one or more first neurons, wherein the one or more first classes comprise a first target class, and wherein the target neuron outputs the first target class (Koshinaka at para. 0125) [18]; and split the target neuron to obtain a second clustering model when a first similarity between at least two pieces of data corresponding to the first target class meets a splitting condition (Koshinaka at para. 0199) [19], wherein the second clustering model has a second output layer that comprises second neurons (Koshinaka at para. 0199), and wherein each of the second neurons outputs a second class (Koshinaka at para. 0199) [20].

Koshinaka does not expressly disclose that the clustering models are deep clustering models. Ronen discloses a non-parametric deep clustering model that utilizes a probability calculation (i.e., similarity) to determine whether to split or merge clusters. Ronen at pg. 5, section 4.2. Koshinaka and Ronen are analogous art because they are directed to the same field of endeavor of clustering models. Before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to modify Koshinaka by making the clustering models deep clustering models, as disclosed by Ronen. The motivation for doing so would have been that deep learning models can cluster large and high-dimensional datasets better than non-deep clustering methods. Ronen at pg. 1, section 1.
Regarding claim 13, Koshinaka discloses a computing device, comprising: a memory configured to store instructions (Koshinaka at paras. 0021-0022); and one or more processors coupled to the memory, wherein, when executed by the one or more processors (Koshinaka at paras. 0021-0022), the instructions cause the computing device to: provide a first clustering model having a first output layer that comprises one or more first neurons, wherein the one or more first neurons comprise a target neuron (Koshinaka at para. 0125) [21]; obtain one or more first classes of the raw data based on outputs from the one or more first neurons, wherein the one or more first classes comprise a first target class, and wherein the target neuron outputs the first target class (Koshinaka at para. 0125) [22]; and split the target neuron to obtain a second clustering model when a first similarity between at least two pieces of data corresponding to the first target class meets a splitting condition (Koshinaka at para. 0199) [23], wherein the second clustering model has a second output layer that comprises second neurons (Koshinaka at para. 0199), and wherein each of the second neurons outputs a second class (Koshinaka at para. 0199) [24].

Koshinaka does not expressly disclose that the clustering models are deep clustering models. Ronen discloses a non-parametric deep clustering model that utilizes a probability calculation (i.e., similarity) to determine whether to split or merge clusters. Ronen at pg. 5, section 4.2. Koshinaka and Ronen are analogous art because they are directed to the same field of endeavor of clustering models. Before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to modify Koshinaka by making the clustering models deep clustering models, as disclosed by Ronen. The motivation for doing so would have been that deep learning models can cluster large and high-dimensional datasets better than non-deep clustering methods. Ronen at pg. 1, section 1.

Claims 14-16 are essentially the same as claims 2-4, respectively, in the form of a computing device, and are therefore rejected for the same reasons. Claims 18 and 19 are essentially the same as claims 2 and 4, respectively, in the form of a system, and are therefore rejected for the same reasons.

Claims 5, 11, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Koshinaka (US Patent Pub 2012/0239400) in view of Ronen et al. ("DeepDPM: Deep Clustering with an Unknown Number of Clusters", June 2022) (Ronen), further in view of Patterson et al. (US Patent Pub 2007/0174267) (Patterson).

Regarding claim 5, Koshinaka in view of Ronen discloses the method of claim 1, but does not expressly disclose wherein the splitting condition comprises that a first Jensen-Shannon (JS) divergence is greater than a splitting threshold, and wherein the first JS divergence indicates the first similarity. It is noted that Koshinaka discloses that a similarity calculation is used to determine whether to split a cluster based on the calculation being greater than a threshold. Koshinaka at para. 0199. Patterson discloses a system and method for computer-aided document retrieval. The method includes determining cluster attractors for a plurality of documents, calculating a probability distribution of a term, and calculating the entropy of the probability distribution. Patterson at abstract. Patterson further discloses calculating a Jensen-Shannon divergence between probability distributions to estimate the similarity. Patterson at para. 0072. Koshinaka, Ronen, and Patterson are analogous art because they are directed to the same field of endeavor of clustering.
Before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to modify Koshinaka in view of Ronen by adding the features wherein the splitting condition comprises that a first Jensen-Shannon (JS) divergence is greater than a splitting threshold, and wherein the first JS divergence indicates the first similarity, as disclosed by Patterson. The motivation for doing so would have been that using JS divergence provides a more natural way of estimating similarity. Patterson at para. 0031.

Regarding claim 11, Koshinaka in view of Ronen discloses the method of claim 1, but does not expressly disclose wherein the aggregation condition comprises that a second Jensen-Shannon (JS) divergence is less than an aggregation threshold, and wherein the second JS divergence indicates the second similarity. It is noted that Koshinaka discloses that a similarity calculation is used to determine whether to merge (i.e., aggregate) clusters based on the calculation being less than a threshold. Koshinaka at para. 0200. Patterson discloses a system and method for computer-aided document retrieval. The method includes determining cluster attractors for a plurality of documents, calculating a probability distribution of a term, and calculating the entropy of the probability distribution. Patterson at abstract. Patterson further discloses calculating a Jensen-Shannon divergence between probability distributions to estimate the similarity. Patterson at para. 0072. Koshinaka, Ronen, and Patterson are analogous art because they are directed to the same field of endeavor of clustering. Before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to modify Koshinaka in view of Ronen by adding the features wherein the aggregation condition comprises that a second Jensen-Shannon (JS) divergence is less than an aggregation threshold, and wherein the second JS divergence indicates the second similarity, as disclosed by Patterson. The motivation for doing so would have been that using JS divergence provides a more natural way of estimating similarity. Patterson at para. 0031.

Claim 17 is essentially the same as claim 5 in the form of a computing device and is rejected for the same reasons. Claim 20 is essentially the same as claim 5 in the form of a system and is rejected for the same reasons.
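Claims 5 and 11, as mapped above, gate splitting and aggregation on Jensen-Shannon (JS) divergence thresholds. A minimal Python sketch of that decision rule follows; the threshold values are placeholders, and note that scipy's jensenshannon() returns the JS distance (the square root of the divergence), so it is squared here.

    from scipy.spatial.distance import jensenshannon

    def split_or_merge(p, q, split_threshold=0.5, merge_threshold=0.1):
        """Decide whether to split or aggregate based on the JS
        divergence between two probability distributions p and q
        defined over the same support."""
        js_div = jensenshannon(p, q) ** 2
        if js_div > split_threshold:
            return "split"   # distributions differ enough to split the target neuron
        if js_div < merge_threshold:
            return "merge"   # distributions are close enough to aggregate the neurons
        return "keep"

JS divergence is symmetric and bounded (between 0 and ln 2 in nats), which is one reason it is often preferred over KL divergence as a similarity measure between cluster distributions.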
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Koshinaka (US Patent Pub 2012/0239400) in view of Ronen et al. ("DeepDPM: Deep Clustering with an Unknown Number of Clusters", June 2022) (Ronen), further in view of Aggarwal et al. (US Patent 12,130,841) (Aggarwal).

Regarding claim 8, Koshinaka in view of Ronen discloses the method of claim 1, but does not expressly disclose further comprising training the second deep clustering model by: inputting the raw data and a number of second classes into the second deep clustering model; and optimizing and updating weights of the second neurons at the second output layer in a training process to obtain a trained second deep clustering model. Aggarwal discloses a framework for dynamic clustering of events. The neural network used for clustering is trained so that it is optimized; this is done by iteratively training the neural network and modifying the model parameters associated with the clustering component. Aggarwal at col. 5, lines 59-67; col. 6, lines 1-34. The model is trained on training data including the sequence of events (i.e., raw data), and the model parameters include the number of clusters. Aggarwal at col. 19, lines 49-60. Koshinaka, Ronen, and Aggarwal are analogous art because they are directed to the same field of endeavor of clustering models. Before the effective filing date of the instant application, it would have been obvious to one of ordinary skill in the art to modify Koshinaka in view of Ronen by adding the features of training the second deep clustering model by inputting the raw data and a number of second classes into the second deep clustering model and optimizing and updating weights of the second neurons at the second output layer in a training process to obtain a trained second deep clustering model, as disclosed by Aggarwal. The motivation for doing so would have been to more accurately cluster new data based on the training data. Aggarwal at col. 6, lines 29-34.
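Claim 8's training step, feeding the raw data and the new number of classes into the split model and optimizing the second-layer weights, corresponds to an ordinary retraining loop over the enlarged output layer. A minimal PyTorch-flavored sketch under assumed simplifications (a frozen encoder exposing its embedding width as out_features, and pseudo-labels from the model's own hard assignments as the training signal, since no supervised objective is specified in the record):

    import torch
    from torch import nn

    def retrain_after_split(encoder, num_second_classes, raw_data, epochs=10):
        """Rebuild and retrain the output layer after a split (sketch).

        One output neuron per second class; only the head's weights
        are optimized and updated, mirroring the claimed training of
        the second neurons' weights at the second output layer.
        """
        head = nn.Linear(encoder.out_features, num_second_classes)
        opt = torch.optim.Adam(head.parameters(), lr=1e-3)
        for _ in range(epochs):
            with torch.no_grad():
                z = encoder(raw_data)          # frozen embedding of the raw data
            logits = head(z)
            targets = logits.argmax(dim=1)     # self-training pseudo-labels (assumption)
            loss = nn.functional.cross_entropy(logits, targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return head

The particular loss is an assumption for illustration; the claim only requires that the raw data and the number of second classes go in and that the second-neuron weights are optimized and updated during training.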
Allowable Subject Matter

Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Additional Prior Art

Additional relevant prior art is listed on the attached PTO-892 form. Some examples:

Gillick et al. (US Patent 4,837,831) discloses a system and method using multi-word sound models for clustering speech.
Berkner (US Patent Pub 2006/0136478) discloses a system and method for generating clusters and determining when to split them.
Handley (US Patent Pub 2009/0006176) discloses a system and method for clustering print items and determining when to divide them into multiple clusters based on similarity values.
Ahuja et al. (US Patent Pub 2012/0075440) discloses a system and method for entropy-based image segmentation and clustering of the images based on entropy.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Examiner Michael Le, whose telephone number is 571-272-7970 and fax number is 571-273-7970. The examiner can normally be reached Mon-Fri, 9:30 AM to 6 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL LE/
Examiner, Art Unit 2163

/TONY MAHMOUDI/
Supervisory Patent Examiner, Art Unit 2163

Footnotes

[1] Provided in the information disclosure statement filed 4/14/2025.
[2] A learning means and classification means (i.e., first clustering model) output a defined number of clusters (i.e., first neurons, wherein the first neurons comprise a target neuron).
[3] The classification means outputs clusters (i.e., first classes) of the analyzed speech data (i.e., raw data).
[4] A cluster is split (i.e., split the target neuron) to update the structure of the model (i.e., obtain a second clustering model) when the probability and degree of difference (i.e., a first similarity between at least two pieces of data) is larger than a threshold (i.e., meets a splitting condition).
[5] The split clusters produce the new clusters (i.e., second neurons output second classes).
[6] The number of clusters T is previously defined (i.e., the number of the one or more first classes is preset).
[7] The data being clustered is speech data (i.e., voice data).
[8] A probability that session pairs (i.e., at least two pieces of data) belong to the same cluster and a degree of difference between the appearance probabilities of the speakers for respective session pairs.
[9] Based on the probabilities of the speakers for respective session pairs (i.e., probability distributions), a degree of difference is determined (i.e., the first similarity is obtained from the probability distributions).
[10] Model parameters include weights and Gaussian values.
[11] Changing K via splits and merges is performed during training (i.e., using a trained model to obtain the second class).
[12] When the splitting condition is met and the merging condition is met (i.e., aggregation condition), clusters are merged (i.e., aggregating target neurons at the first output layer).
[13] The clustering model outputs clusters (i.e., the first target class and the second target class).
[14] Merged clusters (i.e., third neurons) output merged clusters (i.e., third classes).
[15] Speech data (i.e., raw data comprising voice data) is received and stored to be analyzed.
[16] Speech data is clustered.
[17] A learning means and classification means (i.e., first clustering model) output a defined number of clusters (i.e., first neurons, wherein the first neurons comprise a target neuron).
[18] The classification means outputs clusters (i.e., first classes) of the analyzed speech data (i.e., raw data).
[19] A cluster is split (i.e., split the target neuron) to update the structure of the model (i.e., obtain a second clustering model) when the probability and degree of difference (i.e., a first similarity between at least two pieces of data) is larger than a threshold (i.e., meets a splitting condition).
[20] The split clusters produce the new clusters (i.e., second neurons output second classes).
[21] A learning means and classification means (i.e., first clustering model) output a defined number of clusters (i.e., first neurons, wherein the first neurons comprise a target neuron).
[22] The classification means outputs clusters (i.e., first classes) of the analyzed speech data (i.e., raw data).
[23] A cluster is split (i.e., split the target neuron) to update the structure of the model (i.e., obtain a second clustering model) when the probability and degree of difference (i.e., a first similarity between at least two pieces of data) is larger than a threshold (i.e., meets a splitting condition).
[24] The split clusters produce the new clusters (i.e., second neurons output second classes).

Prosecution Timeline

Mar 14, 2025
Application Filed
Apr 07, 2025
Preliminary Amendment
Jan 10, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579211
AUTOMATED SHIFTING OF WEB PAGES BETWEEN DIFFERENT USER DEVICES
2y 5m to grant • Granted Mar 17, 2026
Patent 12579738
INFORMATION PRESENTING METHOD, SYSTEM THEREOF, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant • Granted Mar 17, 2026
Patent 12579072
GRAPHICS PROCESSOR REGISTER FILE INCLUDING A LOW ENERGY PORTION AND A HIGH CAPACITY PORTION
2y 5m to grant • Granted Mar 17, 2026
Patent 12573094
COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
2y 5m to grant • Granted Mar 10, 2026
Patent 12558788
SYSTEM AND METHOD FOR REAL-TIME ANIMATION INTERACTIVE EDITING
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview: 88% (+22.1%)
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 864 resolved cases by this examiner. Grant probability derived from career allow rate.
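The displayed projections follow from the examiner's career figures above under a simple additive interview-lift model; the additive formula is an assumption suggested by the card labels, not a documented methodology:

    granted, resolved = 568, 864
    base = granted / resolved               # 0.657 -> displayed as 66%
    lift = 0.221                            # interview lift from the examiner card
    with_interview = min(base + lift, 1.0)  # 0.878 -> displayed as 88%
    print(f"{base:.0%} base, {with_interview:.0%} with interview")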

Free tier: 3 strategy analyses per month