Prosecution Insights
Last updated: April 19, 2026
Application No. 18/557,922

SYSTEMS AND METHODS FOR MULTI-TASK AND MULTI-SCENE UNIFIED RANKING

Final Rejection — §101, §103
Filed: Oct 27, 2023
Examiner: CAMPBELL, SHANNON S
Art Unit: 3628
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Baidu Com Times Technology (Beijing) Co. Ltd.
OA Round: 2 (Final)

Grant Probability: 31% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 8m
With Interview: 40%

Examiner Intelligence

Career Allow Rate: 31% (73 granted / 238 resolved; -21.3% vs TC avg)
Interview Lift: +9.2% (moderate lift; resolved cases with vs. without interview)
Typical Timeline: 4y 8m average prosecution; 12 applications currently pending
Career History: 250 total applications across all art units

Statute-Specific Performance

§101: 23.1% (-16.9% vs TC avg)
§103: 48.0% (+8.0% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 238 resolved cases.

Office Action

Grounds: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Applicant has not amended or cancelled any claims. Thus, claims 1-20 are pending and presented for examination.

Response to Arguments

Applicant's arguments filed September 1, 2025 have been fully considered but they are not persuasive.

Applicant argues with respect to the §101 rejection that the invention cannot be performed in the human mind due to the scale of the activity, and therefore does not recite a mental process. First, the Examiner asserts that each of these steps constitutes mathematical operations on data (e.g., vector/matrix transformations, function application, and optimization via loss functions). These operations are the type of processes that, in their basic form, could be performed mentally or with the aid of pen and paper, regardless of whether doing so would be practically feasible for a human at scale. See MPEP § 2106.04(a)(2)(III) (“The ‘mental processes’ grouping is not limited to processes that can be practically performed by a human in the mind; rather, it includes processes that can be theoretically performed in the mind.”). The use of generic computer components (e.g., “processor,” “neural networks,” “computer-readable medium”) does not transform these mathematical operations into a practical application that improves computer technology itself. The claims merely use a computer as a tool to execute the abstract idea more quickly, which is insufficient for eligibility under Alice Corp. v. CLS Bank Int’l, 573 U.S. 208 (2014).

Further, Applicant argues that the Office Action fails to address Step 2A, Prong Two of the §101 analysis, including whether the claims, taken as a whole, integrate the abstract idea into a practical application that improves computer functioning or another technology/technical field (citing MPEP 2106.04(d)(1) and 2106.05(a)). Applicant contends that Step 2A, Prong Two requires evaluating the claim as a whole, not focusing on a single limitation; that the specification’s “Main Experiments” (¶¶65–71, Table 2) show improved performance (higher metrics), attributed to independent embeddings and a new loss function emphasizing CVR, with the overall MTMS model performing significantly better than baselines; that because the claims are computer-implemented, these improved results constitute a practical application that improves computer functioning; and that claims addressing user transactions (e.g., claims 3, 12, 16) show improvement to another technology/technical field (e.g., conversion in advertising/e-commerce).

However, the Examiner disagrees. The Office Action considered Step 2A, Prong Two in items #4 and #5 of the Office Action; see the language “when viewed in combination.” Taken as a whole, the claims do not integrate the abstract idea (mathematical/mental data-processing for ranking and recommendation) into a practical application that improves computer functioning or another technology/technical field. The claims recite generic ML operations (e.g., generating embeddings, combining vectors, computing predictions, training with a loss function) executed on generic computers. They do not claim a specific improvement to computer architecture, memory, processing, data structures, I/O, or a technological process.
Applicant’s “Main Experiments” (¶¶65–71; Table 2) show improved predictive accuracy and business metrics (e.g., conversion), but improvements to algorithmic results or commercial outcomes are not improvements to computer functionality or a technical field under Step 2A, Prong Two. The analysis evaluates each claim as a whole; it is not based on isolating a single limitation. Even in combination, the recited elements implement known data-processing steps without changing how the computer itself operates.

Applicant contends that the dependent claims did not receive full analysis. This is not correct. The Office Action explains that claims 2–7, 9–13, and 15–20 merely add limitations that further narrow the abstract idea recited in the independent claims and do not include additional elements indicative of integration into a practical application or that amount to significantly more under Step 2A, Prong Two. Accordingly, because no additional elements beyond the abstract idea are recited, Step 2A, Prong Two is not satisfied and the § 101 rejection is maintained.

Applicant argues with respect to the rejection under §103 that Gharibshah et al cannot be prior art since the publication is a survey. The Examiner asserts that although Gharibshah is a survey article, the cited portions contain specific, technical descriptions (embeddings, per-field embedding generation, combining embeddings across scenarios, cross-scene ranking neural networks, prediction outputs, etc.) that map to the steps recited in the claims. A publication in a journal can be used as prior art to reject a patent application if the publication was publicly available before the patent application's effective filing date; the publication only needs to be publicly available, and it does not matter whether it was widely read or even in a foreign language. See MPEP 2128.

In response to Applicant’s argument that the tasks of Gharibshah are different than the multi-tasks in the instant application: under the broadest reasonable interpretation consistent with the specification, “task” in the claimed training method refers to human user actions/events (e.g., impression→click, click→conversion), and the “results associated with multiple tasks” are the logged outcomes of those actions. In machine-learning practice, those same human actions become the supervised labels for distinct prediction objectives. So the term “prediction task” used in Gharibshah is not a different concept; it is the modeling counterpart of the human “task.” The prediction task exists precisely because the human task (click, conversion, engagement, rating) generated the labels. The claims do not require that “tasks” be performed by a human during training; rather, they require a computer-implemented method that “receiv[es] a training dataset … [with] results associated with multiple tasks” and trains a ranking model in an MTMS setting. Gharibshah describes training datasets populated by user events and labels (clicks, conversions, ratings) and then defines multiple supervised prediction tasks over those labels.
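The label construction described above can be made concrete. In the short sketch below, each logged event supplies one feature row and one label per prediction task; the field names and events are hypothetical illustrations, not data from the application or from Gharibshah:

```python
# Each logged event is one training example. The human actions recorded in
# the log (click, conversion) become the supervised labels for two distinct
# prediction tasks (CTR and CVR) defined over the same input fields.
log = [
    {"user": "u1", "item": "ad9", "scene": "search",  "clicked": 1, "converted": 0},
    {"user": "u2", "item": "ad3", "scene": "display", "clicked": 1, "converted": 1},
    {"user": "u3", "item": "ad9", "scene": "search",  "clicked": 0, "converted": 0},
]

features   = [(e["user"], e["item"], e["scene"]) for e in log]  # shared inputs
ctr_labels = [e["clicked"]   for e in log]  # impression -> click task
cvr_labels = [e["converted"] for e in log]  # click -> conversion task
```

The same feature rows serve both objectives; only the label column changes, which is the sense in which a human “task” and a modeling “prediction task” are two views of the same logged outcome.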
Applicant contends that “nowhere in these sentences, and nowhere in the cited section 3, does the undersigned find any hint or suggestion that the manner in which embeddings are generated is ‘independently for each task’,” again keeping in mind that in this context, “task” means a task carried out by a human user. Under the broadest reasonable interpretation, “independently for each task” does not require separate embedding tables per task; it can be satisfied by using multiple neural networks that produce task-specific feature representations from per-field inputs, even if an initial embedding layer is shared. Gharibshah describes exactly that in Section 3.

In response to Applicant arguing that Gharibshah does not teach the combining step: when embeddings from different fields/types/scenes are combined (concatenation/pooling/attention) into a single condensed vector, that is a combination “across multiple scenarios” under the claim’s broad language. The Examiner cited Section 3.2.2’s “compound embedding layer” as teaching this aggregation.

Applicant further argues the second generating step is not taught. For a §103 rejection, the reference need not have originated the method; it needs only to teach or suggest the elements. Whether Gharibshah is a survey is irrelevant to whether it can be used as prior art; what matters is whether its disclosure describes the claimed step or renders it obvious in combination with other art. Gharibshah does more than simply list citations: it describes the architecture of typical user-response prediction models (Figure 2, Section 3). It explains that embeddings are fed into prediction networks that output probabilities for user events (click, conversion, engagement, ratings). It catalogs multi-task frameworks (ESMM, PFD+MD, etc.) that are trained to output multiple predictions from the same combined embedding — exactly the kind of “multi-scene task predictions” in the claim. It identifies different scenarios (search, display, streaming, etc.) and different tasks (CTR, CVR, ranking, product rate) and shows that models can output predictions for each.

Regarding the argument for the obtaining step, the Examiner asserts the claim is satisfied by Gharibshah’s disclosure of multi-field, multi-scenario data (Sections 3.1 and 3.2, Figure 2); embedding and modeling pipelines that process this data; multiple output predictions such as CTR (click-through rate), CVR (conversion rate), and engagement (see Table 4 and Sections 2.2, 4.3); and multi-task learning frameworks (e.g., ESMM, PFD+MD) that jointly predict several user actions/events across different scenarios/platforms.

Regarding the argument for the training step, these limitations are explicitly taught by Yan et al at col 13, lines 9–13 and col 13, lines 13–26. Gharibshah introduces the concept of minimizing prediction error, so it makes sense to supplement the minimization of prediction error in Gharibshah with Yan.

Applicant argues with respect to the rejection of claim 8 that the Examiner has not provided any motivation to combine. Respectfully, the Examiner points the Applicant to the bottom of page 9 of the Non-Final Rejection dated 06/06/25 for the motivation.

In response to Applicant's argument that Wu is nonanalogous art, it has been held that a prior art reference must either be in the field of the inventor’s endeavor or, if not, then be reasonably pertinent to the particular problem with which the inventor was concerned, in order to be relied upon as a basis for rejection of the claimed invention. See In re Oetiker, 977 F.2d 1443, 24 USPQ2d 1443 (Fed. Cir. 1992).
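The architectural reading applied above (a shared initial embedding layer feeding multiple task-specific networks, whose outputs are concatenated into a single condensed vector) can be sketched in a few lines. This is a minimal PyTorch illustration under assumed dimensions; the layer sizes, field choices, and tower design are hypothetical, not the applicant’s model and not Gharibshah’s:

```python
import torch
import torch.nn as nn

NUM_IDS, EMB_DIM, NUM_TASKS = 1000, 16, 2

# One embedding table shared by all tasks (the "initial embedding layer").
shared_emb = nn.Embedding(NUM_IDS, EMB_DIM)

# Task-specific networks: each produces its own feature representation
# from the same per-field inputs ("independently for each task").
task_towers = nn.ModuleList(
    nn.Sequential(nn.Linear(2 * EMB_DIM, EMB_DIM), nn.ReLU())
    for _ in range(NUM_TASKS)
)

user_ids = torch.tensor([1, 2, 3])   # user field
item_ids = torch.tensor([9, 3, 9])   # item field

# Per-field embeddings, concatenated into one input vector per example.
fields = torch.cat([shared_emb(user_ids), shared_emb(item_ids)], dim=-1)

# Each tower computes a task-specific representation from the same inputs.
task_reprs = [tower(fields) for tower in task_towers]

# Concatenation into a single condensed vector, in the manner of a
# "compound embedding layer".
combined = torch.cat(task_reprs, dim=-1)   # shape: (3, NUM_TASKS * EMB_DIM)
```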
In this case, Wu is directed to machine learning using embeddings, similar to Gharibshah.

Regarding the rejection of claim 8, Applicant indicates that Wu does not teach the limitation. Even if Wu does not literally use the words “stop” or “condition” in paragraph 24, model training via iterative optimization inherently includes a halting criterion under the broadest reasonable interpretation: a convergence threshold, a maximum number of iterations/epochs, validation-based early stopping, or a preset time/compute budget. A POSITA would understand that “updating parameters” in training proceeds until one of these standard criteria is satisfied.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1 and 14 recite receiving a training dataset across multiple scenarios, the training dataset comprises input data in multiple fields across multiple scenarios and results associated with multiple tasks; generating embeddings independently for input data in each field for each task under each scenario; combining embeddings across the multiple scenarios to generate a combined embedding; generating multi-scene task predictions for the multiple tasks under the multiple scenarios; receiv[ing] the combined embedding to generate a multi-scene task prediction for one task under one scenario; obtaining an MTMS prediction based at least on each multi-scene task prediction; and training the model using an MTMS loss function, the MTMS loss function comprises at least loss terms associated with each task. The limitations above are processes that, under the broadest reasonable interpretation, fall into the “mental processes” grouping of abstract ideas (“concepts performed in the human mind by observation, evaluation, judgment, and opinion”); see MPEP § 2106.04(a)(2)(III).

Claim 8 recites initializing embeddings of each feature field for each task under each scenario in a multi-task and multi-scene (MTMS) setting; updating, until a stop condition is met, parameters of multiple neural networks within the ranking model with a training dataset to update embeddings across the multiple scenarios, the training dataset comprises input data in multiple fields under multiple scenarios and results associated with multiple tasks for each scenario; combining the updated embeddings across multiple tasks and across the multiple scenarios to generate a combined embedding for each task under each scenario; generating multi-scene task predictions, receiv[ing] one combined embedding for one task under one scenario to generate a multi-scene task prediction for the one task; and training the ranking model using an MTMS loss function, the MTMS loss function comprises at least loss terms associated with each task. The limitations above are processes that, under the broadest reasonable interpretation, fall into the “mental processes” grouping of abstract ideas (“concepts performed in the human mind by observation, evaluation, judgment, and opinion”); see MPEP § 2106.04(a)(2)(III).
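For concreteness, the training pattern recited in these claims (per-task prediction heads over a combined embedding, a loss with at least one term per task plus, in the dependent claims, a term for a joint prediction formed as the product of the task predictions, and iteration until a stop condition) can be sketched as follows. This is a minimal illustration under assumed data, loss weights, and halting criteria, in the style of ESMM-like multi-task training; it is not the claimed method itself:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical combined embeddings and per-task labels.
combined = torch.randn(64, 32)
y_click = torch.randint(0, 2, (64, 1)).float()            # task 1: click
y_conv = y_click * torch.randint(0, 2, (64, 1)).float()   # task 2: conversion (requires a click)

# One prediction head per task; both consume the same combined embedding.
heads = nn.ModuleList(
    nn.Sequential(nn.Linear(32, 1), nn.Sigmoid()) for _ in range(2)
)
opt = torch.optim.Adam(heads.parameters(), lr=1e-2)
bce = nn.BCELoss()

MAX_EPOCHS, LOSS_THRESHOLD = 200, 0.05   # standard halting criteria

for epoch in range(MAX_EPOCHS):          # stop condition: iteration cap
    p_click = heads[0](combined)
    p_conv = heads[1](combined)
    p_joint = p_click * p_conv            # joint prediction as a product (ESMM-style)

    # MTMS-style loss: one term per task, plus a term for the joint
    # prediction, all weighted equally here.
    loss = bce(p_click, y_click) + bce(p_conv, y_conv) + bce(p_joint, y_conv)

    opt.zero_grad()
    loss.backward()
    opt.step()

    if loss.item() < LOSS_THRESHOLD:      # stop condition: loss below threshold
        break
```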
This judicial exception is not integrated into a practical application because the additional elements are not indicative of integration into a practical application, since they are recited at a high level of generality. Additionally, while the limitations recite “a computer-implemented method” (claims 1 and 8) and “a non-transitory computer-readable medium or media comprising one or more sequences of instructions which, when executed by at least one processor, causes” (claim 14), the computer is merely used as a tool to practice the judicial exception. The limitations “at the ranking model” (claims 1 and 14), “using multiple neural networks within the ranking model” (claims 1 and 14), “using multiple cross-scene ranking neural networks within the ranking model” (claims 1, 8 and 14), and “each cross-scene ranking neural network” (claims 1 and 14) provide nothing more than mere instructions to implement an abstract idea on a generic computer. These additional elements also serve to provide a general link to a technical environment in which to carry out the judicial exception. Even when viewed in combination, these additional elements are still mere instructions to implement the judicial exception and a general link to a technological field (neural networks).

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons as presented above. The claims as a whole describe how to apply the concept of training a model using a loss function for unified ranking using “a neural network,” “ranking model,” “processors,” and “instructions,” using generic computers to apply the abstract idea on a computer. Moreover, the additional elements are known and conventional computing elements, as evidenced by the specification at ¶¶0073-0077, which describes these elements at a high level of generality. Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and a general tie to a technical field, which do not provide an inventive concept. Therefore, the claims are ineligible.

Dependent claims 2-7, 9-13, and 15-20 recite additional details that further narrow the previously recited abstract idea. There are no additional elements that are indicative of integration into a practical application; nor are there additional elements that amount to significantly more than the judicial exception. Thus, even when viewed as a whole, nothing in the claims adds significantly more to the abstract idea. Therefore, the claims are ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over “User Response Prediction in Online Advertising” (May 2021) by Gharibshah et al in view of Yan et al (US 11,842,738).
As per claims 1 and 14, Gharibshah et al discloses a computer-implemented method to train a ranking model for information recommendation in a multi-task and multi-scene (MTMS) setting comprising: receiving, at the ranking model, a training dataset across multiple scenarios, the training dataset comprises input data in multiple fields across multiple scenarios and results associated with multiple tasks (64:3, scalability; 64:8, section 3.1, fields including Gender, City, Age, ID; 64:15, section 4.3, para. 2, input dataset; 64:9, section 3.2, data logs from many online data provider services); generating, using multiple neural networks within the ranking model, embeddings independently for input data in each field for each task under each scenario (64:7-64:8, section 3, after pre-processing…data samples are described with a series of features (fields)…that are normally specified as binary user response value such as 1 for click, conversion, purchasing, and so on, and 0 otherwise); combining embeddings across the multiple scenarios to generate a combined embedding (64:10, section 3.2.2, the combination of different features in modeling lead to various compound embedding layer for input data to generate condensed feature representation); generating, using multiple cross-scene ranking neural networks within the ranking model, multi-scene task predictions for the multiple tasks under the multiple scenarios, each cross-scene ranking neural network receives the combined embedding to generate a multi-scene task prediction for one task under one scenario (64:7, section 3, the prediction task, it will output probability of users making an interaction (e.g. click) on items in the list; 64:12, predicting user response is defined as: given a pair of webpage j and ad k, the probability of response like a mouse click…; 64:24, Table 4); obtaining an MTMS prediction based at least on each multi-scene task prediction (64:7, section 3, the prediction task, it will output probability of users making an interaction (e.g. click) on items in the list).

Gharibshah et al does not disclose, however Yan discloses, training the ranking model using an MTMS loss function, the MTMS loss function comprises at least loss terms associated with each task (col 13, lines 9-26). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the training using the loss function of Yan with the response prediction method of Gharibshah et al to minimize errors in predictions.

As per claim 2, Gharibshah et al further discloses the computer-implemented method of claim 1 wherein the multiple fields comprise a user field and an item field (64:24, Table 4).

As per claims 3 and 16, Gharibshah et al further discloses the computer-implemented method of claim 1, wherein the multiple tasks comprise a first task promoting users to respond to recommended information and a second task promoting users to have a transaction corresponding to the recommended information (64:24, Table 4).

As per claims 4 and 17, Gharibshah et al further discloses the computer-implemented method of claim 1, wherein the MTMS prediction is a joint prediction as a product of each multi-scene task prediction (64:24, Table 4).

As per claims 5 and 18, Gharibshah et al discloses all the limitations of claims 4 and 17. Gharibshah et al does not disclose, however Yan does disclose, wherein the MTMS loss function further comprises a loss term associated with the joint prediction (col 13, lines 9-26).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the training using the loss function of Yan with the response prediction method of Gharibshah et al to minimize errors in predictions.

As per claims 6 and 19, Gharibshah et al discloses all the limitations of claims 5 and 18. Gharibshah et al does not disclose, however Yan et al discloses, the computer-implemented method of claim 5, wherein the loss terms associated with each task and the loss term associated with the joint prediction have the same weight in the MTMS loss function (col 11, lines 49-55). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the training using the loss function as having the same weight of Yan et al with the response prediction method of Gharibshah et al to minimize errors in predictions.

As per claims 7 and 20, Gharibshah et al further discloses the computer-implemented method of claim 1, wherein the multiple scenarios comprise two or more scenarios selected from a group of scenarios comprising news feed, video ranking, new ranking, recommendation ranking, and search engine (64:5-64:6, sections 2.2.1-2.2.3).

As per claim 15, Gharibshah et al further discloses the non-transitory computer-readable medium or media of claim 14 wherein the multiple combined embeddings are the same (64:10, section 3.2.2, the combination of different features in modeling lead to various compound embedding layer for input data to generate condensed feature representation).

Claims 8-13 are rejected under 35 U.S.C. 103 as being unpatentable over “User Response Prediction in Online Advertising” (May 2021) by Gharibshah et al in view of Yan et al (US 11,842,738) and Wu et al (US 2022/0253722).

As per claim 8, Gharibshah et al discloses a computer-implemented method for training a ranking model comprising: embeddings of each feature field for each task under each scenario in a multi-task and multi-scene (MTMS) setting (64:7-64:8, section 3, after pre-processing…data samples are described with a series of features (fields)…that are normally specified as binary user response value such as 1 for click, conversion, purchasing, and so on, and 0 otherwise); updating parameters of multiple neural networks within the ranking model with a training dataset to update embeddings across the multiple scenarios, the training dataset comprises input data in multiple fields under multiple scenarios and results associated with multiple tasks for each scenario (64:3, scalability; 64:8, section 3.1, fields including Gender, City, Age, ID; 64:15, section 4.3, para. 2, input dataset; 64:9, section 3.2, data logs from many online data provider services); combining the updated embeddings across multiple tasks and across the multiple scenarios to generate a combined embedding for each task under each scenario (64:10, section 3.2.2, the combination of different features in modeling lead to various compound embedding layer for input data to generate condensed feature representation); and generating, using multiple cross-scene ranking neural networks within the ranking model, multi-scene task predictions, each cross-scene ranking neural network receives one combined embedding for one task under one scenario to generate a multi-scene task prediction for the one task (64:7, section 3, the prediction task, it will output probability of users making an interaction (e.g. click) on items in the list; 64:12, predicting user response is defined as: given a pair of webpage j and ad k, the probability of response like a mouse click…; 64:24, Table 4).

Gharibshah et al does not disclose, however Yan et al discloses, training the ranking model using an MTMS loss function, the MTMS loss function comprises at least loss terms associated with each task (col 13, lines 9-26). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the training using the loss function of Yan with the response prediction method of Gharibshah et al to minimize errors in predictions.

Gharibshah et al in view of Yan et al does not disclose, however Wu et al discloses, initializing embeddings (¶0065) and updating parameters until a stop condition is met (¶0024). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Wu et al with the response prediction method of Gharibshah et al in view of Yan et al to minimize errors in predictions.

As per claim 9, Gharibshah et al further discloses the computer-implemented method of claim 8 wherein the embeddings of each feature field for each task under each scenario are updated separately and not shared across tasks during embedding updating (64:7-64:8, section 3, after pre-processing…data samples are described with a series of features (fields)…that are normally specified as binary user response value such as 1 for click, conversion, purchasing, and so on, and 0 otherwise).

As per claim 10, Gharibshah et al in view of Yan et al and Wu et al discloses the computer-implemented method of claim 8. Wu et al further discloses wherein the stop condition is a predetermined number of updating iterations, all training data being used, the multiple neural networks being converged, or a loss being less than a predetermined threshold (¶0024). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the teachings of Wu et al with the response prediction method of Gharibshah et al in view of Yan et al to minimize errors in predictions.

As per claim 11, Gharibshah et al further discloses the computer-implemented method of claim 8, wherein the multiple fields comprise a user field and an item field (64:24, Table 4).

As per claim 12, Gharibshah et al further discloses the computer-implemented method of claim 8, wherein the multiple tasks comprise a first task promoting users to respond to recommended information and a second task promoting users to have a transaction corresponding to the recommended information (64:24, Table 4).

As per claim 13, Gharibshah et al discloses the computer-implemented method of claim 12. Gharibshah et al does not disclose, however Yan et al discloses, wherein the MTMS loss function further comprises a loss term associated with a joint prediction, wherein the joint prediction is a product of each multi-scene task prediction (col 13, lines 9-26). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to combine the training using the loss function of Yan with the response prediction method of Gharibshah et al to minimize errors in predictions.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHANNON S CAMPBELL, whose telephone number is (571) 272-5587. The examiner can normally be reached Monday - Friday, 7am-3:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHANNON S CAMPBELL/
Supervisory Patent Examiner, Art Unit 3628

Prosecution Timeline

Oct 27, 2023
Application Filed
May 29, 2025
Non-Final Rejection — §101, §103
Sep 01, 2025
Response Filed
Nov 20, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 9663667: Electroless silvering ink
Granted May 30, 2017 (2y 5m to grant)

Patent 8880417: SYSTEM AND METHOD FOR ENSURING ACCURATE REIMBURSEMENT FOR TRAVEL EXPENSES
Granted Nov 04, 2014 (2y 5m to grant)

Patent 8856017: BOOKING METHOD AND SYSTEM
Granted Oct 07, 2014 (2y 5m to grant)

Patent 8843384: METHOD FOR SELECTING A SPATIAL ALLOCATION
Granted Sep 23, 2014 (2y 5m to grant)

Patent 8775222: System and Method for Improved Rental Vehicle Reservation Management
Granted Jul 08, 2014 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 31%
With Interview: 40% (+9.2%)
Median Time to Grant: 4y 8m
PTA Risk: Moderate
Based on 238 resolved cases by this examiner. Grant probability derived from career allow rate.
