Prosecution Insights
Last updated: April 19, 2026
Application No. 19/075,941

Artificial Intelligence Deliberative Assembly (AIDA)

Non-Final OA §102§103
Filed
Mar 11, 2025
Examiner
BOSTWICK, SIDNEY VINCENT
Art Unit
2124
Tech Center
2100 — Computer Architecture & Software
Assignee
Aqfer, Inc.
OA Round
3 (Non-Final)
52%
Grant Probability
Moderate
3-4
OA Rounds
4y 7m
To Grant
90%
With Interview

Examiner Intelligence

Grants 52% of resolved cases
52%
Career Allow Rate
71 granted / 136 resolved
-2.8% vs TC avg
Strong +38% interview lift
Without
With
+38.2%
Interview Lift
resolved cases with interview
Typical timeline
4y 7m
Avg Prosecution
68 currently pending
Career history
204
Total Applications
across all art units

Statute-Specific Performance

§101
24.4%
-15.6% vs TC avg
§103
40.9%
+0.9% vs TC avg
§102
12.0%
-28.0% vs TC avg
§112
21.9%
-18.1% vs TC avg
Black line = Tech Center average estimate • Based on career data from 136 resolved cases

Office Action

§102 §103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/5/2026 has been entered. Remarks This Office Action is responsive to Applicants' Amendment filed on February 5, 2026, in which claims 1, 12, 19 and 20 are currently amended. Claims 1-20 are currently pending. Response to Arguments The rejections to claims 1-20 under 35 U.S.C. § 101 are hereby withdrawn, as necessitated by applicant's amendments and remarks made to the rejections. Applicant’s arguments with respect to rejection of claims 1-20 under 35 U.S.C. 102/103 based on amendment have been considered and are persuasive. The argument is moot in view of a new ground of rejection set forth below. Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 1-6, 9, and 11 are rejected under U.S.C. §102(a)(1) as being anticipated by Du (“IMPROVING FACTUALITY AND REASONING IN LANGUAGE MODELS THROUGH MULTIAGENT DEBATE”, 2023) Regarding claim 1, Du teaches A method of generative Artificial Intelligence (AI), comprising: associating a set of machine learning (ML) models together, the set of machine learning models including a first model; ([p. 3] "we propose to generate answers subject to a multi-agent debate procedure between multiple instances of large language models. Given a question, multiple agents represented as copies of a large language model, generate answers to the question. Each response serves as a possible thought process or source of information which agents may re-examine to find consistent final answers") responsive to receipt of a prompt at the first model, generating a first response; ([p. 3] "Given a question, multiple agents represented as copies of a large language model, generate answers to the question" Question interpreted as synonymous with prompt. Answers interpreted as responses.) executing a deliberation among the set of ML models with respect to the first response generated by the first model([p. 3] "After initial responses are generated from different agents, we initiate a round of debate between agents." round of debate interpreted as synonymous with a deliberation) wherein executing the deliberation includes, in at least a first iteration, each given model of the set of ML models other than the first model receiving an input from a previous model and generating an output([p. 3] "Individual responses from other agents are concatenated and given as context to each agent, with each agent instructed to construct a new response based on such responses" Du explicitly states that the responses are concatenated from other agents responses to generate an output (a new response)) determining, based on the deliberation whether a consensus among the set of ML models has been reached; wherein the consensus is that each of the set of ML models concurs with a final response;([p. 4 §2.2] "we find that language models are able to converge on a single shared answer after multiple rounds of debate (Figure 4)" Converging on a single shared answer interpreted as reaching a consensus among the set of ML models. The single shared answer interpreted as the final response.) and responsive to determining that a consensus among the set of ML models has been reached, returning the final response to the prompt([p. 4 §2.2] "we find that language models are able to converge on a single shared answer after multiple rounds of debate (Figure 4)" [p. 4] "we took the majority answer across agents at the end of debate" Convergence on a single shared answer interpreted as consensus. Cannot take a majority answer if there is no consensus.) wherein the output from each given model has a same structured data object format, at least in part ensuring that the deliberation continues without disruption until the consensus has been reached([Abstract] "Our approach may be directly applied to existing black-box models and uses identical procedure and prompts for all tasks we investigate." [p. 8] "we use the same prompts for all agents." [p. 4] "This resultant consensus prompt may then be repeatedly given, using the updated responses of each agent" See also Appendix where the agents outputs are all a same structured text format. Du explicitly states that the deliberation continues repeatedly and without disruption until convergence.). Regarding claim 2, Du teaches The method as described in claim 1 wherein the ML models are large language models (LLMs)(Du [p. 1] "multiple instances of a language model first generate individual candidate answers to a query"). Regarding claim 3, Du teaches The method as described in claim 1 wherein the set of ML models comprises at least first and second large language models (LLMs) that differ from one another(Du [p. 9] "We further assess the impact of using two different language models, where we ask chatGPT and Bard (Pichai, 2023) language models to debate with each other" See also FIG. 13). Regarding claim 4, Du teaches The method as described in claim 1 wherein the set of ML models comprises at least first and second instances of a same large language model (LLM)(Du [p. 3] "multiple instances of large language models. Given a question, multiple agents represented as copies of a large language model, generate answers to the question."). Regarding claim 5, Du teaches The method as described in claim 1 wherein the set of ML models are associated together in a loop, and wherein the deliberation comprises a given model receiving an input from a previous model in the loop, and wherein the given model generates an output that is then provided to a next model in the loop(Du [p. 3] "Individual responses from other agents are concatenated and given as context to each agent, with each agent instructed to construct a new response based on such responses […] We iteratively repeat this debate procedure over multiple rounds"). Regarding claim 6, Du teaches The method as described in claim 1 wherein the set of ML models are associated together in a sequence(Du [p. 3] "Individual responses from other agents are concatenated and given as context to each agent, with each agent instructed to construct a new response based on such responses […] We iteratively repeat this debate procedure over multiple rounds" iterative repetitions interpreted as sequence. Du also explicitly states that other model responses are compiled such that the model prediction is explicitly sequenced.). Regarding claim 9, Du teaches The method as described in claim 1, further including specifying the set of ML models in a template(Du [p. 2] "We use the same methodology and prompt templates for all our tasks" Same methodology in Du interpreted as template specifying the set of ML models). Regarding claim 11, Du teaches The method as described in claim 1, further including generating a change log associated with the consensus, the change log identifying a change to at least one given response(Du [p. 25] "can you provide an updated answer […] the updated answer is -122"). Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 7 and 8 are rejected under U.S.C. §103 as being unpatentable over the combination of Du and Shukla ("Generative AI Approach to Distributed Summarization of Financial Narratives", 2023). Regarding claim 7, Du teaches The method as described in claim 6. However, Du doesn't explicitly teach further including associating the ML models with a Retrieval Augmented Generation (RAG) retriever. Shukla, in the same field of endeavor, teaches associating the ML models with a Retrieval Augmented Generation (RAG) retriever ([p. 2875 §IV] "The few-shot examples were generated using the Retrieval-Augmented Generation (RAG) technique from a annotated training dataset created by DiMSum [...] We submitted three systems for English documents: 1) DiMSum, 2) DiMSum-GenAI-1: In this system, narrative sections were classified using the DiMSum-based Logistic Regression classifier, and summarization was performed by the LLM. 3) DiMSum-GenAI-2: In this system, narrative sections were classified using the LLM, and summarization was also executed by the LLM"). Du as well as Shukla are directed towards LLM ensembles. Therefore, Du as well as Shukla are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Du with the teachings of Shukla by gathering input using Retrieval Augmented Generation. Shukla provides as additional motivation for combination that Retrieval Augmented Generation (RAG) is used ([p. 2875 §IVC] "to enhance the model’s reliability and factual consistency by incorporating supplementary knowledge. RAG operates by taking an input and retrieving a collection of pertinent examples, which are then concatenated as context to the original prompt"). Regarding claim 8, the combination of Du, and Shukla teaches The method as described in claim 7, wherein the deliberation comprises a given model receiving an input from a previous model in the loop, and wherein the given model generates an output that is then provided to a next model in the loop, (Du [p. 3] "Individual responses from other agents are concatenated and given as context to each agent, with each agent instructed to construct a new response based on such responses" [p. 2] "The method is also orthogonal to other model generation improvements such as retrieval") and wherein the input also includes a context provided by the RAG retriever(Shukla [p. 2875 §IV] "The few-shot examples were generated using the Retrieval-Augmented Generation (RAG) technique from a annotated training dataset created by DiMSum [...] We submitted three systems for English documents: 1) DiMSum, 2) DiMSum-GenAI-1: In this system, narrative sections were classified using the DiMSum-based Logistic Regression classifier, and summarization was performed by the LLM. 3) DiMSum-GenAI-2: In this system, narrative sections were classified using the LLM, and summarization was also executed by the LLM"). Claim 10 is rejected under U.S.C. §103 as being unpatentable over the combination of Du and Andoni (US20190130277A1). Regarding claim 10, Du teaches The method as described in claim 9. However, Du doesn't explicitly teach wherein the template defines an array that specifies the ML models, an order of the ML models, and one or more examples. Andoni, in the same field of endeavor, teaches the template defines an array that specifies the ML models, an order of the ML models, and one or more examples ([¶0057] "The ensembler 172 may be configured to receive an input 510 and to generate an ensembler output 514 based on the input 510. To illustrate, the ensembler 172 may be configured to provide at least a portion of the input 510 to each model of the array of models 502 (e.g., the subset of models that are aggregated to form the ensembler 172) to generate a plurality of intermediate outputs 512. For example, the input 510 may include a data set, and each model (e.g., neural network) of the array of models 502 may receive the data set and generate a result (e.g., a classification) by passing the input set through the corresponding neural network. In this example, the classifications (or other results) generated by the array of models 502 correspond to the plurality of intermediate outputs 512." See FIG. 5 which shows a template defining an ordered array of models 502 and one or more examples.). Du as well as Andoni are directed towards machine learning ensembles. Therefore, Du as well as Andoni are reasonably pertinent analogous art. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Du with the teachings of Andoni by having the plurality of models used in the ensemble in an ordered array in a template. While it would be obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that a neural network ensemble could be a configurable template where the plurality of models is represented as an array, this is explicitly reinforced by Andoni in FIG. 5. Andoni also provides as additional motivation for combination ([¶0042] "the user may provide input to constrain allowed model topologies. To illustrate, the model may be constrained to include no more than a specified number of input nodes, no more than a specified number of hidden layers, or no recurrent loops"). Claims 12-14 and 18 are rejected under U.S.C. §103 as being unpatentable over the combination of Du and Modi (US20210357835A1). Regarding claim 12, Du teaches The method as described in claim 1. However, Du doesn't explicitly teach a hardware processor; and computing memory holding computer program instructions executed by the hardware processor, the computer program instructions configured to provide inferencing by A Software-as-a-Service (SaaS) computing platform, comprising:. Modi, in the same field of endeavor, teaches a hardware processor; and computing memory holding computer program instructions executed by the hardware processor, the computer program instructions configured to provide inferencing by([¶0027] "System 210 may include memory 214 for storing information and instructions for execution by processor 222. Memory 214 may contain various components for retrieving, presenting, modifying, and storing data. For example, memory 214 may store software modules that provide functionality when executed by processor 222. The modules may include an operating system 215 that provides operating system functionality for system 210. The modules can include an operating system 215, prediction engine 216 configured to predict resource deployment parameters, as well as other applications modules 218") A Software-as-a-Service (SaaS) computing platform, comprising: ([¶0015] "Embodiments generate resource deployment predictions using an ensemble machine learning model. For example, enterprises can deploy a mix of resources in a variety of circumstances, such as software as a service (“SaaS”) tools"). Du as well as Modi are directed towards machine learning model ensembles. Therefore, Du as well as Modi are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Du with the teachings of Modi by using the ensemble as part of a Software-as-a-Service system. Modi provides as additional motivation for combination ([¶0019] "parameters for how a SaaS tool can be used with existing systems can be generated. Embodiments predict resource deployment parameters using the ensemble machine learning model with a likelihood of success, thus reducing the effort required to deploy enterprise resources."). Regarding claim 13, the combination of Du, and Modi teaches The SaaS computing platform as described in claim 12, wherein the ML models are large language models (LLMs)(Du [p. 1] "multiple instances of a language model first generate individual candidate answers to a query"). Regarding claim 14, the combination of Du, and Modi teaches The SaaS computing platform as described in claim 12, wherein the set of ML models comprises at least first and second large language models that are one of: a same language model, or different language models(Du [p. 9] "We further assess the impact of using two different language models, where we ask chatGPT and Bard (Pichai, 2023) language models to debate with each other" See also FIG. 13). Regarding claim 18, the combination of Du, and Modi teaches The SaaS computing platform as described in claim 12, wherein the computer program instructions configured to provide inferencing further include program code configured to provide one or more templates configured to receive input that configures the deliberation(Du [p. 2] "We use the same methodology and prompt templates for all our tasks" Same methodology in Du interpreted as template configuring the deliberation). Claims 15 and 16 are rejected under U.S.C. §103 as being unpatentable over the combination of Du and Modi and Gan ("Model-as-a-Service (MaaS): A Survey", 2023). Regarding claim 15, the combination of Du, and Modi teaches The SaaS computing platform as described in claim 12. However, the combination of Du, and Modi doesn't explicitly teach, further includes one or more Application Programming Interfaces (APIs). Gan, in the same field of endeavor, teaches The SaaS computing platform as described in claim 12, further includes one or more Application Programming Interfaces (APIs)([p. 1] "MaaS allows users to leverage pre-trained ML models and algorithms through simple interfaces, application programming interfaces (APIs), or software development kits (SDKs). MaaS allows users to access the functions of the large model through a calling API without the need to train and maintain complex models themselves"). The combination of Du and Modi as well as Gan are directed towards LLM ensembles. Therefore, the combination of Du and Modi as well as Gan are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Du and Modi with the teachings of Gan by using an API to interface with the LLM ensemble. Gan provides as additional motivation for combination ([p. 4] "API and development tools are key technologies that make sure MaaS services are easy to use. MaaS platforms provide a set of APIs for developers to dumb down the use and integration of models. These APIs provide endpoints for making requests and receiving predictions or insights from the models"). This motivation for combination also applies to the remaining claims which depend on this combination. Regarding claim 16, the combination of Du, Modi, and Gan teaches The SaaS computing platform as described in claim 15, wherein the set of ML models are associated together in one of: a loop, and a sequence,(Du [p. 3] "Individual responses from other agents are concatenated and given as context to each agent, with each agent instructed to construct a new response based on such responses […] We iteratively repeat this debate procedure over multiple rounds" iterative repetitions interpreted as sequence. Du also explicitly states that other model responses are compiled such that the model prediction is explicitly sequenced.) and wherein the deliberation comprises using the one or more APIs to interface to the set of ML models(Gan [p. 1] "MaaS allows users to leverage pre-trained ML models and algorithms through simple interfaces, application programming interfaces (APIs), or software development kits (SDKs). MaaS allows users to access the functions of the large model through a calling API without the need to train and maintain complex models themselves"). Claim 17 is rejected under U.S.C. §103 as being unpatentable over the combination of Du and Modi and Gan and Shukla. Regarding claim 17, the combination of Du, Modi, and Gan teaches The SaaS computing platform as described in claim 15. However, the combination of Du, Modi, and Gan doesn't explicitly teach further including associating at least one of the ML models with a Retrieval Augmented Generation (RAG) retriever. Shukla, in the same field of endeavor, teaches associating at least one of the ML models with a Retrieval Augmented Generation (RAG) retriever ([p. 2875 §IV] "The few-shot examples were generated using the Retrieval-Augmented Generation (RAG) technique from a annotated training dataset created by DiMSum [...] We submitted three systems for English documents: 1) DiMSum, 2) DiMSum-GenAI-1: In this system, narrative sections were classified using the DiMSum-based Logistic Regression classifier, and summarization was performed by the LLM. 3) DiMSum-GenAI-2: In this system, narrative sections were classified using the LLM, and summarization was also executed by the LLM"). The combination of Wu, Modi, and Ganas well as Shukla are directed towards LLM ensembles. Therefore, the combination of Wu, Modi, and Ganas well as Shukla are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of the combination of Wu, Modi, and Ganwith the teachings of Shukla by gathering input using Retrieval Augmented Generation. Shukla provides as additional motivation for combination that Retrieval Augmented Generation (RAG) is used ([p. 2875 §IVC] "to enhance the model’s reliability and factual consistency by incorporating supplementary knowledge. RAG operates by taking an input and retrieving a collection of pertinent examples, which are then concatenated as context to the original prompt"). Claim 19 is rejected under U.S.C. §103 as being unpatentable over the combination of Du and Smit (“Should we be going MAD? A Look at Multi-Agent Debate Strategies for LLMs, 2023). Regarding claim 19, Du teaches The method as described in claim 1. However, Du doesn't explicitly teach, wherein the structured data object format is structured Javascript Object Notation (JSON). Smit, in the same field of endeavor, teaches The method as described in claim 1, wherein the structured data object format is structured Javascript Object Notation (JSON). ([p. 21] "Please strictly output in JSON format […] please output your answer in json format, with the format as follows:"). Du as well as Smit are directed towards deliberative multiagent LLM debate system. Therefore, Du as well as Smit are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Du with the teachings of Smit by explicitly prompting for JSON output. One of ordinary skill in the art would recognize JSON as a structured machine-parsable language for inter-agent communication. Smit provides as additional motivation for combination ([p. 21] “You, as the moderator, will evaluate both sides’ answers and determine your preference for an answer candidate. Please summarize your reasons for supporting affirmative/negative side and give the final answer that you think is correct to conclude the debate. Now please output your answer in json format” [p. 1] “We build on these results to offer insights into improving debating strategies, such as adjusting agent agreement levels, which can significantly enhance performance and even surpass all other non-debate protocols”). Claim 20 is rejected under U.S.C. §103 as being unpatentable over the combination of Du and Modi and in further view of Smit. Regarding claim 20, the combination of Du, and Modi teaches The SaaS computing platform as described in claim 12. However, the combination of Du, and Modi doesn't explicitly teach, wherein the structured data object format is structured Javascript Object Notation (JSON). Smit, in the same field of endeavor, teaches the structured data object format is structured Javascript Object Notation (JSON).([p. 21] "Please strictly output in JSON format […] please output your answer in json format, with the format as follows:"). Du as well as Smit are directed towards deliberative multiagent LLM debate system. Therefore, Du as well as Smit are analogous art in the same field of endeavor. It would have been obvious before the effective filing date of the claimed invention to combine the teachings of Du with the teachings of Smit by explicitly prompting for JSON output. One of ordinary skill in the art would recognize JSON as a structured machine-parsable language for inter-agent communication. Smit provides as additional motivation for combination ([p. 21] “You, as the moderator, will evaluate both sides’ answers and determine your preference for an answer candidate. Please summarize your reasons for supporting affirmative/negative side and give the final answer that you think is correct to conclude the debate. Now please output your answer in json format” [p. 1] “We build on these results to offer insights into improving debating strategies, such as adjusting agent agreement levels, which can significantly enhance performance and even surpass all other non-debate protocols”). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Chan (“CHATEVAL: TOWARDS BETTER LLM-BASED EVALUATORS THROUGH MULTI-AGENT DEBATE”, 2023) is directed towards multi-agent (ensemble) LLM debate. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SIDNEY VINCENT BOSTWICK whose telephone number is (571)272-4720. The examiner can normally be reached M-F 7:30am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Miranda Huang can be reached on (571)270-7092. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SIDNEY VINCENT BOSTWICK/Examiner, Art Unit 2124
Read full office action

Prosecution Timeline

Mar 11, 2025
Application Filed
Jul 03, 2025
Non-Final Rejection — §102, §103
Oct 08, 2025
Response Filed
Nov 03, 2025
Final Rejection — §102, §103
Feb 05, 2026
Request for Continued Examination
Feb 17, 2026
Response after Non-Final Action
Feb 25, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561604
SYSTEM AND METHOD FOR ITERATIVE DATA CLUSTERING USING MACHINE LEARNING
2y 5m to grant Granted Feb 24, 2026
Patent 12547878
Highly Efficient Convolutional Neural Networks
2y 5m to grant Granted Feb 10, 2026
Patent 12536426
Smooth Continuous Piecewise Constructed Activation Functions
2y 5m to grant Granted Jan 27, 2026
Patent 12518143
FEEDFORWARD GENERATIVE NEURAL NETWORKS
2y 5m to grant Granted Jan 06, 2026
Patent 12505340
STASH BALANCING IN MODEL PARALLELISM
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.
Powered by AI — typically takes 5-10 seconds

Prosecution Projections

3-4
Expected OA Rounds
52%
Grant Probability
90%
With Interview (+38.2%)
4y 7m
Median Time to Grant
High
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.

Sign in with your work email

Enter your email to receive a magic link. No password needed.

Personal email addresses (Gmail, Yahoo, etc.) are not accepted.

Free tier: 3 strategy analyses per month