Prosecution Insights
Last updated: April 19, 2026
Application No. 18/216,281

Generative Artificial Intelligence as a Personal Task Generator to Complete Objectives

Non-Final OA (§101, §103)
Filed: Jun 29, 2023
Examiner: ANDERSON, SCOTT C
Art Unit: 3694
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: State Farm Mutual Automobile Insurance Company
OA Round: 1 (Non-Final)

Grant Probability: 58% (Moderate)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 7m
Grant Probability With Interview: 89%

Examiner Intelligence

Grants 58% of resolved cases.

Career Allow Rate: 58% (595 granted / 1,024 resolved; +6.1% vs TC avg)
Interview Lift: +30.9% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 7m avg prosecution; 38 applications currently pending
Career History: 1,062 total applications across all art units

Statute-Specific Performance

§101: 36.2% (-3.8% vs TC avg)
§103: 31.5% (-8.5% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 1,024 resolved cases.
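As a consistency check, the Tech Center baseline implied by each row can be backed out of the figures above. This is a minimal sketch assuming, as the layout suggests but does not state, that each displayed delta is simply the examiner's rate minus the TC average:

```python
# Examiner rate per rejection statute, with the displayed delta
# versus the Tech Center average (both in percent).
statute_stats = {
    "101": (36.2, -3.8),
    "103": (31.5, -8.5),
    "102": (12.1, -27.9),
    "112": (17.7, -22.3),
}

# Back out the implied Tech Center average: examiner_rate - delta.
tc_averages = {
    statute: round(rate - delta, 1)
    for statute, (rate, delta) in statute_stats.items()
}

for statute, avg in tc_averages.items():
    print(f"section {statute}: implied TC average ~= {avg}%")
```

Under that additive reading, all four statutes imply the same 40.0% baseline, which is at least internally consistent with a single TC-wide estimate.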

Office Action

§101 §103
DETAILED ACTION

This Office action is in reply to application no. 18/216,281, filed 29 June 2023. Claims 1-20 are pending and are considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims lie within statutory categories of invention, as each is directed to a system (machine), method (process) or medium storing non-transitory instructions (manufacture). The claim(s) recite(s) data gathering (receiving a request from a user), sending information to an external object (an ML model) in no particular way but simply for a particular purpose (in order for the model to divide an objective into tasks and create instructions for performing the tasks), and providing output (communicating the instructions to a user). First, as the entire point of this is to provide instructions to a user, it recites certain methods of organizing human activity; see MPEP § 2106.04(a). Second, in the absence of computers, these are steps that can be performed mentally and verbally: a clerk at a business can receive a verbal inquiry from a customer as to how to perform a certain process, can mentally determine the required steps, and can communicate the steps to the customer, also verbally. None of this presents any practical difficulty and none requires any technology at all.
This judicial exception is not integrated into a practical application because aside from the bare inclusion of a generic computer, and communicating with an external (but unclaimed) device capable of machine learning, nothing is done beyond what was set forth above, which does not go beyond, at most, generally linking the abstract idea to the technological environment of generic, AI-enabled computers. See MPEP § 2106.05(h). As the claims only manipulate data concerning instructions for how a user should perform some type of process, they do not improve the “functioning of a computer” or of “any other technology or technical field”. See MPEP § 2106.05(a). They do not apply the abstract idea “with, or by use of a particular machine”, MPEP § 2106.05(b), as the below-cited Guidance is clear that a generic computer is not the particular machine envisioned. They do not effect a “transformation or reduction of a particular article to a different state or thing”, MPEP § 2106.05(c). First, such data, being intangible, are not a particular article at all. Second, the claimed manipulation is neither transformative nor reductive; as the courts have pointed out, in the end, data are still data. They do not apply the abstract idea “in some other meaningful way beyond generally linking [it] to a particular technological environment”, MPEP § 2106.05(e), as the lack of technical and algorithmic detail in the claims is so as not to go beyond such a general linkage.

The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional claim limitations, considered individually and as an ordered combination, are insufficient to elevate an otherwise-ineligible claim. Claim 1, which has the most, includes a processor and memory storing instructions. These elements are recited at a high degree of generality and the specification is clear, ¶ 164, that nothing more than a “general-purpose processor” is required. It only performs generic computer functions of nondescriptly manipulating information and sharing information with persons and/or other devices. Generic computers performing generic computer functions, without an inventive concept, do not amount to significantly more than the abstract idea. The type of information being manipulated does not impose meaningful limitations or render the idea less abstract. The use of machine learning is not positively claimed as being within the scope of any embodiment, but even if it were otherwise, in light of Recentive[1], the use of known machine learning techniques in a new data environment is not per se sufficient to confer patent eligibility. The claim elements when considered as an ordered combination – that is, a generic computer performing a chronological sequence of abstract steps while, at most, making use of existing AI techniques – do nothing more than when they are analyzed individually.

The other independent claims are simply different embodiments but are likewise directed to a generic computer performing, essentially, the same process. The dependent claims further do not amount to significantly more than the abstract idea: claims 2-4, 9-11 and 16-18 are simply further descriptive of the type of information being manipulated, and claims 5-7, 12-14, 19 and 20 simply recite further input and output. The claims are not patent eligible. For further guidance please see MPEP § 2106.03 – 2106.07(c) (formerly referred to as the “2019 Revised Patent Subject Matter Eligibility Guidance”, 84 Fed. Reg. 50, 55 (7 January 2019)).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 8 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Leeds et al. (U.S. Patent No. 11,989,636, filed 20 July 2022) in view of Kim (U.S. Publication No. 2007/0233535). In-line citations are to Leeds.

With regard to Claim 1: Leeds teaches: A computer system for personalized planning, the computer system comprising: one or more processors; [Col. 14, line 30; processors are used] a memory [Col. 24, line 63; “memory”] storing executable instructions thereon that, when executed by the one or more processors, [Col. 28, line 7; a “network of software” is provided in, line 5, a “Machine Learning Unit”] cause the one or more processors to: receive from a user a request [Col. 29, lines 4, 7; a “participant” may make “requests”] for personalized assistance with an objective; send an identification of the user and a prompt for the personalized assistance with the objective to a machine learning (ML) chatbot to cause an ML model… [Col. 23, lines 37-39; a “Conversational Personal Digital Assistant” may perform “the function of a helper” and assist with “scheduling” using a CCAI which, see abstract, uses “units of machine learning”; Col. 9, line 32; it includes a “prompter”; Col. 2, line 48; it uses “chatbot technology” to perform the communication steps; Col. 23, line 35; it maintains “participant identity info”] and generate personalized instructions… [Col. 31, line 48; it provides “education and instruction”] receive the personalized instructions… from the ML chatbot, and communicate the personalized instructions… to the user. [Col. 32, line 2; the information is provided using “conversational systems”]

Leeds does not explicitly teach divide the objective into one or more discrete steps, or that instructions are for performing the one or more discrete steps, but it is known in the art. Kim teaches a business process management system [title] for “scheduling”. [abstract] It works by “dividing a project into tasks” and scheduling the tasks to specific users. [claim 8] The steps are performed based on user requests, [0015] and the system may manage instructions. [0066] Kim and Leeds are analogous art as each is directed to electronic means for managing instructions and requests and providing schedules.

It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Kim with that of Leeds in order to improve user convenience, as taught by Kim; [0010] further, it is simply a substitution of one known part for another with predictable results, simply providing information at the task level as in Kim rather than the level of Leeds; the substitution produces no new and unexpected result.

In this and the subsequent claims, that instructions provided to a user are “for performing the one or more discrete steps” consists entirely of nonfunctional printed matter which bears no functional relation to the substrate and so is considered but given no patentable weight. The ML chatbot and ML model are not positively claimed as being within the scope of the invention so limitations ascribed to them are considered but given no patentable weight. The references are provided for the purpose of compact prosecution.
With regard to Claim 8: Leeds teaches: A computer-implemented method for personalized planning, the method comprising: receiving from a user a request [Col. 29, lines 4, 7; a “participant” may make “requests”] for personalized assistance with an objective; sending an identification of the user and a prompt for the personalized assistance with the objective to a machine learning (ML) chatbot to cause an ML model… [Col. 23, lines 37-39; a “Conversational Personal Digital Assistant” may perform “the function of a helper” and assist with “scheduling” using a CCAI which, see abstract, uses “units of machine learning”; Col. 9, line 32; it includes a “prompter”; Col. 2, line 48; it uses “chatbot technology” to perform the communication steps; Col. 23, line 35; it maintains “participant identity info”] and generate personalized instructions… [Col. 31, line 48; it provides “education and instruction”] receiving the personalized instructions… from the ML chatbot, and communicating the personalized instructions… to the user. [Col. 32, line 2; the information is provided using “conversational systems”]

Leeds does not explicitly teach divide the objective into one or more discrete steps, or that instructions are for performing the one or more discrete steps, but it is known in the art. Kim teaches a business process management system [title] for “scheduling”. [abstract] It works by “dividing a project into tasks” and scheduling the tasks to specific users. [claim 8] The steps are performed based on user requests, [0015] and the system may manage instructions. [0066] Kim and Leeds are analogous art as each is directed to electronic means for managing instructions and requests and providing schedules.

It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Kim with that of Leeds in order to improve user convenience, as taught by Kim; [0010] further, it is simply a substitution of one known part for another with predictable results, simply providing information at the task level as in Kim rather than the level of Leeds; the substitution produces no new and unexpected result.

With regard to Claim 15: Leeds teaches: A computer readable storage medium storing non-transitory computer readable instructions for personalized planning, wherein the instructions when executed on one or more processors [Col. 14, line 30; processors are used; Col. 24, line 63; “memory”; Col. 28, line 7; a “network of software” is provided in, line 5, a “Machine Learning Unit”] cause the one or more processors to: receive from a user a request [Col. 29, lines 4, 7; a “participant” may make “requests”] for personalized assistance with an objective; send an identification of the user and a prompt for the personalized assistance with the objective to a machine learning (ML) chatbot to cause an ML model… [Col. 23, lines 37-39; a “Conversational Personal Digital Assistant” may perform “the function of a helper” and assist with “scheduling” using a CCAI which, see abstract, uses “units of machine learning”; Col. 9, line 32; it includes a “prompter”; Col. 2, line 48; it uses “chatbot technology” to perform the communication steps; Col. 23, line 35; it maintains “participant identity info”] and generate personalized instructions… [Col. 31, line 48; it provides “education and instruction”] receive the personalized instructions… from the ML chatbot, and communicate the personalized instructions… to the user. [Col. 32, line 2; the information is provided using “conversational systems”]

Leeds does not explicitly teach divide the objective into one or more discrete steps, or that instructions are for performing the one or more discrete steps, but it is known in the art. Kim teaches a business process management system [title] for “scheduling”. [abstract] It works by “dividing a project into tasks” and scheduling the tasks to specific users. [claim 8] The steps are performed based on user requests, [0015] and the system may manage instructions. [0066] Kim and Leeds are analogous art as each is directed to electronic means for managing instructions and requests and providing schedules.

It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Kim with that of Leeds in order to improve user convenience, as taught by Kim; [0010] further, it is simply a substitution of one known part for another with predictable results, simply providing information at the task level as in Kim rather than the level of Leeds; the substitution produces no new and unexpected result.

Claim(s) 2, 3, 9, 10, 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Leeds et al. in view of Kim further in view of Speasl et al. (U.S. Publication No. 2021/0312561). Claims 2, 9 and 16 are similar so are analyzed together.

With regard to Claim 2: The computer system of claim 1, wherein the user is an insurance policyholder, the objective comprises filing an insurance claim, and the personalized instructions comprise loss documentation steps.

With regard to Claim 9: The computer-implemented method of claim 8, wherein the user is an insurance policyholder, the objective comprises filing an insurance claim, and the personalized instructions comprise loss documentation steps.
With regard to Claim 16: The computer readable storage medium of claim 15, wherein the user is an insurance policyholder, the objective comprises filing an insurance claim, and the personalized instructions comprise loss documentation steps.

Leeds and Kim teach the system of claim 1, method of claim 8, and medium of claim 15, but do not explicitly teach this data, but in addition to being of no patentable significance as nonfunctional, descriptive language, it is known in the art. Speasl teaches an authentication system [title] that may “assist in [a] user filing an insurance claim in response to a loss”. [0032] A user may “send other supporting documents” such as “estimates of financial loss, engineering reports, and police reports from [a] mobile application”. [id.] The “document produced” may be “an insurance claim document”. [0088] The “user submitted forms” may be viewable on a “web portal”. [0068] The system may capture “digital media of damages” from an “insurance claim adjuster”. [0033] Speasl and Leeds are analogous art as each is directed to electronic means for conducting dialogues with users.

It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Speasl with that of Leeds and Kim in order to ensure accuracy of data, as taught by Speasl; [0002] further, it is simply a substitution of known parts for others with predictable results, simply interpreting data in the manner of Speasl rather than that of Leeds; the substitution produces no new and unexpected result.

With regard to Claim 3: The computer system of claim 1, wherein the user is an insurance claims adjuster, the objective comprises assessing an insurance claim, and the personalized instructions comprise damage documentation and cost estimation steps. [Speasl, as cited above in regard to claim 2] This claim is not patentably distinct from claim 1 as it consists entirely of nonfunctional, descriptive language, disclosing at most human interpretation of data but which imparts neither structure nor functionality to the claimed embodiment. The reference is provided for the purpose of compact prosecution.

With regard to Claim 10: The computer-implemented method of claim 8, wherein the user is an insurance claims adjuster, the objective comprises assessing an insurance claim, and the personalized instructions comprise damage documentation and cost estimation steps. [Speasl, as cited above in regard to claim 9] This claim is not patentably distinct from claim 8 as it consists entirely of nonfunctional, descriptive language, disclosing at most human interpretation of data but which imparts neither structure nor functionality to the claimed embodiment. The reference is provided for the purpose of compact prosecution.

With regard to Claim 17: The computer readable storage medium of claim 15, wherein the user is an insurance claims adjuster, the objective comprises assessing an insurance claim, and the personalized instructions comprise damage documentation and cost estimation steps. [Speasl, as cited above in regard to claim 16] This claim is not patentably distinct from claim 15 as it consists entirely of nonfunctional, descriptive language, disclosing at most human interpretation of data but which imparts neither structure nor functionality to the claimed embodiment. The reference is provided for the purpose of compact prosecution.

Claim(s) 4, 11 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Leeds et al. in view of Kim further in view of Singh et al. (U.S. Patent No. 11,538,112). These claims are similar so are analyzed together.
With regard to Claim 4: The computer system of claim 1, wherein the objective comprises a completion deadline and the personalized instructions comprise target dates for completing the one or more discrete steps.

With regard to Claim 11: The computer-implemented method of claim 8, wherein the objective comprises a completion deadline and the personalized instructions comprise target dates for completing the one or more discrete steps.

With regard to Claim 18: The computer readable storage medium of claim 15, wherein the objective comprises a completion deadline and the personalized instructions comprise target dates for completing the one or more discrete steps.

Leeds and Kim teach the system of claim 1, method of claim 8, and medium of claim 15, but do not explicitly teach this data, but in addition to being of no patentable significance as nonfunctional, descriptive language, it is known in the art. Singh teaches a machine learning method for processing data related to anticipated payments. [abstract] The payments may be related to “insurance claim forms”. [Col. 4, line 29] Individual dates are recorded and a “deadline date” is provided. [Col. 4, lines 64-65; Col. 5, line 2] The system may “suggest further action(s)” that a user may take in order to “increase likelihood of claim success”. [Col. 2, lines 21, 23] Singh and Leeds are analogous art as each is directed to the use of machine learning and providing suggestions or instructions to users.

It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Singh with that of Leeds and Kim in order to improve the likelihood of success, as taught by Singh; further, it is simply a substitution of known parts for others with predictable results, simply interpreting data in the manner of Singh rather than that of Leeds; the substitution produces no new and unexpected result.

Claim(s) 5-7, 12-14, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Leeds et al. in view of Kim further in view of Mandal (U.S. Publication No. 2020/0053117). Claims 5, 12 and 19 are similar so are analyzed together.

With regard to Claim 5: The computer system of claim 1, wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: receive an authorization from the user to track a progress of the user in completing the one or more discrete steps, monitor the progress of the user, and communicate the progress to the user.

With regard to Claim 12: The computer-implemented method of claim 8 further comprising: receiving an authorization from the user to track a progress of the user in completing the one or more discrete steps, monitoring the progress of the user, and communicating the progress to the user.

With regard to Claim 19: The computer readable storage medium of claim 15, wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: receive an authorization from the user to track a progress of the user in completing the one or more discrete steps, monitor the progress of the user, and communicate the progress to the user.

Leeds and Kim teach the system of claim 1, method of claim 8, and medium of claim 15, but do not explicitly teach this progress reporting, but it is known in the art. Mandal teaches a threat management system [title] that tracks the “progress” of a process. [0049] Based on an “authorization module”, the progress may be tracked “to completion” [0052] and the progress may be “reported”. [0051] The selection controls of the authorization module may have been selected by a user. [abstract] Mandal and Leeds are analogous art as each is directed to electronic means for managing a multi-step process.
It would have been obvious to one of ordinary skill in the art just prior to the filing of the claimed invention to combine the teaching of Mandal with that of Leeds and Kim in order to comply with regulations, as taught by Mandal; [0005] further, it is simply a combination of known parts with predictable results, simply performing Mandal’s steps at any time during Leeds’ process. Each part works independently of the other, and each works in combination identically to how it works when not combined, with no new and unexpected result inherent or disclosed.

With regard to Claim 6: The computer system of claim 5, wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: send the progress of the user and a prompt for updated discrete steps to the ML chatbot to cause the ML model to revise the one or more discrete steps into one or more updated discrete steps and generate updated personalized instructions for performing the one or more updated discrete steps, receive the updated personalized instructions for performing the one or more updated discrete steps from the ML chatbot, and communicate the updated personalized instructions to the user. [Leeds, Col. 17, lines 1-2; the system performs updates based on outcomes which have been communicated; given the established combination above, this is an obvious modification as it simply requires applying Leeds’ updates to the established instructions]

With regard to Claim 7: The computer system of claim 5, wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: send the progress of the user and a prompt for progress analysis to the ML chatbot to cause the ML model to generate an alert if the ML model determines, based upon the progress of the user, that a target date will not be met, receive the alert from the ML chatbot, and communicate the alert to the user. [Kim, 0016; tasks may need to be “completed by a specific due date”; 0085; time limits may be provided] This claim is not patentably distinct from claim 5. The data sent to the ML chatbot and the alert sent to the user are nonfunctional printed matter with respect to the claimed substrate and so are considered but given no patentable weight. The ML chatbot and model are not positively recited as being within the scope of any claimed embodiment so limitations ascribed to them are of no patentable significance. The reference is provided for the purpose of compact prosecution.

With regard to Claim 13: The computer-implemented method of claim 12 further comprising: sending the progress of the user and a prompt for updated discrete steps to the ML chatbot to cause the ML model to revise the one or more discrete steps into one or more updated discrete steps and generate updated personalized instructions for performing the one or more updated discrete steps, receiving the updated personalized instructions for performing the one or more updated discrete steps from the ML chatbot, and communicating the updated personalized instructions to the user. [Leeds, Col. 17, lines 1-2; the system performs updates based on outcomes which have been communicated; given the established combination above, this is an obvious modification as it simply requires applying Leeds’ updates to the established instructions]

With regard to Claim 14: The computer-implemented method of claim 12 further comprising: sending the progress of the user and a prompt for progress analysis to the ML chatbot to cause the ML model to generate an alert if the ML model determines, based upon the progress of the user, that a target date will not be met, receiving the alert from the ML chatbot, and communicating the alert to the user. [Kim, 0016; tasks may need to be “completed by a specific due date”; 0085; time limits may be provided] This claim is not patentably distinct from claim 12. The data sent to the ML chatbot and the alert sent to the user are nonfunctional printed matter with respect to the claimed substrate and so are considered but given no patentable weight. The ML chatbot and model are not positively recited as being within the scope of any claimed embodiment so limitations ascribed to them are of no patentable significance. The reference is provided for the purpose of compact prosecution.

With regard to Claim 20: The computer readable storage medium of claim 15, wherein the instructions, when executed by the one or more processors, further cause the one or more processors to: send the progress of the user and a prompt for updated discrete steps to the ML chatbot to cause the ML model to revise the one or more discrete steps into one or more updated discrete steps and generate updated personalized instructions for performing the one or more updated discrete steps, receive the updated personalized instructions for performing the one or more updated discrete steps from the ML chatbot, and communicate the updated personalized instructions to the user. [Leeds, Col. 17, lines 1-2; the system performs updates based on outcomes which have been communicated; given the established combination above, this is an obvious modification as it simply requires applying Leeds’ updates to the established instructions]

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT C ANDERSON whose telephone number is (571)270-7442. The examiner can normally be reached M-F 9:00 to 5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Bennett Sigmond, can be reached at (303) 297-4411. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SCOTT C ANDERSON/
Primary Examiner, Art Unit 3694

[1] Recentive Analytics, Inc. v. Fox Corp. et al., 134 F.4th 1205, 1216 (Fed. Cir. 2025)

Prosecution Timeline

Jun 29, 2023
Application Filed
Jan 30, 2026
Non-Final Rejection — §101, §103
Apr 13, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602730: Machine-Learning Driven Data Analysis Based on Demographics, Risk, and Need
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12603165: Prescription Drug Pricing and Adjudication System
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597031: Methods and Systems for Detecting Suspicious or Non-Suspicious Activities Involving a Mobile Device Use
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12585844: Reach and Frequency Prediction for Digital Component Transmissions
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12586135: Systems and Methods for Light Detection and Ranging (LIDAR) Based Generation of a Homeowners Insurance Quote
Granted Mar 24, 2026 (2y 5m to grant)
Based on the 5 most recent grants; study what changed to get past this examiner.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 89% (+30.9%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 1,024 resolved cases by this examiner. Grant probability is derived from the career allow rate.
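The projections above reduce to simple arithmetic on the examiner's career counts. This is a minimal sketch reproducing the displayed figures, assuming the interview lift is applied additively in percentage points (the dashboard's actual model is not disclosed):

```python
# Career counts shown in the Examiner Intelligence section.
granted, resolved = 595, 1024

# Baseline grant probability = career allow rate.
base = granted / resolved               # ~0.581, displayed as 58%

# Displayed interview lift: +30.9 percentage points (additive assumption).
INTERVIEW_LIFT = 0.309
with_interview = base + INTERVIEW_LIFT  # ~0.890, displayed as 89%

print(f"baseline: {base:.1%}, with interview: {with_interview:.1%}")
```

Under that reading, the 58% and 89% headline numbers follow directly from the 595/1,024 career record and the +30.9% lift.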
