Prosecution Insights
Last updated: April 19, 2026
Application No. 18/189,872

SYSTEMS AND METHODS FOR KNOWLEDGE EXTRACTION

Non-Final OA: §101, §102, §103
Filed: Mar 24, 2023
Examiner: BHAT, VIBHA NARAYAN
Art Unit: 2142
Tech Center: 2100 — Computer Architecture & Software
Assignee: Xformics Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -55.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 4 across all art units (career history; 4 currently pending)

Statute-Specific Performance

§101: 28.6% (-11.4% vs TC avg)
§103: 35.7% (-4.3% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 21.4% (-18.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 0 resolved cases
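As a quick consistency check (a hypothetical sketch, not part of the report: the `rates` mapping below simply restates the four statute figures above), the Tech Center average implied by each overcome rate and its "vs TC avg" delta can be recovered with simple arithmetic:

```python
# Hypothetical sketch: each entry is (examiner overcome rate %, delta vs
# Tech Center average %) as shown in the panel above. The implied TC
# average is tc_avg = rate - delta, since delta = examiner - TC average.
rates = {
    "101": (28.6, -11.4),
    "103": (35.7, -4.3),
    "102": (14.3, -25.7),
    "112": (21.4, -18.6),
}
for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC average ≈ {tc_avg:.1f}%")
```

All four rows work out to roughly 40%, which appears consistent with the panel applying a single Tech Center average estimate across statutes.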

Office Action

Grounds of rejection: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This office action is in response to the application filed on March 24, 2023. Claims 1-20 are pending and have been examined. Claims 1-20 are rejected.

Information Disclosure Statement

Acknowledgment is made of the information disclosure statements filed March 24, 2023, which comply with 37 CFR 1.97. As such, the information disclosure statements have been placed in the application file and the information referred to therein has been considered by the examiner.

Claim Objections

Claim 11 is objected to under 37 CFR 1.75 as being a substantial duplicate of Claim 9. When two claims in an application are duplicates or else are so close in content that they both cover the same thing, despite a slight difference in wording, it is proper after allowing one claim to object to the other as being a substantial duplicate of the allowed claim. See MPEP § 608.01(m).

Claim 12 is objected to under 37 CFR 1.75 as being a substantial duplicate of Claim 10, for the same reasons. See MPEP § 608.01(m).

Claim Rejections - 35 USC § 101

35 U.S.C. § 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

Step 1: The claim does not fall within one of the four statutory categories of invention (process, machine, manufacture, or composition of matter), or,

Step 2: The claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:

Step 2A, Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

MPEP 2106.04(a)(2)(I) states: “The mathematical concepts grouping is defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations.” MPEP 2106.04(a)(2)(III) states: “Accordingly, the ‘mental processes’ abstract idea grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions.” Further, the MPEP states: “The courts do not distinguish between mental processes that are performed entirely in the human mind and mental processes that require a human to use a physical aid (e.g. pen and paper or a slide rule) to perform the claim limitation.”

Using the two-step inquiry, it is clear that Claims 1-20 are each directed to non-statutory subject matter as shown below. Please note that the following groups of claims are expressed in different statutory categories:

Claims 1-7 are directed to a method for providing informative output from extracted features of a raw dataset.
Claims 8-16 are directed to a system for providing informative output from extracted features of a raw dataset, comprised of at least one database, a processor, and a server containing at least one database configured to perform a set of operations. Claims 17-20 are directed to a non-transitory computer-readable medium storing computer-executable instructions which, when executed by a server in network communication with at least one database, cause the server to perform a set of operations.

With respect to Claims 1, 8, and 17, which are independent claims with identical claim limitations:

Step 1: Claim 1 is directed to a method, also known as a process, which is one of the four statutory categories of patentable subject matter. Claim 8 is directed to a system for providing informative output from a raw dataset, corresponding to a machine, which is one of the four statutory categories of patentable subject matter. Claim 17 is directed to a non-transitory computer-readable medium on which computer-executable instructions are stored, corresponding to an article of manufacture, which is one of the four statutory categories of patentable subject matter.

Step 2A, Prong 1: A judicial exception is recited in the claims as they recite mental processes, which are abstract ideas:

“identifying, using a processor associated with the computer system, a trained machine-learning model configured to process data that shares a context associated with the raw dataset;” Identifying a trained machine-learning model to process data that shares an associated context with the raw dataset covers concepts that could be practically performed in the human mind, including observation, evaluation, and judgement.
Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application:

“receiving, at an application platform associated with a computer system, an upload of the raw dataset;” Receiving an upload of a raw dataset recites a generic computer function of receiving data. Mere data gathering is considered insignificant extra-solution activity – see MPEP 2106.05(g).

“applying, using the processor, the raw dataset to the trained machine-learning model;” Applying a raw dataset to a trained machine-learning model only amounts to “apply it” and mere instructions to implement an abstract idea on a computer – see MPEP 2106.05(f)(1).

“receiving, from the trained machine-learning model, an output result;” Receiving an output result from the trained machine-learning model recites a generic computer function of receiving data. Mere data gathering is considered insignificant extra-solution activity – see MPEP 2106.05(g).

“presenting, subsequent to the receiving, the output result on the application platform” Presenting the output result on the application platform is post-solution activity, which is considered insignificant extra-solution activity – see MPEP 2106.05(g).

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception. Providing input data and generating output data/results are well-understood, routine, and conventional activity of transmitting or receiving data over a network - see MPEP 2106.05(d). Applying a raw dataset to a trained machine-learning model only amounts to “apply it” and mere instructions to implement an abstract idea on a computer - see MPEP 2106.05(f)(1).

Therefore, Claims 1, 8, and 17 are directed to non-statutory subject matter and rejected.
With respect to Claims 2, 9, and 11, which have identical claim limitations and are dependent upon Claims 1 and 8, respectively:

Step 2A, Prong 1: A judicial exception is not recited in the claims as they do not recite an abstract idea (mathematical concepts, certain methods of organizing human activity, or mental processes), law of nature, or natural phenomenon.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application:

“wherein identifying the trained machine-learning model comprises receiving, from a user, a selection on the trained machine-learning model from a plurality of trained machine-learning models, wherein each of the plurality of trained machine-learning models is associated with a unique context” Receiving a selection of a machine-learning model from a user recites a generic computer function of receiving data. Mere data gathering is considered insignificant extra-solution activity – see MPEP 2106.05(g).

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception. Receiving input data and generating output data/results are well-understood, routine, and conventional activity of transmitting or receiving data over a network - see MPEP 2106.05(d).

Therefore, Claims 2, 9, and 11 are directed to non-statutory subject matter and rejected.

With respect to Claims 3, 10, 12, and 18, which have identical claim limitations and are dependent upon Claims 1, 8, and 17, respectively:

Step 2A, Prong 1: A judicial exception is recited in the claims as they recite mental processes, which are abstract ideas:

“deriving, upon an analysis of words contained in the raw dataset using the processor, the context associated with the raw dataset;” Deriving context associated with a raw dataset covers concepts that could be practically performed in the human mind, including observation, evaluation, and judgement.
“automatically selecting, based on the deriving, the trained machine-learning model” Selecting a trained machine-learning model covers concepts that could be practically performed in the human mind, including observation, evaluation, and judgement.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application.

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception.

Therefore, Claims 3, 10, 12, and 18 are directed to non-statutory subject matter and rejected.

With respect to Claims 4, 13, and 19, which have identical claim limitations and are dependent upon Claims 1, 8, and 17, respectively:

Step 2A, Prong 1: A judicial exception is not recited in the claims as they do not recite an abstract idea (mathematical concepts, certain methods of organizing human activity, or mental processes), law of nature, or natural phenomenon.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application:

“presenting, prior to application of the raw dataset to the identified trained machine-learning model, a template on the application platform;” Presenting an output result (template) on an application platform is post-solution activity, which is considered insignificant extra-solution activity – see MPEP 2106.05(g).

“receiving, from a user, one or more contextual parameter designations for the raw dataset;” Receiving contextual parameter designations for a raw dataset recites a generic computer function of receiving data. Mere data gathering is considered insignificant extra-solution activity – see MPEP 2106.05(g).
“applying, in conjunction with the raw dataset, the one or more contextual parameter designations to the trained machine-learning model” Applying contextual parameter designations to a trained machine-learning model only amounts to “apply it” and mere instructions to implement an abstract idea on a computer – see MPEP 2106.05(f)(1).

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception. Providing/receiving input data and presenting/generating output data/results (templates) are well-understood, routine, and conventional activity of transmitting or receiving data over a network - see MPEP 2106.05(d). Applying one or more contextual parameter designations to a trained machine-learning model only amounts to “apply it” and mere instructions to implement an abstract idea on a computer - see MPEP 2106.05(f)(1).

Therefore, Claims 4, 13, and 19 are directed to non-statutory subject matter and rejected.

With respect to Claims 5 and 14, which have identical claim limitations and are dependent upon Claims 1 and 8, respectively:

Step 2A, Prong 1: A judicial exception is not recited in the claims as they do not recite an abstract idea (mathematical concepts, certain methods of organizing human activity, or mental processes), law of nature, or natural phenomenon.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application:

“wherein the output result is a graph illustrating a relationship between elements contained in the raw dataset” Presenting an output result, such as a graph, is post-solution activity, which is considered insignificant extra-solution activity – see MPEP 2106.05(g).

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception.
Presenting/generating output data/results (graphs) are well-understood, routine, and conventional activity of transmitting or receiving data over a network - see MPEP 2106.05(d).

Therefore, Claims 5 and 14 are directed to non-statutory subject matter and rejected.

With respect to Claims 6 and 15, which have identical claim limitations and are dependent upon Claims 5 and 14, respectively:

Step 2A, Prong 1: A judicial exception is not recited in the claims as they do not recite an abstract idea (mathematical concepts, certain methods of organizing human activity, or mental processes), law of nature, or natural phenomenon.

Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application:

“wherein the graph is one of: a cluster graph, a choropleth graph, a bar graph, and a line graph” A graph is considered post-solution activity, which is also considered insignificant extra-solution activity – see MPEP 2106.05(g).

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception. Presenting output results, like types of graphs, is well-understood, routine, and conventional activity of transmitting or receiving data over a network - see MPEP 2106.05(d).

Therefore, Claims 6 and 15 are directed to non-statutory subject matter and rejected.

With respect to Claims 7, 16, and 20, which have identical claim limitations and are dependent upon Claims 1, 8, and 17, respectively:

Step 2A, Prong 1: A judicial exception is not recited in the claims as they do not recite an abstract idea (mathematical concepts, certain methods of organizing human activity, or mental processes), law of nature, or natural phenomenon.
Step 2A, Prong 2: The claims do not recite additional elements that integrate the judicial exception into a practical application:

“wherein the output result corresponds to a suggestion to adjust one or more activities of an organization that produces the raw dataset to improve an efficiency of the organization” Presenting an output result, such as a suggestion to adjust activities of an organization, is post-solution activity, which is considered insignificant extra-solution activity – see MPEP 2106.05(g).

Step 2B: The claims do not recite additional elements that amount to significantly more than the judicial exception. Presenting/generating output data/results (suggestions) are well-understood, routine, and conventional activity of transmitting or receiving data over a network - see MPEP 2106.05(d).

Therefore, Claims 7, 16, and 20 are directed to non-statutory subject matter and rejected.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e. changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 8-13, and 17-19 are rejected under 35 U.S.C.
102(a)(2) as being anticipated by Polleri et al. (U.S. Patent No. US-11556862-B2, hereinafter “Polleri”). Polleri was filed on 6/4/2020, and this date is before the earliest effective filing date of this application, i.e., 3/24/2023. Therefore, Polleri constitutes prior art under 35 U.S.C. 102(a)(2).

With respect to Claims 1, 8, and 17, Polleri teaches:

“receiving, at an application platform associated with a computer system, an upload of the raw dataset;” (Column 8, Lines 29-34, discloses the receiving of a user input through an interface (application platform) where the user can identify one or more locations of data (i.e. a raw dataset).)

“identifying, using a processor associated with the computer system, a trained machine-learning model configured to process data that shares a context associated with the raw dataset;” (Column 8, Lines 47-61, discloses a user inputting a specification for a type of problem they’d like to implement a machine learning solution for. The application can translate the native language inputted to understand the goals of the machine learning model. Such techniques can recognize keywords in the native language to recommend or select a particular machine learning algorithm.)

“applying, using the processor, the raw dataset to the trained machine-learning model;” (Column 11, Lines 6-9, discloses the generated machine learning model using training data (raw dataset) to train the machine learning model to the desired performance parameters.)

“receiving, from the trained machine-learning model, an output result;” (Column 46, Lines 44-48, discloses the generation of an output from the machine learning model.)

“and presenting, subsequent to the receiving, the output result on the application platform” (Column 13, Lines 39-40, discloses the machine learning platform (application) can provide results of the model to the user.)

Therefore, Claims 1, 8, and 17 are rejected.
With respect to Claims 2, 9, and 11, Polleri teaches:

“wherein identifying the trained machine-learning model comprises receiving, from a user, a selection on the trained machine-learning model from a plurality of trained machine-learning models, wherein each of the plurality of trained machine-learning models is associated with a unique context” (Column 8, Line 62 through Column 9, Line 3, discloses a user can choose the type of problem they want to solve through a graphical user interface, where several generic machine learning models are then displayed back to the user for selection. The user can then select one of the generic models or a custom model to solve the problem received as the second input.)

Therefore, Claims 2, 9, and 11 are rejected.

With respect to Claims 3, 10, 12, and 18, Polleri teaches:

“wherein identifying the trained machine-learning model comprises: deriving, upon an analysis of words contained in the raw dataset using the processor, the context associated with the raw dataset;” (Column 8, Lines 50-59, discloses the input of the problem as native language text or speech, wherein the technique can decipher the native language to understand the goals (context) of the machine learning model associated with a wide variety of problem types, such as “classification, regression, product recommendations, medical diagnosis, financial analysis, predictive maintenance, image and sound recognition, text recognition, and tabular data analysis.”)

“automatically selecting, based on the deriving, the trained machine-learning model” (Column 8, Lines 59-61, discloses the technique can then recognize one or more keywords in the native language to select or recommend a particular machine learning algorithm.)

Therefore, Claims 3, 10, 12, and 18 are rejected.
With respect to Claims 4, 13, and 19, Polleri teaches:

“presenting, prior to application of the raw dataset to the identified trained machine-learning model, a template on the application platform;” (Column 8, Lines 24-33, discloses an interface for the user to interact with, which can include a graphical user interface on a touchscreen display (application platform). The user can use the interface to identify the locations of the data that will be used for generating the machine learning model.)

“receiving, from a user, one or more contextual parameter designations for the raw dataset;” (Column 8, Lines 47-50, discloses a second user input through the input of text via a user interface that can specify a type of problem that the user would like to implement the machine learning for.)

“applying, in conjunction with the raw dataset, the one or more contextual parameter designations to the trained machine-learning model” (Column 11, Lines 6-9, discloses a generated machine learning model that can use the training data to train the machine learning model to the desired performance parameters.)

Therefore, Claims 4, 13, and 19 are rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e. changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 5-7, 14-15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Polleri et al. (U.S. Patent No. US-11556862-B2, filed on 6/4/2020, hereinafter “Polleri”), in view of Anisingaraju et al. (U.S. Patent Application Publication No. US-20160203217-A1, filed on 12/19/2015, hereinafter “Anisingaraju”).
With respect to Claims 5 and 14, Polleri does not appear to explicitly disclose: “wherein the output result is a graph illustrating a relationship between elements contained in the raw dataset”

However, Anisingaraju teaches: “wherein the output result is a graph illustrating a relationship between elements contained in the raw dataset” (Paragraph 0047 discloses a data presentation subcomponent that relates to how to present the data to the user, which can include traditional or advanced visualization methods such as infographics, maps, and advanced charts (graphs).)

It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the present application to implement Claims 5 and 14 utilizing the teachings of Polleri and the teachings of Anisingaraju, which are both in the same field of invention. A PHOSITA would have been motivated to combine Polleri’s method of receiving a dataset, applying a trained machine learning model to the dataset, and generating/presenting an output result through an application platform with Anisingaraju’s method of generating advanced graphical visualizations that illustrate the relationships found between data elements. This would provide users with a more visual and more easily understood presentation of analytical results using data visualization techniques.

Therefore, Claims 5 and 14 are rejected.

With respect to Claims 6 and 15, Polleri does not appear to explicitly disclose: “wherein the graph is one of: a cluster graph, a choropleth graph, a bar graph, and a line graph”

However, Anisingaraju teaches: “wherein the graph is one of: a cluster graph, a choropleth graph, a bar graph, and a line graph” (Paragraph 0047 discloses how the data is presented to the user through traditional or advanced visualization methods such as infographics, maps, and advanced charts (graphs).)
It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the present application to implement Claims 6 and 15 utilizing the teachings of Polleri and the teachings of Anisingaraju, which are both in the same field of invention. A PHOSITA would have been motivated to combine Polleri’s method of receiving a dataset, applying a trained machine learning model to the dataset, and generating/presenting an output result through an application platform with Anisingaraju’s method of generating different types of visualization methods, which can include infographics, maps, and advanced charts. This would allow the output from Polleri to be one of a variety of graph types from Anisingaraju, which would expand the available visualization design choices for the data being presented and improve how different data relationships can be shown.

Therefore, Claims 6 and 15 are rejected.

With respect to Claims 7, 16, and 20, Polleri does not appear to explicitly disclose: “wherein the output result corresponds to a suggestion to adjust one or more activities of an organization that produces the raw dataset to improve an efficiency of the organization”

However, Anisingaraju teaches: “wherein the output result corresponds to a suggestion to adjust one or more activities of an organization that produces the raw dataset to improve an efficiency of the organization” (Paragraphs 0095 and 0096 disclose an Insight Generation Engine (IGE) that ingests data from various sources to create aggregated data, which is then processed using natural language processing (NLP) to attach attributes and contributors to the data. These attributes and contributors can be in the form of topics (i.e. topics specified to be important to an organization) and can further be processed to provide actionable insights and recommendations to improve the organization.)
It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the present application to implement Claims 7, 16, and 20 utilizing the teachings of Polleri and the teachings of Anisingaraju, which are both in the same field of invention. A PHOSITA would have been motivated to combine Polleri’s method of receiving a dataset associated with an organization, applying a trained machine learning model to the dataset, and generating/presenting an output result through an application platform with Anisingaraju’s method of outputting a suggestion or actionable insight to adjust one or more organizational activities to improve efficiency and performance. It is common within business intelligence systems and applications to present outputs consisting of actionable recommendations based on analyzed data to enhance operational performance.

Therefore, Claims 7, 16, and 20 are rejected.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Vibha Bhat, whose telephone number is (571) 272-7091. The examiner can normally be reached Monday – Thursday from 8:00 AM to 5:00 PM EST and every other Friday from 8:00 AM to 4:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. See MPEP § 713.01. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at https://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes, can be reached at telephone number (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of an application may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or (571) 272-1000.

/Vibha Bhat/
Examiner, Art Unit 2142

/Mariela Reyes/
Supervisory Patent Examiner, Art Unit 2142

Prosecution Timeline

Mar 24, 2023: Application Filed
Feb 11, 2026: Non-Final Rejection, §101, §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
