Prosecution Insights
Last updated: April 19, 2026
Application No. 18/486,270

COMPUTER IMPLEMENTED TECHNIQUES FOR SAMPLE OPTIMIZATION

Status: Non-Final OA (§101)
Filed: Oct 13, 2023
Examiner: TOMASZEWSKI, MICHAEL
Art Unit: 3681
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Optum Services (Ireland) Limited
OA Round: 3 (Non-Final)
Grant Probability: 47% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 70%

Examiner Intelligence

Career Allow Rate: 47% (271 granted / 572 resolved; -4.6% vs TC avg)
Interview Lift: +23.1% (strong; across resolved cases with interview)
Avg Prosecution: 2y 11m typical; 27 applications currently pending
Career History: 599 total applications across all art units
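The headline numbers above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (the page does not give the interviewed/non-interviewed case counts, so using the career rate as the "without interview" baseline is an approximation):

```python
# Figures from the page: 271 granted out of 572 resolved cases.
granted, resolved = 271, 572
allow_rate = granted / resolved          # ~0.474, displayed as 47%

# Interview lift is reported as +23.1 percentage points, i.e. the allow
# rate among interviewed cases minus the rate among non-interviewed ones.
# Treating the career rate as the "without" baseline is an approximation.
with_interview = allow_rate + 0.231      # ~0.705, displayed as 70%

print(f"career allow rate: {allow_rate:.1%}")
print(f"with interview:    {with_interview:.1%}")
```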

Statute-Specific Performance

§101: 53.3% (+13.3% vs TC avg)
§103: 35.9% (-4.1% vs TC avg)
§102: 1.8% (-38.2% vs TC avg)
§112: 4.9% (-35.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 572 resolved cases
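Each statute row pairs the examiner's allowance rate with a delta against the Tech Center average, so the implied TC baseline can be recovered by subtraction. A quick check with the figures above shows all four rows share the same estimated baseline:

```python
# Examiner rate and delta vs Tech Center average, per statute (from the table).
rows = {"101": (53.3, +13.3), "103": (35.9, -4.1),
        "102": (1.8, -38.2), "112": (4.9, -35.1)}

# Implied TC average = examiner rate - delta.
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rows.items()}
print(tc_avg)  # {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

All four deltas resolve to a single 40.0% Tech Center estimate, consistent with the page describing the baseline as one career-level "Tech Center average estimate" rather than a per-statute figure.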

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/19/2026 has been entered.

Notice to Applicant
3. This communication is in response to the communication filed 2/19/2026. Claims 9, 16-17, and 22 are cancelled. Claims 1, 11, 18-20, and 23 are currently amended. Claims 1-8, 10-15, 18-21, and 23 are currently pending.

Claim Rejections - 35 USC § 101
4. 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4.1. Claims 1-8, 10-15, 18-21, and 23 are rejected under 35 U.S.C. § 101 because while the claims (1) are to a statutory category (i.e., process, machine, manufacture or composition of matter), the claims (2A1) recite an abstract idea (i.e., a law of nature, a natural phenomenon); (2A2) do not recite additional elements that integrate the abstract idea into a practical application; and (2B) are not directed to significantly more than the abstract idea itself. In regards to (1), the claims are to a statutory category (i.e., statutory categories including a process, machine, manufacture or composition of matter). 
In particular, independent claims 1, 11 and 18, and their respective dependent claims are directed, in part, to methods and systems for determining an optimal sample size for a contract. In regards to (2A1), the claims, as a whole, recite and are directed to an abstract idea because the claims include one or more limitations that correspond to an abstract idea including mental processes and/or certain methods of organizing human activity which encompasses certain activity of a single person, certain activity that involves multiple people, and certain activity between a person and a computer. For example, independent claims 1, 11 and 18, as a whole, are directed to determining an optimal sample size for a contract by, in part, receiving data, determining a sampling distribution, determining a significance threshold, determining a probability, determining an expected star value, and presenting the expected star value which are human activities and/or interactions and therefore, certain methods of organizing human activity which encompasses certain activity of a single person, certain activity that involves multiple people, and certain activity between a person and a computer. The dependent claims include all of the limitations of their respective independent claims and thus are directed to the same abstract idea identified for the independent claims but further describe the elements and/or recite field of use limitations. Furthermore, assuming arguendo that the claims are not directed to certain methods of organizing human activities, the claims, nevertheless, are directed to an abstract idea because the claims, except for certain limitations (* identified below in bold), under the broadest reasonable interpretation, can be reasonably and practically performed in the human mind and/or with pen and paper using observation, evaluation, judgment and/or opinion. 
That is, other than reciting the certain additional elements, nothing in the claims precludes the limitations from being practically performed in the mind and/or with pen and paper.

CLAIM 1: A system, comprising: one or more processors; and one or more non-transitory computer readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving electronic response data from data collection objects, wherein the electronic response data includes electronic data representing response scores to queries in the data collection objects; determining a sampling distribution for each of one or more sample sizes based on the response scores; determining a significance threshold and a reliability coefficient for each of the one or more sample sizes, the reliability coefficient being output in an electronic form suitable for comparison with an adjustment grid, the adjustment grid being representable as columns and rows that indicate quantification of reliability and/or population distributions; determining a probability for each of one or more performance measure values for the one or more sample sizes based on a cumulative distributive function (CDF) of the sampling distribution and one or more parameters of the adjustment grid; determining an expected performance measure value for each of the one or more sample sizes based on the probability determined for each of the one or more performance measure values; and causing a presentation of the expected performance measure value determined for each of the one or more sample sizes in a user interface of a device, wherein at least one of the one or more sample sizes with a highest expected performance measure value is recommended, by automatically displaying the at least one of the one or more sample sizes as an optimal sample size; receiving an update to at least one of the plurality of different performance measures; determining a change in the highest expected performance value based on the update to the at least one of the plurality of different performance measures; and causing the user interface to update in accordance with the change in the highest expected performance measure value and display one or more of the updated plurality of different performance measures, the user interface including interactive areas that present the plurality of different performance measures, wherein the highest expected performance value is an aggregate performance value that is generated based on: (i) an aggregation of the plurality of different performance measures in a manner that identifies the optimal sample size according to the aggregation and (ii) the probability for each of the one or more performance measure values based on the CDF of the sampling distribution and the one or more parameters of the adjustment grid.

CLAIM 2: The system of claim 1, wherein determining the probability for each performance measure value comprises: determining a curve indicating the sampling distribution for the one or more sample sizes; and determining a probability of a contract achieving each performance measure value by determining an area under the curve of a probability density function of the sampling distribution, wherein the curve is partitioned by at least one of cutpoints, a significance threshold, or a reliability assignment according to the adjustment grid.

CLAIM 3: The system of claim 1, wherein determining the expected performance measure value comprises: determining a mean across each expected performance measure, wherein the expected performance measure is a product of the probability for each performance measure value and the performance measure value.

CLAIM 4: The system of claim 1, wherein the user interface includes a visual representation of the expected performance measure value determined for each of the one or more sample sizes, as a function of the one or more sample sizes at a contract level and a measure level.

CLAIM 5: The system of claim 1, wherein the significance threshold is an output of a two sided t-test that compares a mean of a contract to an average of one or more other contracts.

CLAIM 6: The system of claim 1, wherein the reliability coefficient compares a variance of a contract to variances between one or more other contracts.

CLAIM 7: The system of claim 2, further comprising: determining a standard error for each of the one or more sample sizes, wherein the standard error measures variation in contract values, and wherein the standard error is based on the response scores and a user size.

CLAIM 8: The system of claim 7, wherein each contract value is a mean of the response scores to the queries in the data collection objects, and the user size is a product of a response rate and a sample size.

CLAIM 10: The system of claim 1, wherein the data collection objects include a Consumer Assessment of Health Care Providers and Systems (CAHPS). 
CLAIM 11: A computer-implemented method comprising: receiving, by one or more processors, electronic response data from data collection objects, wherein the electronic response data includes electronic data representing response scores to queries in the data collection objects; determining, by the one or more processors, a sampling distribution for each of one or more sample sizes based on the response scores; determining, by the one or more processors, a significance threshold and a reliability coefficient for each of the one or more sample sizes, the reliability coefficient being output in an electronic format suitable for comparison with an adjustment grid, the adjustment grid being representable as columns and rows that indicate quantification of reliability and/or population distributions; determining, by the one or more processors, a probability for each of one or more performance measure values for the one or more sample sizes based on a cumulative distributive function (CDF) of the sampling distribution and one or more parameters of an adjustment grid; determining, by the one or more processors, an expected performance measure value for each of the one or more sample sizes based on the probability determined for each of the one or more performance measure values; and causing, by the one or more processors, a presentation of the expected performance measure value calculated for each of the one or more sample sizes in a user interface of a device, wherein at least one of the one or more sample sizes with a highest expected performance measure value is recommended by automatically displaying the at least one of the one or more sample sizes as an optimal sample size, the highest expected performance measure value being determined based on a plurality of different performance measures; receiving an update to at least one of the plurality of different performance measures; determining a change in the highest expected performance measure value based on the update to the at least one of the plurality of different performance measures; and causing the user interface to update in accordance with the change in the highest expected performance measure value and display one or more of the updated plurality of different performance measures, the user interface including interactive areas that present the plurality of different performance measures, wherein the highest expected performance value is an aggregate performance value that is generated based on: (i) an aggregation of the plurality of different performance measures in a manner that identifies the optimal sample size according to the aggregation and (ii) the probability for each of the one or more performance values based on the CDF of the sampling distribution and the one or more parameters of the adjustment grid.

CLAIM 12: The computer-implemented method of claim 11, wherein determining the probability for each performance measure value comprises: determining, by the one or more processors, a curve indicating the sampling distribution for the one or more sample sizes; and determining, by the one or more processors, a probability of a contract achieving each performance measure value by determining an area under the curve of a probability density function of the sampling distribution, wherein the curve is partitioned by at least one of cutpoints, a significance threshold, or a reliability assignment according to the adjustment grid.

CLAIM 13: The computer-implemented method of claim 11, wherein determining the expected performance measure value comprises: determining, by the one or more processors, a mean across each expected performance measure, wherein the expected performance measure is a product of the probability for each performance measure value and the performance measure value.

CLAIM 14: The computer-implemented method of claim 11, wherein the user interface includes a visual representation of the expected performance measure value determined for each of the one or more sample sizes, as a function of the one or more sample sizes at a contract level and a measure level.

CLAIM 15: The computer-implemented method of claim 11, wherein the significance threshold is an output of a two sided t-test that compares a mean of a contract to an average of one or more other contracts.

CLAIM 18: One or more non-transitory computer readable media storing processor executable instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: receiving electronic response data from data collection objects, wherein the electronic response data includes electronic data representing response scores to queries in the data collection objects; determining a sampling distribution for each of one or more sample sizes based on the response scores; determining a significance threshold and a reliability coefficient for each of the one or more sample sizes, the reliability coefficient being output in an electronic form suitable for comparison with an adjustment grid, the adjustment grid being representable as columns and rows that indicate quantification of reliability and/or population distributions; determining a probability for each of one or more performance measure values for the one or more sample sizes based on a cumulative distributive function (CDF) of the sampling distribution and one or more parameters of the adjustment grid; determining an expected performance measure value for each of the one or more sample sizes based on the probability determined for each of the one or more performance measure values; and causing a presentation of the expected performance measure value determined for each of the one or more sample sizes in a user interface of a device, wherein at least one of the one or more sample sizes with a highest expected performance measure value is recommended by automatically displaying the at least one of the one or more sample sizes as an optimal sample size, the highest expected performance measure value being determined based on a plurality of different performance measures; receiving an update to at least one of the plurality of different performance measures; determining a change in the highest expected performance measure value based on the update to the at least one of the plurality of different performance measures; and causing the user interface to update in accordance with the change in the highest expected performance measure value and display one or more of the updated plurality of different performance measures, the user interface including interactive areas that present the plurality of different performance measures, wherein the highest expected performance value is an aggregate performance value that is generated based on: (i) an aggregation of the plurality of different performance measures in a manner that identifies the optimal sample size according to the aggregation and (ii) the probability for each of the one or more performance values based on the CDF of the sampling distribution and the one or more parameters of the adjustment grid.

CLAIM 19: The one or more non-transitory computer readable media of claim 18, wherein determining the probability for each performance measure value comprises: determining a curve indicating the sampling distribution for the one or more sample sizes; and determining a probability of a contract achieving each performance measure value by determining an area under the curve of a probability density function of the sampling distribution, wherein the curve is partitioned by at least one of cutpoints, a significance threshold, or a reliability assignment according to the adjustment grid.

CLAIM 20: The one or more non-transitory computer readable media of claim 18, wherein determining the expected performance measure value comprises: determining a mean across each expected performance measure, wherein the expected performance measure is a product of the probability for each performance measure value and the performance measure value.

CLAIM 21: The system of claim 1, wherein: the electronic response data that is received from the data collection objects was generated by interactions with a first device after having received input from a first user; and the device is configured to present the expected performance measure value on a second device for a second user.

CLAIM 23: The system of claim 1, wherein the aggregation of the expected performance measure values is normalized such that a plurality of expected performance measure values are represented with a unified scoring scale used for presentation of the highest expected performance measure value, the aggregation being performed with normalized expected performance measure values such that the expected performance measure value is a normalized value.

* The limitations that are in bold are considered “additional elements” that are further analyzed below in subsequent steps of the 101 analysis. The limitations that are not in bold are abstract and/or can be reasonably and practically performed in the human mind and/or with pen and paper. Furthermore, the claims recite determining a significance threshold, a reliability coefficient for each of the one or more sample sizes, a probability for each of the one or more performance measure values, an expected performance measure value, an optimal sample size, etc. which are mathematical calculations. As such, the claims may also be properly categorized under the abstract category of mathematical concepts. In regards to (2A2), the claims do not recite additional elements that integrate the abstract idea into a practical application. 
The additional elements in the claims (i.e., * identified above in bold) do not integrate the abstract idea into a practical application because the additional elements merely add insignificant extra-solution activity to the abstract idea; merely link the use of the judicial exception to a particular technological environment or field of use; and/or simply append technologies and functions, specified at a high level of generality, to the abstract idea (i.e., the additional elements do not amount to more than a recitation of the words “apply it” (or an equivalent) or are more than mere instructions to implement an abstract idea or other exception on a computer). Here, the additional elements (e.g., one or more processors, non-transitory computer readable medium, interface, computer, etc.) are recited at a high-level of generality such that it amounts to no more than mere instructions to apply the abstract idea using generic computer technologies. Moreover, the claims recite “cause the one or more processors to perform operations”, etc. devoid of any meaningful technological improvement details and thus, further evidence the additional elements are merely being used to leverage generic technologies to automate what otherwise could be done manually. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. 
Furthermore, the additional elements do not recite improvements to the functioning of a computer, or to any other technology or technical field—the additional elements merely recite general purpose computer technology; the additional elements do not recite applying or using a judicial exception to effect a particular treatment or prophylaxis for disease or medical condition—there is no actual administration of a particular treatment; the additional elements do not recite applying the judicial exception with, or by use of, a particular machine—the additional elements merely recite general purpose computer technology; the additional elements do not recite limitations effecting a transformation or reduction of a particular article to a different state or thing—the additional elements do not recite transformation such as a rubber mold process; the additional elements do not recite applying or using the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment—the additional elements merely leverage general purpose computer technology to link the abstract idea to a technological environment. In regards to (2B), the claims, individually, as a whole and in combination with one another, do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements or combination of elements in the claims, other than the abstract idea per se, amount to no more than a recitation of (A) a generic computer structure(s) that serves to perform computer functions that serve to merely link the abstract idea to a particular technological environment (i.e., computers); and/or (B) functions that are well-understood, routine, and conventional activities previously known to the pertinent industry. 
Here, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply the exception using generic computer technologies. Mere instructions to apply an exception using generic computer technologies cannot provide an inventive concept. Moreover, paragraphs [0027]-[0030] of applicant's specification (US 2025/0125023) recites that the system/method is implemented using equipment such as hand-held computers, desktop computers, laptop computers, wireless communication devices, cell phones, smartphones, mobile communications devices, a Personal Communication System (PCS) device, tablets, server computers, gateway computers, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof which are well-known general purpose or generic-type computers and/or technologies. The use of generic computer components recited at a high level of generality to process information through an unspecified processor/computer does not impose any meaningful limit on the computer implementation of the abstract idea. Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception (the abstract idea). Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. 
Furthermore, the additional elements are merely well-known general purpose computers, components and/or technologies that receive, transmit, store, display, generate and otherwise process information which are akin to functions that courts consider well-understood, routine, and conventional activities previously known to the pertinent industry, such as, performing repetitive calculations; receiving or transmitting data over a network; electronic recordkeeping; retrieving and storing information in memory; and sorting information (See, for example, MPEP § 2106). Therefore, the claims are not patent-eligible under 35 U.S.C. § 101.

Response to Arguments
5. Applicant's arguments filed 2/19/2026 have been fully considered but they are not persuasive. Applicant’s arguments will be addressed hereinbelow in the order in which they appear in the response filed 2/19/2026.
5.1. Applicant argues, on pages 13-17 of the response, that the claims are patent-eligible subject matter for the following reasons: (1) the claims are not directed to a method of organizing human activity; (2) the USPTO’s recently-issued December Memo supports patentability of amended claim 1; and (3) Examples 37 and 42 of the USPTO’s 2019 revised subject matter eligibility Guidance (2019 PEG) support patentability of claim 1. In regard to (1), it is respectfully reiterated that the claimed invention is directed to determining an optimal sample size for a contract to calculate performance measure values or star ratings of healthcare providers based on patient surveys to thereby improve healthcare. These are human activities and/or interactions and thus, properly categorized as certain methods of organizing human activity which encompasses certain activity of a single person, certain activity that involves multiple people, and certain activity between a person and a computer. 
However, assuming arguendo that the claims cannot be properly categorized under certain methods of organizing human activity, the claims are nonetheless abstract because the claims recite and are directed to mental concepts. For example, it is submitted that receiving response data, determining sampling distribution, determining a significance threshold and a reliability coefficient, determining a probability for each performance measure value, determining an expected performance measure value, presenting the expected performance measure values and recommended optimal sample sizes, and the like are mental concepts because they can be performed manually by a human in the human mind and/or with pen and paper using observation, evaluation, judgment and/or opinion. While the claims recite electronic data, an electronic form, automatically, a user interface including interactive areas, etc., these features do not render the claims non-abstract and are considered additional elements analyzed in the subsequent prongs of the 101 analysis. In regard to (2), it is noted that the December Memo highlighted that the Federal Circuit in Ex Parte Desjardins held that the eligibility determination should turn on whether “the claims are directed to an improvement to computer functionality versus being directed to an abstract idea” and that the claimed invention was a method of training a machine learning model on a series of tasks whereby parameter values were adjusted to optimize performance of the machine learning model which was deemed to be an improvement to computer functionality and more specifically, machine learning models. It is respectfully submitted that the pending claims are not directed to an improvement to computer functionality per se. Rather, it is submitted that the alleged improvements of applicant's claims pertain to the abstract idea itself, rather than improvements to the technology (i.e., computer technology or computer field). 
For example, the claims recite generic computer technology used to automate the steps of receiving response data, determining sampling distributions, determining significance thresholds and reliability coefficients for the sample sizes, determining probabilities for each of the performance measure values, etc. to thereby generate and display the aggregate performance value that identifies the optimal sample size for a contract. In other words, the focus of applicant’s claims is not on an improvement in computers as tools, but on certain abstract ideas that use computers as tools. In regard to (3), it is noted that the 101 patent-eligible claims of Example 37 are directed to a relocation of icons on a graphical user interface (GUI) that improves GUI technology by making the GUI more functionally user friendly. As such, the deciding factor in determining 101 patent-eligibility was based on a technological improvement rather than automation. Moreover, none of the limitations of the 101 patent-eligible claims can be reasonably and practically performed in the human mind and/or with pen and paper. For example, determining the amount of use of each icon using a processor that tracks how much memory has been allocated to each application associated with each icon over a predetermined period of time cannot be performed in the human mind and/or with pen and paper. Unlike Example 37, the pending claims are not directed to GUI technology and thus, Example 37 is not analogous art. Moreover, unlike the 101 patent-eligible claims of Example 37, the pending claims are not directed to any technological improvement per se, such as an improvement in computers as tools, as set forth above. It is also respectfully submitted that the pending claims are not analogous to the 101 patent-eligible claims of Example 42. 
In Example 42, a technical problem is clearly disclosed—electronic medical records (EMRs) cannot be consolidated on a computer server due to format inconsistencies (i.e., records from different sources are input using different non-standard formats). To resolve this technical problem, the additional elements of 101 patent-eligible Claim 1 of Example 42 recite a specific technological improvement over prior art systems by enabling users to share EMR information in real-time in a standardized format regardless of the format in which the information was input by a user. In short, Example 42 clearly provides a technological solution to a technological problem thereby integrating the abstract idea into a practical application. In contrast, the pending claims do not recite any technological problems with inputs in different formats, consolidation of information, etc. As such, Example 42 is not analogous to the pending claims. Furthermore, the pending claims are not directed to a technological improvement per se and do not recite additional elements that integrate the abstract idea into a practical application, as set forth above. As such, it is respectfully submitted that the pending claims are directed to an abstract idea, the claims do not recite any additional elements that integrate the abstract idea into a practical application, the additional elements do not amount to significantly more than the abstract idea itself, and therefore, the claims are not patent-eligible subject matter under 35 U.S.C. § 101.

Conclusion
6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Tomaszewski whose telephone number is (313)446-4863. The examiner can normally be reached M-F 5:30 am - 2:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter H Choi can be reached at (469) 295-9171. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MICHAEL TOMASZEWSKI/Primary Examiner, Art Unit 3681
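The claims quoted in the Office Action recite a concrete statistical pipeline: for each candidate sample size, build a sampling distribution, take the probability of landing in each performance band from its CDF (partitioned by cutpoints per the adjustment grid), and recommend the size with the highest probability-weighted expected value. A minimal sketch of that pipeline under assumed inputs (a normal sampling distribution, and hypothetical cutpoints, star values, mean, and population SD; none of these constants come from the application itself):

```python
import math

def normal_cdf(x, mu, sigma):
    # CDF of a normal sampling distribution via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def expected_value(mu, sd_pop, n, cutpoints, values):
    # Standard error shrinks as the sample size grows; band probabilities
    # come from CDF differences across the cutpoints (a stand-in for the
    # claimed "adjustment grid" parameters).
    se = sd_pop / math.sqrt(n)
    edges = [-math.inf] + cutpoints + [math.inf]
    probs = [normal_cdf(b, mu, se) - normal_cdf(a, mu, se)
             for a, b in zip(edges, edges[1:])]
    return sum(p * v for p, v in zip(probs, values))

# Hypothetical inputs: mean response score 3.4, population SD 1.1, and
# star bands at cutpoints [2.5, 3.0, 3.5, 4.0] mapping to star values 1..5.
cutpoints, stars = [2.5, 3.0, 3.5, 4.0], [1, 2, 3, 4, 5]
sizes = [100, 300, 600, 900]
ev = {n: expected_value(3.4, 1.1, n, cutpoints, stars) for n in sizes}
best = max(ev, key=ev.get)   # sample size with the highest expected value
print(best, round(ev[best], 3))
```

With these particular stand-in numbers the smallest sample wins, because the mean sits just below a cutpoint and a wider sampling distribution puts more mass in the higher band; the claimed method's tradeoff is exactly this interaction between sample size, standard error, and band boundaries.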

Prosecution Timeline

Oct 13, 2023
Application Filed
Jul 23, 2025
Non-Final Rejection — §101
Sep 29, 2025
Applicant Interview (Telephonic)
Sep 29, 2025
Examiner Interview Summary
Oct 27, 2025
Response Filed
Nov 15, 2025
Final Rejection — §101
Dec 09, 2025
Interview Requested
Feb 19, 2026
Request for Continued Examination
Mar 09, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592900
METHOD AND APPARATUS FOR MESSAGING SERVICE
2y 5m to grant • Granted Mar 31, 2026
Patent 12567490
DEEP-LEARNING-BASED MEDICAL IMAGE INTERPRETATION SYSTEM FOR ANIMALS
2y 5m to grant • Granted Mar 03, 2026
Patent 12561751
DIGITAL COPYRIGHT CREATION MODULE FOR DIGITAL CONTENT CREATED USING GENERATIVE AI, AND DIGITAL CONTENT DISTRIBUTION APPARATUS AND METHOD USING THE SAME
2y 5m to grant • Granted Feb 24, 2026
Patent 12548682
SYSTEM AND METHOD FOR OUTCOME TRACKING AND ANALYSIS
2y 5m to grant • Granted Feb 10, 2026
Patent 12525329
PRECISION-BASED IMMUNO-MOLECULAR AUGMENTATION (PBIMA) COMPUTERIZED SYSTEM, METHOD, AND THERAPEUTIC VACCINE
2y 5m to grant • Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 47%
With Interview: 70% (+23.1%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 572 resolved cases by this examiner. Grant probability derived from career allow rate.
