Prosecution Insights
Last updated: April 19, 2026
Application No. 16/889,779

SYSTEMS AND METHODS OF CLINICAL TRIAL EVALUATION

Final Rejection §101

Filed: Jun 01, 2020
Examiner: COBANOGLU, DILEK B
Art Unit: 3687
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Tempus AI Inc.
OA Round: 8 (Final)

Grant Probability: 33% (At Risk)
Expected OA Rounds: 9-10
Time to Grant: 4y 9m
Grant Probability with Interview: 61%

Examiner Intelligence

Career Allow Rate: 33% (163 granted / 492 resolved; -18.9% vs TC avg)
Interview Lift: +27.9% among resolved cases with interview (strong lift)
Typical Timeline: 4y 9m average prosecution
Career History: 549 total applications across all art units; 57 currently pending
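As a quick sanity check, the headline figures above can be reproduced from the raw counts. The sketch below assumes the "With Interview" figure is simply the career allow rate plus the additive interview lift; that derivation is our assumption, not a stated methodology of this page.

```python
# Reproduce the displayed examiner statistics from the raw counts above.
# Assumption (ours): "With Interview" = career allow rate + additive lift.
granted, resolved = 163, 492
allow_rate = granted / resolved * 100
print(round(allow_rate, 1))                 # 33.1 -> shown as "33%"
interview_lift = 27.9
print(round(allow_rate + interview_lift))   # 61  -> shown as "61% With Interview"
```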

Statute-Specific Performance

§101: 35.3% (-4.7% vs TC avg)
§103: 27.2% (-12.8% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 13.6% (-26.4% vs TC avg)

Allow rate by statute; Tech Center averages are estimates. Based on career data from 492 resolved cases.

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to the amendment received on 12/11/2025. Claims 1-3, 5-9, 11-12, 25-29 and 33-34 remain pending in this application.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-9, 11-12, 25-29 and 33-34 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-3, 5-6 and 33-34 are drawn to a method, which is within the four statutory categories (i.e., process). Claims 7-12 are drawn to a system, which is within the four statutory categories (i.e., machine). Claims 25-29 are drawn to a non-transitory medium, which is within the four statutory categories (i.e., manufacture).
Step 2A, Prong 1: Claims 1, 7 and 25 have been amended to recite: “receiving, via one or more processors, unstructured text-based criteria for the clinical trial, including inclusion criteria and exclusion criteria, at least one of the inclusion or exclusion criteria, from a plurality of clinical trial sources, including a molecular marker; aggregating, via the one or more processors, the unstructured text-based criteria for the clinical trial from the plurality of clinical trial sources; transforming, via the one or more processors, the unstructured text-based criteria for the clinical trial into structured text-based criteria by applying natural language processing to structure the unstructured text-based criteria; generating, via the one or more processors, predictive text by applying the natural language processing to at least a portion of the structured text-based criteria to populate one or more pre-defined data fields relevant to the inclusion or exclusion criteria of the clinical trial with at least one of the pre-defined data fields containing molecular marker information; retrieving, via the one or more processors, stored individual feature data for the individual from a data store in a common, structured format, the individual feature data derived from a plurality of different sources, each source having its own schema for structuring features within the source, the individual feature data further including a molecular marker of the individual; determining, via the one or more processors, whether the individual is matched to the clinical trial, including: training a plurality of machine learning models on a training data set comprising patient information, clinical trial information including inclusion criteria and exclusion criteria, and line-by-line classification results indicating, for each inclusion criterion and each exclusion criterion in the training data set, whether that criterion was met, wherein each machine learning model of the
plurality of machine learning models is trained to evaluate a respective inclusion criterion or exclusion criterion by receiving at least one feature from the patient information and outputting an indication of whether that criterion is met, applying, via a data-criteria concept matching module comprising the plurality of trained machine learning models, the plurality of trained machine learning models to the stored individual feature data and the populated pre-defined data fields to generate, for each criterion of the clinical trial, a classification indicating whether that criterion is met, classifying each criterion as either a static criterion or a temporal criterion based on the classification of the criterion, determining that all static criteria are met, and after determining that all static criteria are met, determining whether at least one temporal criterion is met; and generating, via the one or more processors, a report for a provider, the report indicating whether the individual is matched to the clinical trial”. 
The limitations of “aggregating, via the one or more processors, the unstructured text-based criteria for the clinical trial from the plurality of clinical trial sources; transforming, via the one or more processors, the unstructured text-based criteria for the clinical trial into structured text-based criteria by applying natural language processing to structure the unstructured text-based criteria; generating, via the one or more processors, predictive text by applying the natural language processing to at least a portion of the structured text-based criteria to populate one or more pre-defined data fields relevant to the inclusion or exclusion criteria of the clinical trial with at least one of the pre-defined data fields containing molecular marker information” and “determining, via the one or more processors, whether the individual is matched to the clinical trial, including: training a plurality of machine learning models on a training data set comprising patient information, clinical trial information including inclusion criteria and exclusion criteria, and line-by-line classification results indicating, for each inclusion criterion and each exclusion criterion in the training data set, whether that criterion was met, wherein each machine learning model of the plurality of machine learning models is trained to evaluate a respective inclusion criterion or exclusion criterion by receiving at least one feature from the patient information and outputting an indication of whether that criterion is met, applying, via a data-criteria concept matching module comprising the plurality of trained machine learning models, the plurality of trained machine learning models to the stored individual feature data and the populated pre-defined data fields to generate, for each criterion of the clinical trial, a classification indicating whether that criterion is met, classifying each criterion as either a static criterion or a temporal criterion based on the classification of 
the criterion, determining that all static criteria are met, and after determining that all static criteria are met, determining whether at least one temporal criterion is met” correspond to an abstract idea of mathematical concepts, since “applying natural language processing”, “applying a data criteria matching module” and “training a plurality of machine learning models…and line-by-line classification” are mathematical calculations. The steps of “determining…whether the individual is matched to the clinical trial…” also correspond to an abstract idea of “a mental process”, which may be practically performed in the human mind using observation, evaluation, judgment and opinion (e.g., a user manually performs the evaluation in the mind or using pen and paper). These limitations are performed by “one or more processors”, which are recited at a high level of generality and amount to no more than mere instructions to apply the exception using a generic processor (computer). The claims recite and the current specification describes the computer and the processor as generic devices (“…such as a laptop computer, a tablet, a smart phone, etc…” in [00119]). After considering all claim elements, both individually and in combination and as an ordered combination, it has been determined that the claims do not amount to significantly more than the abstract idea itself.
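The matching workflow recited in the amended claims — one trained model per criterion, with static criteria evaluated before any temporal criterion — can be sketched as follows. This is a minimal illustration: the function names, the dictionary-based criteria, and the lambda stand-ins for trained models are our own assumptions, not the applicant's actual system.

```python
# Hypothetical sketch of the claimed matching workflow: per-criterion
# "models", static criteria checked first, then temporal criteria.

def match_individual(features, criteria):
    """Return True if every static criterion is met and, only after that,
    at least one temporal criterion is met (the ordering in the claims)."""
    static = [c for c in criteria if c["kind"] == "static"]
    temporal = [c for c in criteria if c["kind"] == "temporal"]
    # Step 1: all static criteria must be met before anything else.
    if not all(c["model"](features) for c in static):
        return False
    # Step 2: only then evaluate whether at least one temporal criterion is met.
    return any(c["model"](features) for c in temporal) if temporal else True

# Rule-based stand-ins for the per-criterion trained models (illustrative).
criteria = [
    {"kind": "static", "model": lambda f: f["age"] >= 18},
    {"kind": "static", "model": lambda f: "EGFR" in f["molecular_markers"]},
    {"kind": "temporal", "model": lambda f: f["days_since_treatment"] > 30},
]
patient = {"age": 54, "molecular_markers": {"EGFR"}, "days_since_treatment": 45}
print(match_individual(patient, criteria))  # True
```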
Dependent claims also correspond to an abstract idea of a mental process. For example, claims 5, 11 and 28 recite “determining that the individual has not received a treatment related to the molecular marker of the individual; and determining that the individual is eligible for at least one candidate clinical trial in response to determining that the individual has not received the treatment”, and newly added claims 33 and 34 recite “determining that the at least one temporal criterion is met”. These features, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of generic computer components; hence they fall within the “Mental Processes” grouping of abstract ideas. Dependent claims 2-3, 5-6, 8-9, 11-12, 26-29 and 33-34 ultimately depend from claims 1, 7 and 25 and include all the limitations of claims 1, 7 and 25. Therefore, claims 2-3, 5-6, 8-9, 11-12, 26-29 and 33-34 recite the same abstract idea. Claims 2-3, 5-6, 8-9, 11-12, 26-29 and 33-34 describe a further limitation regarding the basis for determining a patient’s eligibility for a clinical trial. These all further describe the abstract idea recited in claims 1, 7 and 25 without adding significantly more.

Step 2A, Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the additional elements of “one or more processors”, “at least one memory” and using the one or more processors and at least one memory to perform the “generating”, “determining”, “identifying” and “populating” steps.
These additional elements correspond to hardware or software elements. These limitations are not enough to qualify as a “practical application” when recited in the claims along with the abstract idea, since these elements are merely invoked as a tool to apply the instructions of the abstract idea in a particular technological environment, and mere instructions to apply/implement/automate an abstract idea in a particular technological environment, and merely limiting the use of an abstract idea to a particular field or technological environment, do not provide a practical application for an abstract idea (MPEP 2106.05(f) & (h)). In particular, the steps of “aggregating”, “generating” and “determining” are recited as being performed by a computer, which is recited at a high level of generality. These steps amount to no more than mere instructions to apply the exception using a generic computer.

Claims 1, 7 and 25 have been amended to recite: “determining, via the one or more processors, whether the individual is matched to the clinical trial, including: training a plurality of machine learning models on a training data set comprising patient information, clinical trial information including inclusion criteria and exclusion criteria, and line-by-line classification results indicating, for each inclusion criterion and each exclusion criterion in the training data set, whether that criterion was met, wherein each machine learning model of the plurality of machine learning models is trained to evaluate a respective inclusion criterion or exclusion criterion by receiving at least one feature from the patient information and outputting an indication of whether that criterion is met, applying, via a data-criteria concept matching module comprising the plurality of trained machine learning models, the plurality of trained machine learning models to the stored individual feature data and the populated pre-defined data fields to generate, for each criterion of the clinical trial, a classification indicating whether that criterion is met, classifying each criterion as either a static criterion or a temporal criterion based on the classification of the criterion, determining that all static criteria are met, and after determining that all static criteria are met, determining whether at least one temporal criterion is met;”.

The trained model is described in the current specification in [0266]: “In some embodiments, the AI classification system 3280 can include at least one trained model that can receive inclusion criteria and/or exclusion criteria in the inclusion and exclusion criteria module 3272 and features in the patient data store 3202, and output at least one indication of whether or not at least one criteria is met or not met. In some embodiments, the trained model can be a neural network or other appropriate machine learning model trained on a training data set. For a data-criteria concept mapping classifier, an exemplary training data set may include patient information (e.g., features that may be included in the patient data store 3202), clinical trial information including inclusion and exclusion criteria (e.g., criteria that may be included in the inclusion and exclusion criteria module 3272), and resulting line-by-line classification results for whether the inclusion or exclusion criteria were met (e.g., ground truths).” Therefore, the “applying, via a data-criteria concept matching module, a plurality of trained machine learning models” limitation amounts to no more than mere instructions to apply the exception using a generic computer. The claims recite “generating, via one or more processors, predictive text by applying natural language processing to at least a portion of the text-based criteria to populate one or more pre-defined data fields…”, and this limitation corresponds to mere instructions to apply an exception and does not integrate a judicial exception into a practical application (MPEP 2106.05(f)).
The claims also recite other additional limitations beyond the abstract idea: functions such as “receiving/retrieving data from/to a database/memory” and “generating a report” are insignificant extra-solution activities (see MPEP 2106.05(g)), which do not provide a practical application for the abstract idea. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.

Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using a processor to perform the “applying natural language processing to the unstructured text-based criteria…”, “applying, via a data-criteria concept matching module, training a plurality of trained machine learning models, each trained machine learning model associated with a respective inclusion or exclusion criterion of the clinical trial,…” and “training line-by-line classification results…” steps amounts to no more than mere instructions to apply the exception using a generic computer component.
In particular, the newly added feature of “training a plurality of machine learning models on a training data set comprising patient information, clinical trial information including inclusion criteria and exclusion criteria, and line-by-line classification results indicating, for each inclusion criterion and each exclusion criterion in the training data set, whether that criterion was met, wherein each machine learning model of the plurality of machine learning models is trained to evaluate a respective inclusion criterion or exclusion criterion by receiving at least one feature from the patient information and outputting an indication of whether that criterion is met”, or using a line-by-line classifier to determine a score and then using another machine learning technique for determining patient eligibility for a clinical trial, is found to be a well-understood, routine and conventional activity, as evidenced by the article “Automated classification of eligibility criteria in clinical trials to facilitate patient-trial matching for specific patient populations” by Kevin Zhang and Dina Demner-Fushman (hereinafter Zhang), published in the Journal of the American Medical Informatics Association in 2017.
Zhang discloses “To develop automated classification methods for eligibility criteria in ClinicalTrials.gov to facilitate patient-trial matching…” in the abstract; “Using the patterns we observed during annotation, we formulated rules and regular expressions to capture the boundaries between inclusion and exclusion criteria as well as recognize specific phrases and clauses in the study title, conditions studied, and eligibility criteria… These 2 components were used to build a line-by-line classifier that assigned a total score for each study record…” on pages 782-783, under the subtitle “Rule-based classification using regular expressions”; and “Next, we constructed a natural language processing pipeline to train and test a classifier using supervised ML techniques to see if we could improve upon our baseline regex-based method. For each study, we extracted the eligibility criteria and performed some light automated preprocessing for text cleanup, including adding additional line breaks where necessary and subsequently removing punctuation characters…” on page 783, under the subtitle “Classification using machine learning”. Hence, the claim limitations correspond to mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible. Therefore, claims 1-3, 5-9, 11-12, 25-29 and 33-34 remain rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.

Response to Arguments

Applicant's arguments filed 12/11/2025 have been fully considered but they are not persuasive. Applicant’s arguments will be addressed below in the order in which they appear.
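For context on the kind of line-by-line, boundary-detecting classification the Examiner cites from Zhang, a rule-based pass of that flavor can be sketched as follows. The regex and labels are our own illustration of the general technique, not Zhang's published rules.

```python
import re

# Illustrative sketch (not Zhang's actual rules): a regex pass that finds
# the boundary between inclusion and exclusion sections and labels each
# criterion line accordingly.
SECTION_RE = re.compile(r"^\s*(inclusion|exclusion)\s+criteria\s*:?\s*$", re.I)

def label_lines(eligibility_text):
    """Label each non-header, non-blank line as 'inclusion' or 'exclusion'."""
    section, labeled = None, []
    for line in eligibility_text.splitlines():
        m = SECTION_RE.match(line)
        if m:
            section = m.group(1).lower()  # switch section at the boundary
        elif line.strip() and section:
            labeled.append((section, line.strip()))
    return labeled

text = """Inclusion Criteria:
- Age >= 18 years
Exclusion Criteria:
- Prior EGFR-targeted therapy"""
print(label_lines(text))
# [('inclusion', '- Age >= 18 years'), ('exclusion', '- Prior EGFR-targeted therapy')]
```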
Argument 1: Applicant argues that the claims integrate the asserted abstract idea into a practical application by providing an improvement to machine learning training technology: the line-by-line classification results indicate, for each inclusion criterion and each exclusion criterion in the training data set, whether that criterion was met, and this training methodology improves upon conventional approaches. In response, the Examiner submits that, as indicated in the rejection above, the newly added feature of “training a plurality of machine learning models on a training data set comprising patient information, clinical trial information including inclusion criteria and exclusion criteria, and line-by-line classification results indicating, for each inclusion criterion and each exclusion criterion in the training data set, whether that criterion was met, wherein each machine learning model of the plurality of machine learning models is trained to evaluate a respective inclusion criterion or exclusion criterion by receiving at least one feature from the patient information and outputting an indication of whether that criterion is met”, or using a line-by-line classifier to determine a score and then using another machine learning technique for determining patient eligibility for a clinical trial, is found to be a well-understood, routine and conventional activity, as evidenced by the article “Automated classification of eligibility criteria in clinical trials to facilitate patient-trial matching for specific patient populations” by Kevin Zhang and Dina Demner-Fushman (hereinafter Zhang), published in the Journal of the American Medical Informatics Association in 2017.
Zhang discloses “To develop automated classification methods for eligibility criteria in ClinicalTrials.gov to facilitate patient-trial matching…” in the abstract; “Using the patterns we observed during annotation, we formulated rules and regular expressions to capture the boundaries between inclusion and exclusion criteria as well as recognize specific phrases and clauses in the study title, conditions studied, and eligibility criteria… These 2 components were used to build a line-by-line classifier that assigned a total score for each study record…” on pages 782-783, under the subtitle “Rule-based classification using regular expressions”; and “Next, we constructed a natural language processing pipeline to train and test a classifier using supervised ML techniques to see if we could improve upon our baseline regex-based method. For each study, we extracted the eligibility criteria and performed some light automated preprocessing for text cleanup, including adding additional line breaks where necessary and subsequently removing punctuation characters…” on page 783, under the subtitle “Classification using machine learning”. Hence, the claim limitations correspond to mere instructions to apply the exception using a generic computer component, and they do not provide an improvement to the technology.

Argument 2: Applicant argues that the present claims improve how machine learning models operate, similar to the claims of Desjardins, by using line-by-line classification results to train per-criterion models. In response, the Examiner submits that, as indicated in the section above, this feature is a well-understood, routine and conventional activity in the field, as evidenced by the Zhang article. Applicant further argues that the claimed per-criterion architecture is not conventional because the data-criteria concept matching module includes a number of trained models, each trained model being associated with a specific inclusion criterion or exclusion criterion.

In response, the Examiner submits that the current specification recites that data-criteria concept mapping includes “classification codes”, “AI classification” and “dictionary (concept-map) classification” (Fig. 32, item 3274 and [0243]-[0244]). These mapping items are described as follows: “the classification code system 3276 can assign one or more predetermined classification codes to each feature in the patient data store 3202 and/or the corresponding inclusion/exclusion criteria in the inclusion and exclusion criteria module 3272” (patient information) in [0245]; “the flow 300 can include assigning each feature in the patient data store 3202 to appropriate corresponding inclusion/exclusion criteria in the inclusion and exclusion criteria module 3272 using the dictionary based classification system 3278.” (clinical/oncological dictionary) in [0259]; and “…the AI classification system 3280 can include at least one trained model that can receive inclusion criteria and/or exclusion criteria in the inclusion and exclusion criteria module 3272 and features in the patient data store 3202, and output at least one indication of whether or not at least one criteria is met or not met.” in [0266]. Therefore, the system classifies patient information and dictionary information and makes a determination on whether the criteria are met. Combining these classifications is not unconventional, since it is known in the field that patient information and rules/dictionary information should be included for determining whether the patient is eligible for a certain clinical trial. Therefore, the arguments are not persuasive and the claims are rejected under 35 U.S.C. §101 as being directed to non-statutory subject matter.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
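As background on the three-way mapping described in the cited specification passages (classification codes, a dictionary/concept map, and a trained AI classifier), a toy sketch of combining such subsystems might look like the following. Every name, the synonym table, and the simple OR-combination are our own assumptions for illustration, not the application's actual logic.

```python
# Hypothetical sketch of three-way data-criteria concept matching.
SYNONYMS = {"heart attack": "myocardial infarction"}  # toy concept map

def code_match(feature, criterion):
    # Classification-code system: compare assigned codes directly.
    return feature.get("code") in criterion.get("codes", set())

def dictionary_match(feature, criterion):
    # Dictionary-based system: normalize the term, then compare concepts.
    term = SYNONYMS.get(feature["term"], feature["term"])
    return term == criterion["concept"]

def model_match(feature, criterion):
    # Stand-in for the trained classifier's met/not-met indication.
    return criterion["concept"] in feature["term"]

def criterion_met(feature, criterion):
    # Any of the three subsystems mapping the feature to the criterion
    # counts as a match in this simplified sketch.
    return (code_match(feature, criterion)
            or dictionary_match(feature, criterion)
            or model_match(feature, criterion))

feature = {"term": "heart attack", "code": "I21"}
criterion = {"concept": "myocardial infarction", "codes": {"I21"}}
print(criterion_met(feature, criterion))  # True
```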
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DILEK B COBANOGLU whose telephone number is (571)272-8295. The examiner can normally be reached 8:30-5:00 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Obeid Mamon, can be reached at (571) 270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DILEK B COBANOGLU/
Primary Examiner, Art Unit 3687

Prosecution Timeline

Jun 01, 2020
Application Filed
Aug 29, 2022
Non-Final Rejection — §101
Feb 08, 2023
Response Filed
May 03, 2023
Final Rejection — §101
Jun 01, 2023
Interview Requested
Jun 21, 2023
Examiner Interview Summary
Jun 21, 2023
Applicant Interview (Telephonic)
Aug 08, 2023
Request for Continued Examination
Aug 09, 2023
Response after Non-Final Action
Nov 01, 2023
Non-Final Rejection — §101
Feb 06, 2024
Response Filed
Feb 09, 2024
Interview Requested
Feb 20, 2024
Applicant Interview (Telephonic)
Feb 20, 2024
Examiner Interview Summary
Apr 25, 2024
Final Rejection — §101
Jun 20, 2024
Applicant Interview (Telephonic)
Jun 20, 2024
Examiner Interview Summary
Jul 30, 2024
Request for Continued Examination
Jul 31, 2024
Response after Non-Final Action
Aug 09, 2024
Non-Final Rejection — §101
Oct 24, 2024
Interview Requested
Oct 31, 2024
Examiner Interview Summary
Oct 31, 2024
Applicant Interview (Telephonic)
Feb 05, 2025
Response Filed
May 07, 2025
Final Rejection — §101
Jul 23, 2025
Applicant Interview (Telephonic)
Jul 23, 2025
Examiner Interview Summary
Jul 31, 2025
Request for Continued Examination
Aug 01, 2025
Response after Non-Final Action
Sep 10, 2025
Non-Final Rejection — §101
Dec 02, 2025
Applicant Interview (Telephonic)
Dec 02, 2025
Examiner Interview Summary
Dec 11, 2025
Response Filed
Mar 11, 2026
Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574434
METHOD OF HUB COMMUNICATION, PROCESSING, DISPLAY, AND CLOUD ANALYTICS
2y 5m to grant Granted Mar 10, 2026
Patent 12500948
METHOD OF HUB COMMUNICATION, PROCESSING, DISPLAY, AND CLOUD ANALYTICS
2y 5m to grant Granted Dec 16, 2025
Patent 12482562
SYSTEMS AND METHODS FOR AND DISPLAYING PATIENT DATA
2y 5m to grant Granted Nov 25, 2025
Patent 12380972
DATA COMMAND CENTER VISUAL DISPLAY SYSTEM
2y 5m to grant Granted Aug 05, 2025
Patent 12334223
LEARNING APPARATUS, MENTAL STATE SEQUENCE PREDICTION APPARATUS, LEARNING METHOD, MENTAL STATE SEQUENCE PREDICTION METHOD AND PROGRAM
2y 5m to grant Granted Jun 17, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 9-10
Grant Probability: 33%
With Interview: 61% (+27.9%)
Median Time to Grant: 4y 9m
PTA Risk: High
Based on 492 resolved cases by this examiner. Grant probability derived from career allow rate.
