Prosecution Insights
Last updated: April 19, 2026
Application No. 18/689,855

Selecting Clinical Trial Sites Based on Multiple Target Variables Using Machine Learning

Status: Non-Final OA (§101)
Filed: Mar 06, 2024
Examiner: MPAMUGO, CHINYERE
Art Unit: 3685
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Janssen Research & Development LLC
OA Round: 3 (Non-Final)
Grant Probability: 27% (At Risk)
Expected OA Rounds: 3-4
Estimated Time to Grant: 4y 0m
Grant Probability with Interview: 54%

Examiner Intelligence

Grants only 27% of cases
Career Allow Rate: 27% (88 granted / 328 resolved; -25.2% vs TC avg)
Interview Lift: +27.2% (27% grant rate without an interview vs 54% with, among resolved cases with an interview)
Typical timeline: 4y 0m average prosecution; 42 applications currently pending
Career history: 370 total applications across all art units
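The headline probabilities above follow from simple arithmetic on the examiner's career counts. A minimal sketch (values taken from the figures above; which cases count as "resolved" is the dashboard's own definition, not derived here):

```python
# Career allow rate: granted cases divided by resolved cases
granted, resolved = 88, 328
allow_rate = granted / resolved      # 0.2683..., displayed as 27%

# Interview lift: grant rate with an interview minus the baseline rate
with_interview = 0.54                # reported rate for interviewed cases
lift = with_interview - allow_rate   # 0.2717..., displayed as +27.2%

print(f"{allow_rate:.1%} career allow rate, {lift:+.1%} interview lift")
```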

Statute-Specific Performance

§101: 43.0% (+3.0% vs TC avg)
§103: 33.8% (-6.2% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 328 resolved cases
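Each statute's delta is stated relative to a Tech Center average, so subtracting the delta from the rate recovers the implied baseline; notably, all four statutes imply the same ~40% figure. A quick sanity check, assuming rate = TC average + delta:

```python
# (statute, examiner rate %, delta vs Tech Center average %)
rows = [("§101", 43.0, +3.0), ("§103", 33.8, -6.2),
        ("§102", 13.9, -26.1), ("§112", 7.4, -32.6)]

# If rate = tc_avg + delta, then tc_avg = rate - delta for every row
implied_tc_avg = {statute: round(rate - delta, 1) for statute, rate, delta in rows}
print(implied_tc_avg)  # every statute implies the same 40.0% baseline
```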

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 5, 2026 has been entered.

Status of Claims

In the response filed March 5, 2026, Applicant amended claims 1, 4, 9, 10, 19, and 20. Claims 1-20 are pending in the current application.

Response to Arguments

Applicant's arguments with respect to the rejection under 35 U.S.C. 101 have been fully considered but they are not persuasive. Applicant asserts that the rejections should be withdrawn in view of Desjardins because the claims improve predictive power in a machine learning processor, in addition to other related technical improvements such as efficiently managing large datasets from disparate data sources and employing various techniques to reduce computational complexity. The Examiner respectfully disagrees. In Desjardins, while analyzing under Step 2A, Prong Two, the ARP determined that the specification identified improvements as to how the machine learning model itself operates, including training a machine learning model to learn new tasks while protecting knowledge about previous tasks to overcome the problem of “catastrophic forgetting” encountered in continual learning systems.
Importantly, the ARP evaluated the claims as a whole in discerning at least the limitation “adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task.” That is, the claimed invention of Desjardins trained a machine learning model to learn new tasks, overcoming the problem encountered in conventional continual learning systems.

Unlike Desjardins, the present claims encompass a user obtaining trial protocols, generating predicted site enrollment and likelihood data via calculations, and ranking and selecting clinical trial sites. Moreover, the claims recite conventional machine learning models without specific improvements to the technology itself. In Recentive Analytics, the court noted that "iterative training," a claimed feature, was inherent to all machine learning models and thus did not confer eligibility. Additionally, applying machine learning to event scheduling, an activity predating computers, did not transform the abstract idea into a patent-eligible invention. In this case, simply applying generic machine learning techniques (i.e., regression and classification-based models) to selecting clinical trial sites without improving the underlying technology is insufficient for patent eligibility. The analysis under Step 2B follows the same reasoning as to why the claims are not a specific improvement to technology. The rejection is maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claims are not directed to patent eligible subject matter.
Claims 1-20 do fall within at least one of the four categories of patent eligible subject matter because the claims recite a machine (i.e., a non-transitory computer-readable storage medium and a system) and a process (i.e., a method). Although claims 1-20 fall under at least one of the four statutory categories, it must be determined whether each claim wholly embraces a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or is instead a particular practical application of a judicial exception (see MPEP 2106 I and II). Claims 1-20 are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without significantly more.

Part I: Step 2A, Prong One: Identify the Abstract Idea

Under Step 2A, Prong One of the Alice framework, the claims are analyzed to determine if the claims are directed to a judicial exception. MPEP §2106.04(a). The determination consists of (a) identifying the specific limitations in the claim that recite an abstract idea; and (b) determining whether the identified limitations fall within at least one of the three subject matter groupings of abstract ideas (i.e., mathematical concepts, mental processes, and certain methods of organizing human activity).
The identified limitations of independent claim 19 (representative of independent claims 1 and 20) recite:

obtaining, by a data processing module, input data comprising data of an upcoming trial protocol that is sourced over a network from and merged from multiple online databases into a multi-dimensional data structure; applying a computer-implemented feature extraction algorithm to the multi-dimensional data to extract a set of selected features including one or more statistical measures representing the input data, wherein the selected features are determined by performing feature engineering on historical clinical trial data; for each of the one or more clinical trial sites: generating a predicted site enrollment for the clinical trial site by applying a regression-based machine learning model to the selected features of the input data, and generating a predicted site default likelihood for the clinical trial site by applying a classification-based machine learning model to the selected features of the input data, wherein the second machine learning model is independently trained relative to the first machine learning model; applying, for each of the one or more clinical trial sites, multiple iterations of a computer-implemented stochastic simulation to the predicted site enrollment generated from the regression-based machine learning model and the predicted site default likelihood generated by the classification-based machine learning model to generate a plurality of quantitative values informative of enrollment timeline predictions; ranking the one or more clinical trial sites according to the predicted site enrollment and the predicted site default likelihood for the one or more clinical trial sites; selecting top-ranked clinical trial sites, wherein each of the selected clinical trial sites has a predicted site enrollment above a first threshold value and a predicted site default likelihood below a second threshold value; and rendering a computer-implemented visualization for a display device of a computer system depicting at least one of the predicted site enrollment, the predicted default likelihood for each of the one or more clinical trial sites, the top-ranked clinical trial sites, and the plurality of quantitative values informative of the enrollment timeline predictions.

The identified limitations, under their broadest reasonable interpretation, cover performance of the limitations in the mind (including observation, evaluation, judgment or opinion) but for the recitation of generic computer components. That is, other than reciting a processor, machine learning model (interpreted as computer), and computer, nothing in the claim elements precludes the steps from practically being performed in the mind. For example, the identified limitations encompass a user obtaining trial protocols, generating predicted site enrollment and likelihood data via calculations, and ranking and selecting clinical trial sites. The claim limitations fall within the Mental Processes grouping of abstract ideas. Thus, the claimed invention recites a judicial exception.

Part I: Step 2A, Prong Two: Additional Elements that Integrate the Judicial Exception into a Practical Application

Under Step 2A, Prong Two of the Alice framework, the claims are analyzed to determine whether the claims recite additional elements that integrate the judicial exception into a practical application. In particular, the claims are evaluated to determine if there are additional elements or a combination of elements that apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claims are more than a drafting effort designed to monopolize the judicial exception. This judicial exception is not integrated into a practical application.
As a whole, the processor, machine learning model (interpreted as computer), and computer in the steps are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Dependent claims 2-18, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. Since these claims are directed to an abstract idea, the Office must determine whether the remaining limitations “do significantly more” than describe the abstract idea.

Part II: Determine Whether Any Element, or Combination, Amounts to “Significantly More” than the Abstract Idea Itself

Under Part II, the steps of the claims, when considered individually and as an ordered combination, do not improve another technology or technical field, do not improve the functioning of the computer itself, and are not enough to qualify as "significantly more". For example, the steps require no more than a conventional computer to perform generic computer functions. As stated above, the processor, machine learning model (interpreted as computer), and computer in the steps are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Therefore, based on the two-part Mayo analysis, there are no meaningful limitations in the claim that transform the exception into a patent eligible application such that the claim amounts to significantly more than the exception itself. Claims 1-20, when considered individually and as an ordered combination, are rejected as ineligible subject matter under 35 U.S.C. 101.
Dependent claims 2-18, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional claims do not recite significantly more than an abstract idea.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Bhattacharya et al. (US 2021/0241861 A1), Patient Recruitment Platform
Bhattacharya et al. (US 2021/0319158 A1), Methods and System for Reducing Computational Complexity of Clinical Trial Design Simulations
Bhattacharya et al. (US 2021/0241865 A1), Trial Design Benchmarking Platform
Bhattacharya et al. (US 2021/0241866 A1), Interactive Trial Design Platform

The aforementioned references disclose “obtaining input data comprising data of an upcoming trial protocol” (Paragraph [0252] of PGPubs ‘865 and ‘861: a method for evaluating a design may include obtaining a criteria for a trial design study 902; the criteria may be obtained from the user or from other parts of the platform based on a user input and/or historical data) and a clinical trial site selection process (Paragraph [0548] of PGPubs ‘865 and ‘861: FIG. 104 shows an embodiment of a platform/system for evaluation and comparison of site selections for a clinical trial), as well as using a Monte Carlo simulation as machine learning (Paragraph [0558] of PGPubs ‘865 and ‘861: evaluating a site selection may include using a Monte Carlo approach to simulate a site selection for different values according to the deviation specifications and using statistical methods to determine the performance of the site selection from a simulation run).
However, the aforementioned references do not explicitly teach “for each of the one or more clinical trial sites: generating a predicted site enrollment and a predicted site default likelihood for the clinical trial site by applying one or more machine learning models to selected features of the input data; ranking the one or more clinical trial sites according to the predicted site enrollment and the predicted site default likelihood for the one or more clinical trial sites; and selecting top-ranked clinical trial sites, wherein each of the selected clinical trial sites has a predicted site enrollment above a first threshold value and a predicted site default likelihood below a second threshold value, wherein the selected features are previously determined by performing feature engineering on historical clinical trial data” as recited in claims 1, 19, and 20, because the machine learning models of Bhattacharya are not used to generate a predicted site enrollment and a predicted site default likelihood, there is no ranking of clinical trial sites, and selection of clinical trial sites is not based on a predicted site enrollment above a first threshold value and a predicted site default likelihood below a second threshold value.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHINYERE MPAMUGO, whose telephone number is (571) 272-8853. The examiner can normally be reached Monday-Friday, 9am-5pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kambiz Abdi, can be reached at (571) 272-6702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/CHINYERE MPAMUGO/
Primary Examiner, Art Unit 3685
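For orientation, the claim flow the Office Action recites (a regression model for enrollment, an independently trained classification model for default likelihood, repeated stochastic simulation over both outputs, then threshold-gated ranking) can be sketched as below. This is a hypothetical illustration of the claimed steps only, not the applicant's actual system: the predictor callables, the 1200 patient-months constant, the jitter range, and both threshold values are invented placeholders.

```python
import random

def select_sites(sites, predict_enrollment, predict_default_prob,
                 enroll_min=30.0, default_max=0.2, n_sims=1000, seed=0):
    """Rank and select trial sites per the claimed flow: regression output,
    classification output, Monte Carlo timeline estimates, then thresholds."""
    rng = random.Random(seed)
    results = []
    for site in sites:
        enroll = predict_enrollment(site)        # regression-based model output
        p_default = predict_default_prob(site)   # classification-based model output
        # Stochastic simulation: sample enrollment timelines given both outputs
        timelines = []
        for _ in range(n_sims):
            if rng.random() < p_default:
                timelines.append(float("inf"))   # site defaults, never completes
            else:
                # months to target, jittered around the point prediction
                timelines.append(1200.0 / max(enroll * rng.uniform(0.8, 1.2), 1e-9))
        results.append({"site": site, "enrollment": enroll,
                        "default_prob": p_default,
                        "median_months": sorted(timelines)[n_sims // 2]})
    # Rank by predicted enrollment (descending), then default likelihood (ascending)
    results.sort(key=lambda r: (-r["enrollment"], r["default_prob"]))
    # Keep top-ranked sites passing both claimed thresholds
    return [r for r in results
            if r["enrollment"] > enroll_min and r["default_prob"] < default_max]
```

A site is selected only if it clears both gates, which mirrors the claim language of an enrollment above a first threshold and a default likelihood below a second threshold.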

Prosecution Timeline

Mar 06, 2024
Application Filed
May 31, 2025
Non-Final Rejection — §101
Sep 03, 2025
Response Filed
Nov 01, 2025
Final Rejection — §101
Mar 05, 2026
Request for Continued Examination
Mar 26, 2026
Response after Non-Final Action
Mar 27, 2026
Non-Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586024: DIGITAL TWIN BASED SYSTEMS AND METHODS FOR BUSINESS CONTINUITY PLAN AND SAFE RETURN TO WORKPLACE (2y 5m to grant; granted Mar 24, 2026)
Patent 12579550: METHOD AND SYSTEM FOR EMERGENT DATA PROCESSING (2y 5m to grant; granted Mar 17, 2026)
Patent 12562241: SYSTEM AND METHOD FOR DETECTING ISSUES IN CLINICAL STUDY SITE AND SUBJECT COMPLIANCE (2y 5m to grant; granted Feb 24, 2026)
Patent 12537073: GENETIC MODEL VALIDATION METHODS (2y 5m to grant; granted Jan 27, 2026)
Patent 12537081: INTERVERTEBRAL CAGE WITH INTEGRATED TRANSMITTER (2y 5m to grant; granted Jan 27, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 27%
With Interview: 54% (+27.2%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 328 resolved cases by this examiner. Grant probability derived from career allow rate.
