Prosecution Insights
Last updated: April 19, 2026
Application No. 18/526,197

Line of Therapy Identification from Clinical Documents

Status: Final Rejection (§101)
Filed: Dec 01, 2023
Examiner: SOREY, ROBERT A
Art Unit: 3682
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Bristol-Myers Squibb Company
OA Round: 2 (Final)

Grant Probability: 48% (Moderate)
OA Rounds: 3-4
To Grant: 4y 2m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 48% of resolved cases (220 granted / 456 resolved; -3.8% vs TC avg)
Interview Lift: +45.8% for resolved cases with an interview (strong)
Avg Prosecution: 4y 2m typical timeline; 25 applications currently pending
Total Applications: 481 across all art units (career history)

Statute-Specific Performance

§101: 30.9% (-9.1% vs TC avg)
§103: 35.8% (-4.2% vs TC avg)
§102: 8.4% (-31.6% vs TC avg)
§112: 20.4% (-19.6% vs TC avg)
TC avg = Tech Center average estimate • Based on career data from 456 resolved cases

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

In the amendment filed 12/01/2025 the following occurred: Claims 1 and 11 were amended. Claims 1-20 are presented for examination.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claims 1-20 are drawn to a method and a system, which are statutory categories of invention (Step 1: YES).

Independent claim 1 recites receiving input data comprising unstructured text representing one or more sequences of terms; for each respective sequence of terms: generating, using regular expression rules, a corresponding line of therapy (LoT) pseudo-label indicating whether the respective sequence of terms comprises LoT information, wherein the corresponding LoT pseudo-label comprises a binary classification that indicates a first ground-truth value when the respective sequence of terms comprises LoT information and indicates a second ground-truth value when the respective sequence of terms does not comprise LoT information; generating a corresponding LoT indicator predicting whether the respective sequence of terms comprises LoT information, wherein the corresponding LoT indicator comprises a binary classification that indicates a first value when the respective sequence of terms comprises LoT information and indicates a second value when the respective sequence of terms does not comprise LoT information; and determining a corresponding LoT indication loss based on the corresponding LoT pseudo-label and the corresponding LoT indicator; and fine-tuning based on the LoT indication losses determined for the one or more sequences of terms.

Independent claim 11 recites receiving input data comprising unstructured text representing one or more sequences of terms; for each respective sequence of terms: generating, using regular expression rules, a corresponding line of therapy (LoT) pseudo-label indicating whether the respective sequence of terms comprises LoT information, wherein the corresponding LoT pseudo-label comprises a binary classification that indicates a first ground-truth value when the respective sequence of terms comprises LoT information and indicates a second ground-truth value when the respective sequence of terms does not comprise LoT information; generating a corresponding LoT indicator predicting whether the respective sequence of terms comprises LoT information, wherein the corresponding LoT indicator comprises a binary classification that indicates a first value when the respective sequence of terms comprises LoT information and indicates a second value when the respective sequence of terms does not comprise LoT information; and determining a corresponding LoT indication loss based on the corresponding LoT pseudo-label and the corresponding LoT indicator; and fine-tuning based on the LoT indication losses determined for the one or more sequences of terms.
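
To make the claimed pseudo-labeling step concrete, the short Python sketch below applies hypothetical regular expression rules to a few example sentences and emits the binary pseudo-labels the claim describes. The rules and sentences are illustrative assumptions only, not the applicant's actual regex module.

```python
import re

# Hypothetical line-of-therapy (LoT) regular expression rules; the application's
# actual rules are not reproduced here.
LOT_RULES = [
    re.compile(r"\b(first|second|third|\d+(?:st|nd|rd|th))[- ]line\b", re.IGNORECASE),
    re.compile(r"\blines? of (?:therapy|treatment)\b", re.IGNORECASE),
]

def lot_pseudo_label(sequence_of_terms: str) -> int:
    """Return 1 (first ground-truth value) if any rule matches, else 0 (second ground-truth value)."""
    return int(any(rule.search(sequence_of_terms) for rule in LOT_RULES))

sequences = [
    "Patient progressed on first-line therapy with carboplatin.",
    "Vital signs were stable at the follow-up visit.",
]
print([lot_pseudo_label(s) for s in sequences])  # [1, 0]
```
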
The recited limitations, as drafted, under their broadest reasonable interpretation, cover certain methods of organizing human activity and mathematical concepts. As reflected by the specification, the present claims "relate to line of therapy identification from clinical documents" (see: specification paragraph 2). If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior or relationships or interactions between people, then it falls within the "Certain Methods of Organizing Human Activity" grouping of abstract ideas. The present claims cover certain methods of organizing human activity because they address a problem where "[c]linical trial documents include vast amounts of clinical information for different entities" and where the claimed solution for "[e]xtracting and classifying these entities correctly may facilitate the design process of clinical trials" provides a solution where the (line of therapy) "LoT information of prior treatments can be used to include, or exclude, patients and then the current treatment may be placed as the next LoT", and/or "doctors may choose a LoT for a patient based on his or her condition and provide treatment based on established guidelines pertinent to that LoT" (see: specification paragraph 24). The present invention further addresses a training process for "fine-tuning" a model such as "based on the LoT indication losses determined for the one or more sequences of terms" (see: specification paragraphs 4 and 56).

If a claim limitation, under its broadest reasonable interpretation, covers mathematical relationships, or mathematical formulas or equations, or mathematical calculations, then it falls within the "Mathematical Concepts" grouping of abstract ideas. The present claims cover mathematical concepts because they address a problem where a "model requires large amounts of labeled training data…labeling large amounts of data is time consuming and expensive as it requires manual annotation by subject matter experts" (see: specification paragraph 40). The present claims address this problem with a "fine-tuning stage [] of the training process [which] utilizes weakly annotated data to train (e.g., semi-supervised training data) to train the BioBert model" (see: specification paragraph 40), where the fine-tuning process is mathematical because the "fine-tuning stage [] fine-tunes the BioBert model [] (e.g., updates parameters of the BioBert model []) based on the LoT indication losses [] determined for the one or more sequences of terms…to detect whether sequences of terms [] include LoT information or not" (see: specification paragraphs 37 and 39).

Accordingly, the claims recite an abstract idea(s) (Step 2A Prong One: YES). This judicial exception is not integrated into a practical application.
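
The "LoT indication loss" and "fine-tuning" limitations characterized above as mathematical correspond to an ordinary supervised training step. The PyTorch sketch below is a generic illustration only: it assumes a binary cross-entropy loss (the applicant's remarks quoted later name cross-entropy as an example) and uses a stand-in linear classifier head in place of the pre-trained transformer.

```python
import torch
import torch.nn as nn

# Stand-in classifier head producing one LoT-indicator logit per sequence; in the
# claimed system this head would sit on top of the pre-trained transformer model.
classifier = nn.Linear(768, 1)
optimizer = torch.optim.AdamW(classifier.parameters(), lr=2e-5)
loss_fn = nn.BCEWithLogitsLoss()

embeddings = torch.randn(4, 768)                    # pooled embeddings, one per sequence of terms
pseudo_labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # regex-derived first/second ground-truth values

logits = classifier(embeddings).squeeze(-1)  # predicted LoT indicators (pre-sigmoid)
loss = loss_fn(logits, pseudo_labels)        # LoT indication loss
loss.backward()                              # backpropagate the loss
optimizer.step()                             # update (fine-tune) the parameters
optimizer.zero_grad()
```
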
The claims are abstract but for the inclusion of the additional elements including a "computer-implemented…executed on data processing hardware causes the data processing hardware to perform operations comprising…using a pre-trained transformer model…the pre-trained transformer model…" (claim 1), "using the pre-trained transformer model…the pre-trained transformer model determines…the pre-trained transformer model determines…the pre-trained transformer model…" (claims 2 and 12), "the pre-trained transformer model comprises a Bidirectional Encoder from Transformers for Biomedical Text Mining (BioBERT) model…" (claims 6 and 16), "the pre-trained BioBERT model comprises a stack of multi-headed self-attention layers" (claims 7 and 17), "the pre-trained transformer model…by the pre-trained transformer model" (claims 8 and 18), "storing the fine-tuned transformer model in memory hardware in communication with the data processing hardware" (claims 9 and 19), "transmitting, via a network, the fine-tuned transformer model to one or more computing devices" (claims 10 and 20), and "data processing hardware; and memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations comprising…using a pre-trained transformer model…the pre-trained transformer model determines…the pre-trained transformer model determines…the pre-trained transformer model…" (claim 11), which are additional elements that are recited at a high level of generality (e.g., the "data processing hardware" is configured to perform functions through no more than a statement that said data processing hardware is "to perform operations"; the "pre-trained transformer model" is configured through no more than a statement that functions are performed "using" said pre-trained transformer model, where the pre-trained transformer model may be a "pre-trained BioBERT model" configured through no more than a statement that the said BioBERT model is pre-trained "on" a corpus of biomedical text data and "comprises" a stack of multi-headed self-attention layers; the "memory hardware" is configured to store information through no more than a statement that such function is performed by being "in communication with" the data processing hardware; the "network" is configured through no more than a statement that transmission is performed "via" said network "to" one or more computing devices) such that they amount to no more than mere instruction to apply the exception using generic computer components. See: MPEP 2106.05(f).

The combination of these additional elements is no more than mere instructions to apply the exception using generic computer components. Accordingly, even in combination, these additional elements do not integrate the abstract idea(s) into a practical application because they do not impose any meaningful limits on practicing the abstract idea(s). Accordingly, the claims are directed to an abstract idea(s) (Step 2A Prong Two: NO).

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea(s) into a practical application, using the additional elements to perform the abstract idea(s) amounts to no more than mere instructions to apply the exception using generic components.
Mere instructions to apply an exception using generic components cannot provide an inventive concept. See MPEP 2106.05(f). Viewing the limitations as an ordered combination, the claims simply instruct the additional elements to implement the concept described above in the identification of abstract idea(s) with routine, conventional activity specified at a high level of generality in a particular technological environment. Hence, the claims as a whole, considering the additional elements individually and as an ordered combination, do not amount to significantly more than the abstract idea(s) (Step 2B: NO).

Dependent claims 2-10 and 12-20, when analyzed as a whole, considering the additional elements individually and/or as an ordered combination, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea(s) without significantly more. These claims fail to remedy the deficiencies of their parent claims above, and are therefore rejected for at least the same rationale as applied to their parent claims above, and incorporated herein.

Response to Arguments

Applicant's arguments from the response filed on 12/01/2025 have been fully considered and will be addressed below in the order in which they appeared. In the remarks, Applicant argues in substance that (1) the 35 U.S.C. 101 rejections should be withdrawn in view of the amendments because "the character of claim 1 as a whole is not directed to an abstract idea. The Examiner incorrectly characterizes the claim limitations as falling into the "Certain Methods of Organizing Human Activity" and "Mathematical Concepts" groupings. This interpretation is a misapplication of the "mental process" grouping which limits abstract ideas to those that "can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions." (See 2025 Guidance Memo, Section II.A). The claimed method, as a whole, describes a technical solution for solving a technological problem that cannot be practically performed in the human mind. The Examiner's rejection fails to account for the specific, highly technical limitations in the claimed invention, which shift its character away from a mere abstract idea and toward a tangible technological solution. The 2025 Guidance Memo explicitly states: "a claim does not recite a mental process when it contains limitation(s) that cannot practically be performed in the human mind, for instance when the human mind is not equipped to perform the claim limitation(s)." The claimed method is directed to generating pseudo-labels and fine-tuning a pre-trained transformer model for LoT identification from large volumes of unstructured clinical text, which is an inherently technological task far beyond human capability...While a human can manually scan a document and apply simple rules, the claim requires the algorithmic application of regular expression rules (regex) over an unstructured text dataset to generate a binary pseudo-label (ground-truth value) for potentially millions of sequences of terms (sentences) in a way that is rule-based, deterministic, and scalable. See Applicant's specification at Paragraphs [0032]-[0034]. The specification highlights the complexity and scale of the problem, noting the "vast amounts of clinical information" in documents, which are in an "unstructured text format". See Paragraphs [0025] and [0027].
This limitation is not merely a mental "observation" or "judgment." It is a technical process executed by the regex module 120 which is a "rule-based, deterministic model" (see Paragraph [0033]) that must process an entire corpus of unstructured text. Applicant respectfully submits that the human mind cannot practically, rapidly, and deterministically apply complex regular expression patterns to thousands of sequences of terms to output a labeled dataset with binary classifications. This is a technical step that relies on the speed and pattern-matching capabilities of the data processing hardware…The use of a "pre-trained transformer model" (specifically the BioBert model 300, as detailed in Applicant's specification at Paragraphs [0036]-[0038] and [0045]) inherently involves complex, nondeterministic machine learning computations on a massive scale. For instance, a transformer model contains a stack of multi-headed self-attention layers (see Paragraph [0045]) and is a neural network model (see Paragraph [0036]) that performs operations far beyond human capability. Moreover, the transformer model must first be pre-trained on "large-scale biomedical texts such as biomedical and life license literature abstracts and/or full-text articles". See Paragraph [0037]. The human mind cannot process or encode billions of text tokens in this manner. The model then generates a non-deterministic (See Paragraph [0036]) prediction (LoT indicator). This is not a simple human judgment but a computationally intensive output from a highly complex, deep-learning architecture...The claimed invention further recites determining a corresponding LoT indication loss based on the corresponding LoT pseudo-label and the corresponding LoT indicator and fine-tuning the pre-trained transformer model based on the LoT indication losses determined for the one or more sequences of terms. These steps are the core of a weak supervision-based machine learning training process. See Paragraphs [0028] and [0041]. Determining loss is a mathematical calculation (e.g., cross-entropy loss) performed by the loss module 240 to quantify the error between the pseudo-label (ground truth) and the model's prediction, and fine-tuning requires updating the millions of parameters (weights) of the complex BioBert transformer model through a computationally intensive backpropagation process based on the determined loss. See Applicant's specification at Paragraph [0038]. The human mind cannot practically perform this parameter update for a deep learning model. The Examiner's characterization of this as simply a "mathematical concept" (see Office Action bridging pages 3 and 4) is an oversimplification…This is a function that can only be performed by a powerful, machine-based computing system…The specific limitations – the generation of a binary pseudo-label using regular expressions, the generation of a prediction using a pretrained transformer model, and the subsequent fine-tuning of that highly complex neural network based on a calculated loss – cannot practically be performed in the human mind or by a human with a pen and paper…"

The Examiner respectfully disagrees. Applicant's arguments are not persuasive.
It is argued that the claims have been incorrectly classified "as falling into the "Certain Methods of Organizing Human Activity" and "Mathematical Concepts" groupings. This interpretation is a misapplication of the "mental process" grouping which limits abstract ideas to those that "can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions." (See 2025 Guidance Memo, Section II.A)." However, Certain Methods of Organizing Human Activity, Mathematical Concepts, and Mental Processes are all separate groupings of abstract ideas. The 2025 Guidance Memo, Section II.A, cited in the argument makes this clear by stating that: "The USPTO's subject matter eligibility analysis distills the relevant case law into three enumerated groupings of abstract ideas: mathematical concepts, certain methods of organizing human activity, and mental processes." The memo then goes on to focus upon the Mental Processes grouping of abstract ideas and the Office-provided Example 47, which does concern training a neural network, but does not address such as a Mental Process. The arguments then proceed to address the rejection as if it relies upon the Mental Processes grouping of abstract ideas, but, as argued, the claims have been "classified as falling into the "Certain Methods of Organizing Human Activity" and "Mathematical Concepts" groupings." Though an attempt will be made to address sections of argument relevant to the rejection as written, the arguments are not persuasive because they present a logical fallacy known as the straw man fallacy in that they misrepresent and misconstrue the position of the rejection as written. The rejection does not rely upon the Mental Processes grouping of abstract ideas and makes no assertion that the claimed abstract idea could be performed in the human mind.

However, though not necessary to support the rejection as written, these arguments are not persuasive in view of the broad claims. Though the claims are read in view of the specification, the limitations themselves provide the scope of the claimed invention. It is argued that the human mind could not practically perform the claimed invention due to "large volumes of unstructured clinical text…"vast amounts of clinical information" in documents…thousands of sequences of terms…The human mind cannot process or encode billions of text tokens…and fine-tuning requires updating the millions of parameters", but the claims as written are not limited to such large volumes of data and complex processing as argued. For example, the claims only require "unstructured text representing one or more sequences of terms" – a single term. And "expression rules" – only two rules need be considered to meet the broad language of the number of rules used in generating. The claimed "fine-tuning the pre-trained transformer model" makes no mention of millions of parameters and merely requires that fine-tuning be performed "based on" LoT indication "losses" – only two losses need be considered to meet the broad language. Within the scope of the claims, such models may be simple enough to be performed by human minds. Regardless, these are moot points, as the groupings utilized in the rejection do not require that the claimed abstract idea be performable in the human mind, and even if more data, rules, and complexity were claimed, such would still be abstract.

It is also argued that "use of a "pre-trained transformer model" (specifically the BioBert model 300, as detailed in Applicant's specification at Paragraphs [0036]-[0038] and [0045]) inherently involves complex, nondeterministic machine learning computations on a massive scale".
But the pre-trained transformer model is not characterized as being abstract in the rejection – it is specifically indicated as being an additional element. To be significantly more, an additional element must amount to more than mere instruction to apply the exception using generic computer components, but the "pre-trained transformer model" is configured through no more than a statement that functions are performed "using" said pre-trained transformer model, where the pre-trained transformer model may be a "pre-trained BioBERT model" configured through no more than a statement that the said BioBERT model is pre-trained "on" a corpus of biomedical text data and "comprises" a stack of multi-headed self-attention layers. Here the claimed "using" of the pre-trained transformer model is equivalent to "apply it", and further, the model itself is one that is well-understood, routine, and conventional in the industry – it may be the BioBERT model, which is a known pre-trained language representation model designed for the biomedical field, and it is common and standard practice to fine-tune a BioBERT model for specific biomedical natural language processing (NLP) tasks. As a domain-specific model, BioBERT is designed to be further trained on smaller, specialized datasets (e.g., named entity recognition, relation extraction). Hence, the use of a pre-trained transformer model, while being an additional element, amounts to no more than mere instruction to apply the exception using generic computer components. The claims here are not directed to a specific improvement to computer functionality that amounts to a practical application. Rather, they are directed to the use of conventional or generic technology in a well-known environment, without any claim that the invention reflects an inventive solution to a technical problem presented by combining the two. In the present case, the claims fail to recite any elements that individually or as an ordered combination transform the identified abstract idea(s) in the rejection into a patent-eligible application of that idea.

In the remarks, Applicant argues in substance that (2) the 35 U.S.C. 103 rejections should be withdrawn in view of the amendments. The rejections have been withdrawn in view of the amendments. As per independent claims 1 and 11, the closest prior art of record – U.S. Patent Application Publication 2020/0388396 to Lindvall, U.S. Patent Application Publication 2021/0057071 to Barber, and CN 113095081 A to Jiang – neither alone nor in combination teaches the invention as claimed, as they do not teach, in combination with the previously claimed limitations, the amendments requiring "wherein the corresponding LoT pseudo-label comprises a binary classification that indicates a first ground-truth value when the respective sequence of terms comprises LoT information and indicates a second ground-truth value when the respective sequence of terms does not comprise LoT information…wherein the corresponding LoT indicator comprises a binary classification that indicates a first value when the respective sequence of terms comprises LoT information and indicates a second value when the respective sequence of terms does not comprise LoT information"; therefore, the closest prior art of record does not anticipate or otherwise render the claimed invention obvious.
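
On the examiner's point above that fine-tuning a pre-trained BioBERT model for a biomedical NLP task is common and standard practice, the sketch below shows how such a model is typically loaded for binary sequence classification with the Hugging Face transformers library. The checkpoint name is one publicly available BioBERT release chosen here as an assumption; the application does not name a specific checkpoint.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# "dmis-lab/biobert-v1.1" is one publicly available BioBERT checkpoint (assumed here).
checkpoint = "dmis-lab/biobert-v1.1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

inputs = tokenizer(
    "Patient progressed on first-line therapy with carboplatin.",
    return_tensors="pt",
    truncation=True,
)
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]): LoT / non-LoT scores before any fine-tuning
```
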
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and can be found on the attached PTO-892 form, including: U.S. Patent Application Publication 2021/0125731 to Lefkofsky (see para 13); CN 111145910 A1 to Li (abstract); U.S. Patent Application Publication 2025/0201375 to Basier (see whole document); U.S. Patent Application Publication 2012/0239410 to Bergstrom (see para 20).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT A SOREY whose telephone number is (571) 270-3606. The examiner can normally be reached Monday through Friday, 8am to 5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fonya Long, can be reached at (571) 270-5096. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT A SOREY/
Primary Examiner, Art Unit 3682

Prosecution Timeline

Dec 01, 2023
Application Filed
Sep 19, 2025
Non-Final Rejection — §101
Dec 01, 2025
Response Filed
Feb 03, 2026
Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603174
METHOD FOR UTILIZING A MEDICAL SERVICES KIOSK
2y 5m to grant • Granted Apr 14, 2026
Patent 12597517
METHOD FOR EXTRACTING INTRINSIC PROPERTIES OF CANCER CELLS FROM GENE EXPRESSION PROFILES OF CANCER PATIENTS AND DEVICE FOR THE SAME
2y 5m to grant • Granted Apr 07, 2026
Patent 12592301
PROMPT ENGINEERING AND GENERATIVE AI FOR GOAL-BASED IMAGERY
2y 5m to grant • Granted Mar 31, 2026
Patent 12567009
EQUITABLY ASSIGNING MEDICAL IMAGES FOR EXAMINATION
2y 5m to grant • Granted Mar 03, 2026
Patent 12555682
MEDICAL SERVICES KIOSK
2y 5m to grant • Granted Feb 17, 2026
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 48%
With Interview: 94% (+45.8%)
Median Time to Grant: 4y 2m
PTA Risk: Moderate
Based on 456 resolved cases by this examiner. Grant probability derived from career allow rate.
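
How the dashboard combines these numbers is not disclosed; as a rough, assumed reading, the headline figures are consistent with the simple arithmetic below (career allow rate from the granted/resolved counts, plus the stated interview lift in percentage points).

```python
granted, resolved = 220, 456
career_allow_rate = granted / resolved               # ~0.482 -> the 48% shown above
interview_lift = 0.458                               # +45.8 percentage points, as stated above
with_interview = career_allow_rate + interview_lift  # ~0.940 -> the 94% shown above
print(f"{career_allow_rate:.1%} base, {with_interview:.1%} with interview")
```
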
