DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This action is in reply to the application filed on 10/23/24.
Claims 1-22 are currently pending and have been examined.
Continuity/Priority
Status of this application as a 371 of PCT/US2022/032084, filed on 06/03/22, is acknowledged. A certified copy of foreign priority was received on 10/23/24. Applicant’s claim to the benefit of and priority to US Provisional Application 63/335,215, filed 04/26/22, is acknowledged. Accordingly, a priority date of 04/26/22 has been given to this application.
IDS
The information disclosure statements (IDSs) submitted on 10/23/24 and 08/28/25 have been considered by the examiner. The submissions are in compliance with the provisions of 37 CFR 1.97.
Claim Objections
Claims 13-17 are objected to because of the following informalities: The claims are directed to “at least one computer-readable storage medium”. The specification discloses, at para. [0129], “The terms non-transitory computer readable medium, non-transitory computer readable storage medium, non-transitory machine readable medium, and non-transitory machine readable storage medium are expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, the terms “computer readable storage device” and “machine readable storage device” are defined to include any physical (mechanical and/or electrical) structure to store information, but to exclude propagating signals and to exclude transmission media”. The specification does not explicitly exclude propagating signals and transmission media from “computer-readable storage medium”, as recited by Claims 13-17. Examiner recommends amending Claims 13-17 to recite “non-transitory computer-readable storage medium”. For purposes of examination, Examiner is interpreting Claims 13-17 to recite “non-transitory computer-readable storage medium” such that it falls into a statutory category. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more.
Step 1
Claims 1-12 are drawn to an apparatus, Claims 13-17 are drawn to a computer-readable storage medium (interpreted as a non-transitory computer-readable storage medium for examination purposes), and Claims 18-22 are drawn to a method, each of which is within the four statutory categories. Claims 1-22 are further directed to an abstract idea on the grounds set out in detail below.
Step 2A Prong 1
Claim 1 recites implementing the steps of:
processing input data pulled from a record to form a set of candidate features;
testing at least a first model and a second model to compare performance of the first model and the second model;
selecting at least one of the first model or the second model based on the comparison.
The above steps are capable of being performed mentally or with the aid of pen and paper, and therefore are directed to an abstract idea (mental processes). Fundamentally, the process is that of processing data to form a set of candidate features, testing at least two models to compare performance, and then selecting at least one model based on the performance comparison. A human being, with or without the aid of pen and paper, is capable of processing input data to form a set of candidate features (e.g., performing a data analysis and thinking about data to identify candidate features), testing at least a first and second model to compare performance of the models, and then selecting at least one of the models based on the comparison.
Independent Claims 13 and 18 recite similar limitations and likewise recite an abstract idea under the same analysis.
The above claims are therefore directed to an abstract idea.
Step 2A Prong 2
This judicial exception is not integrated into a practical application because the additional elements within the claims only amount to:
A. Instructions to Implement the Judicial Exception. MPEP 2106.05(f)
The independent claims additionally recite:
an apparatus comprising: memory circuitry; instructions; and processor circuitry to execute the instructions to implement the steps of the abstract idea (Claim 1)
at least one computer-readable storage medium comprising instructions which, when executed by processor circuitry, implement the steps of the abstract idea (Claim 13)
a “computer-implemented” method (Claim 18)
train(ing) at least a first model and a second model using the set of candidate features (Claims 1, 13, 18)
deploy(ing) the selected at least one of the first model or the second model to predict a likelihood of at least one of: a) a toxicity occurring due to immunotherapy according to a treatment plan or b) efficacy of the treatment plan for a patient (Claims 1, 13, 18)
The broad recitation of general purpose computing elements at a high level of generality only amounts to mere instructions to implement the abstract idea using computing components as tools. Regarding the additional elements identified above, these are understood to be general purpose computing elements functioning in their ordinary capacity:
Regarding “processor circuitry”, para. [0132] discloses, “For example, the processor circuitry 1512 can be implemented by one or more integrated circuits, logic circuits, FPGAs, microprocessors, CPUs, GPUs, DSPs, and/or microcontrollers from any desired family or manufacturer. The processor circuitry 1512 may be implemented by one or more semiconductor based (e.g., silicon based) devices”.
Regarding the instructions, para. [0125] discloses that this “may be one or more executable programs or portion(s) of an executable program for execution by processor circuitry. The program may be embodied in software stored on one or more non-transitory computer readable storage media such as a compact disk (CD), a floppy disk, a hard disk drive (HDD), a solid-state drive (SSD), a digital versatile disk (DVD), a Blu-ray disk, a volatile memory (e.g., Random Access Memory (RAM) of any type, etc.), or a non-volatile memory (e.g., electrically erasable programmable read-only memory (EEPROM), FLASH memory, an HDD, an SSD, etc.) associated with processor circuitry located in one or more hardware devices”.
Regarding the “memory circuitry”, para. [0134] discloses “The processor circuitry 1512 of the illustrated example includes a local memory 1513 (e.g., a cache, registers, etc.). The processor circuitry 1512 of the illustrated example is in communication with a main memory including a volatile memory 1514 and a non-volatile memory 1516 by a bus 1518. The volatile memory 1514 may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS® Dynamic Random Access Memory (RDRAM®), and/or any other type of RAM device. The non-volatile memory 1516 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1514, 1516 of the illustrated example is controlled by a memory controller 1517”.
Regarding the computer readable storage medium, para. [0129] discloses that this includes “any type of computer readable storage device and/or storage disk”.
Regarding the computer that implements the method, this is understood to be one of various general purpose computing devices per para. [0132] (“a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a headset (e.g., an augmented reality (AR) headset, a virtual reality (VR) headset, etc.) or other wearable device, or any other type of computing device”).
Regarding “train(ing) at least a first model and a second model using the set of candidate features”, this limitation is recited as being performed by a computer recited at a high level of generality.
Regarding “deploy(ing) the selected at least one of the first model or the second model to predict a likelihood of at least one of: a) a toxicity occurring due to immunotherapy according to a treatment plan or b) efficacy of the treatment plan for a patient”, this only amounts to mere instructions to implement an abstract idea on a general purpose computer. Implementing an abstract idea using a general purpose computer or components thereof does not integrate the judicial exception into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
B. Insignificant Extra-Solution Activity. MPEP 2106.05(g)
Claims 1, 13, 18 additionally recite:
store/storing the selected at least one of the first model or the second model;
As explained above, the independent claims are directed to an abstract idea in the form of utilizing a trained, tested model to predict either toxicity occurring due to immunotherapy according to a treatment plan or efficacy of a treatment plan for a patient. As stated in MPEP 2106.05(g), “[t]he term ‘extra-solution activity’ can be understood as activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim.” In the present claims, the function of store/storing the selected at least one of the first model or the second model is only nominally or tangentially related to the process of training and testing a model to utilize to predict toxicity or efficacy of a treatment plan, and accordingly constitutes insignificant extra-solution activity.
These elements in Sections A and B above are therefore not sufficient to integrate the abstract idea into a practical application. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
The above claims, as a whole, are therefore directed to an abstract idea.
Step 2B
The present claims do not include additional elements that are sufficient to amount to significantly more than the abstract idea because the additional elements or combination of elements amount to no more than a recitation of:
A. Instructions to Implement the Judicial Exception. MPEP 2106.05(f)
As explained above, Claims 1, 13, and 18 only recite the aforementioned computing elements as tools for performing the steps of the abstract idea, and mere instructions to perform the abstract idea using a computer are not sufficient to amount to significantly more than the abstract idea. MPEP 2106.05(f).
B. Insignificant Extra-Solution Activity. MPEP 2106.05(g)
Likewise, as explained above, the step of store/storing the selected at least one of the first model or the second model only amounts to insignificant extra-solution activity.
C. Well-Understood, Routine and Conventional Activities. MPEP 2106.05(d)
In addition to amounting to insignificant extra-solution activity, the elements above constitute well-understood, routine, and conventional activity. In the interest of completeness, Examiner notes that the training of an AI/ML model using candidate features is also well-understood, routine, and conventional in the art as evidenced by: US Publication 20200077931A1 at [0059]; US Publication 20170083682A1 at [0211]-[0212]; US Publication 20210076953A1 at [0009], [0153]. These references also indicate, by implication, that AI/ML algorithms are well-understood, routine, and conventional, and that utilization/implementation of trained AI/ML models is well-understood, routine, and conventional in the art. Thus, these features, when considered individually and as an ordered combination, do not provide significantly more.
The step of store/storing the selected at least one of the first model or the second model only amounts to storing/retrieving data in memory, which has been previously held to be well-understood, routine and conventional when claimed at a high level of generality or as insignificant extra-solution activity. See MPEP 2106.05(d)(II).
Thus, taken alone, the additional elements do not amount to significantly more than the above-identified judicial exception. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. Their collective functions merely provide conventional computer implementation.
Dependent Claims
Claim 2 recites limitations pertaining to deploying the selected at least one of the first model or the second model in a tool with an interface to facilitate gathering of patient data and interaction with the selected at least one of the first model or the second model, which only amounts to mere instructions to apply the abstract idea on a computer. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 3 recites limitations pertaining to wherein the input data includes at least one of laboratory test results, diagnosis code, or billing codes, which only further narrows the scope of the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 4 recites limitations pertaining to wherein the toxicity includes at least one of pneumonitis, colitis, or hepatitis, which only further narrows the scope of the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 5 recites limitations pertaining to wherein the efficacy of the treatment plan for the patient is measured by at least one of patient survival or time on treatment, which only further narrows the scope of the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 6 recites limitations pertaining to extract and organize the input data in a time series, which is also directed to an abstract idea (mental processes), as a human being could, with or without the aid of pen and paper, extract and organize data into a time series. Recitation of “the processor circuitry” only amounts to mere instructions to apply the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 7 recites limitations pertaining to align the input data with respect to an anchor point to organize the input data in the time series, which is also directed to an abstract idea (mental processes), as a human being could, with or without the aid of pen and paper, align data with respect to an anchor point (interpreted as a fixed timepoint) to organize the time series data. Recitation of “the processor circuitry” only amounts to mere instructions to apply the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 8 recites limitations pertaining to generate labels for the input data to form the set of candidate features, which is also directed to an abstract idea (mental processes), as a human being could label input data to form a set of candidate features, with or without the aid of pen and paper. Recitation of “the processor circuitry” only amounts to mere instructions to apply the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 9 recites limitations pertaining to feature engineer the set of candidate features by at least one of normalizing, transforming, or extracting from the set of candidate features, which is also directed to an abstract idea (mental processes), as a human being could, with or without the aid of pen and paper, normalize, transform or extract data from a set of candidate features. Recitation of “the processor circuitry” only amounts to mere instructions to apply the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 10 recites limitations pertaining to select from the set of candidate features to form a patient feature set to at least one of train or test at least the first model and the second model based on the feature engineering, which is also directed to an abstract idea (mental processes), as a human being could, with or without the aid of pen and paper, use candidate features to test a model. Recitation of “the processor circuitry” only amounts to mere instructions to apply the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 11 recites limitations pertaining to wherein the processor circuitry is to generate a feature matrix to at least one of train or test at least the first model and the second model based on the feature engineering, which is also directed to an abstract idea (mental processes), as a human being could, with or without the aid of pen and paper, generate a feature matrix to test a model, but for recitation of “the processor circuitry” which only amounts to mere instructions to apply the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 12 recites limitations pertaining to wherein the processor circuitry is to deploy the selected at least one of the first model or the second model as an executable tool with an interface, which only amounts to mere instructions to apply the abstract idea on a computer. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 14 recites limitations pertaining to cause the processor circuitry to deploy the selected at least one of the first model or the second model in a tool with an interface to facilitate gathering of patient data and interaction with the selected at least one of the first model or the second model, which only amounts to mere instructions to apply the abstract idea on a computer. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 15 recites limitations pertaining to wherein the instructions, when executed, cause the processor circuitry to extract and organize the input data in a time series with respect to an anchor point, which is also directed to an abstract idea (mental processes), as a human being could, with or without the aid of pen and paper, extract and organize input data with respect to an anchor point (interpreted as a fixed timepoint) to organize the time series data, but for recitation of “instructions” and “the processor circuitry” which only amounts to mere instructions to apply the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 16 recites limitations pertaining to, wherein the instructions, when executed, cause the processor circuitry to generate labels for the input data to form the set of candidate features, which is also directed to an abstract idea (mental processes), as a human being could label input data to form a set of candidate features, with or without the aid of pen and paper, but for recitation of “instructions” and “the processor circuitry” which only amounts to mere instructions to apply the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 17 recites limitations pertaining to wherein the instructions, when executed, cause the processor circuitry to feature engineer the set of candidate features by at least one of normalizing, transforming, or extracting from the set of candidate features, which is also directed to an abstract idea (mental processes), as a human being could, with or without the aid of pen and paper, normalize, transform or extract data from a set of candidate features, but for recitation of “instructions” and “the processor circuitry” which only amounts to mere instructions to apply the abstract idea. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 19 recites limitations pertaining to, wherein the deploying includes deploying the selected at least one of the first model or the second model in a tool with an interface to facilitate gathering of patient data and interaction with the selected at least one of the first model or the second model, which only amounts to mere instructions to apply the abstract idea on a computer. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 20 recites limitations pertaining to, further including extracting and organizing the input data in a time series with respect to an anchor point, which is also directed to an abstract idea (mental processes), as a human being could, with or without the aid of pen and paper, extract and organize input data with respect to an anchor point (interpreted as a fixed timepoint) to organize the time series data. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 21 recites limitations pertaining to, further including generating labels for the input data to form the set of candidate features, which is also directed to an abstract idea (mental processes), as a human being could label input data to form a set of candidate features, with or without the aid of pen and paper. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
Claim 22 recites limitations pertaining to, further including feature engineering the set of candidate features by at least one of normalizing, transforming, or extracting from the set of candidate features, which is also directed to an abstract idea (mental processes), as a human being could, with or without the aid of pen and paper, normalize, transform or extract data from a set of candidate features. This is not sufficient to integrate the judicial exception into a practical application or amount to significantly more.
The dependent claims have been given the full two-part analysis, including analyzing the additional limitations both individually and in combination. The dependent claims, when analyzed individually and in combination, are also held to be patent ineligible under 35 U.S.C. 101 as they include all of the limitations of Claims 1, 13, or 18, respectively. The additional recited limitations of the dependent claims fail to establish that the claims do not recite an abstract idea because the additional recited limitations of the dependent claims merely further narrow the abstract idea. Beyond the limitations which recite the abstract idea, the claims recite additional elements consistent with those identified above with respect to the independent claims, which encompass adding the words “apply it” (or an equivalent) with the judicial exception, mere instructions to implement an abstract idea on a computer, or mere use of a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Dependent claims 2-12, 14-17, 19-22 recite additional subject matter which amounts to additional elements consistent with those identified in the analysis of Claims 1, 13, 18 above. As discussed above with respect to Claims 1, 13, 18 and integration of the abstract idea into a practical application, recitation of these additional elements (e.g., processor circuitry, instructions) only amounts to invoking computers as a tool to perform the abstract idea. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
Dependent claims 2-12, 14-17, 19-22 when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitation(s) fail(s) to establish that the claim(s) is/are not directed to an abstract idea without significantly more. These claims fail to remedy the deficiencies of their parent claims above, and are therefore rejected for at least the same rationale as applied to their parent claims above, and incorporated herein.
For the reasons stated, Claims 1-22 fail the Subject Matter Eligibility Test and are consequently rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5, 8-10, 12-14, 16-19, 21, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Abraham et al. (US Publication 20230178245A1) in view of Rao et al. (US Publication 20220130542A1).
Regarding Claim 1, Abraham discloses
An apparatus comprising: memory circuitry; instructions; and processor circuitry to execute the instructions to ([0109]-[0111], teaching on a computing device which includes a processor and memory; the processor can process instructions for execution of instructions; see Fig. 1G):
process input data pulled from a record to form a set of candidate features ([0091] and Fig. 1D teach on a process of generating training data for training a machine learning model to predict effectiveness of a treatment for a disease or disorder of a subject having particular biomarkers; the process may include obtaining, from a first data source, a first data structure that includes fields structuring data representing a set of one or more biomarkers associated with the subject and storing the first data structure; obtaining a second data structure from a second data source that includes fields structuring data representing outcome data for the subject having the one or more biomarkers and storing the second data; generating a labeled training data structure that includes (i) data representing the one or more biomarkers, (ii) a disease or disorder, (iii) a treatment, and (iv) an effectiveness of treatment for the disease or disorder based on the first data structure and the second data structure - Examiner interprets the data obtained (e.g., biomarker/outcome) to read on “input data pulled from a record” and interprets using the obtained data to generate a training data structure including (i)-(iv) to read on “processing” the input data to form a set of candidate features; Examiner notes that per [0071]-[0072], biomarker data records and outcome data records can be processed to extract data used to train the ML model, e.g., the input data pulled from records)
train at least a first model and a second model using the set of candidate features ([0091] as cited above teaches on obtaining input data and “generating a labeled training data set” that includes (i) data representing the one or more biomarkers, (ii) a disease or disorder, (iii) a treatment, and (iv) an effectiveness of treatment for the disease or disorder based on the first data structure and the second data structure; a machine learning model is trained using the generated labeled training data – interpreted as training at least a first model using the set of candidate features; [0093] teaches on using multiple machine learning models; [0095] teaches on using multiple machine learning models 370-0, 370-1, 370-x, where x is any non-zero integer greater than 1; [0096] teaches on each machine learning model 370-0, 370-1, 370-x being trained to classify a particular type of input – Examiner interprets the training process described in [0091] to be applicable to all of models 370-0, 370-1, 370-x)
test at least the first model and the second model ([0106] teaches on storing a confidence score for each of machine learning models 370-0, 370-1, 370-x; the score can be adjusted based on whether or not the ML model accurately predicted the subject classification selected during a previous iteration; the stored confidence score for each ML model can provide an indication of the historical accuracy of the respective ML model – Examiner interprets determination of a confidence score to read on “testing” the models);
store at least one of the first model or the second model (See Fig. 1F, which shows that machine learning models 370-0, 370-1, 370-x are hosted on application server 240 comprising memory unit 244 per para. [0065]); and
deploy at least one of the first model or the second model to predict a likelihood of at least one of: a) a toxicity occurring due to immunotherapy according to a treatment plan or b) efficacy of the treatment plan for a patient ([0082] teaches on the trained machine learning model being capable of predicting, based on an input feature vector representative of a set of one or more biomarkers, a disease/disorder and a treatment, an output of a level of effectiveness for the treatment in treating the disease/disorder of the subject having the biomarkers; [0100] teaches on input data structures 320-0, 320-1, 320-x, which include data representing biomarkers of a subject, data describing a disease/disorder associated with the subject, data describing a proposed treatment for the subject, or any combination thereof; machine learning models 370-0, 370-1, 370-x are used to classify the input data as “corresponding to a subject that is likely to be responsive or likely to be non-responsive to a treatment identified associated by the vector processed by the machine learning level”; the machine learning models generate output data 270-0, 270-1, 270-x, representing whether the subject is likely to be responsive or likely to be unresponsive to a treatment – using the machine learning models to predict whether the patient is likely to be responsive or unresponsive to a treatment by using data pertaining to the patient (e.g., associated with biomarkers, patient disease/disorder, and proposed treatment) is interpreted as ‘deploying’ the model(s) for predicting likelihood of efficacy of the treatment for the patient; see Fig. 1F; [0094] specifically discloses predicting ‘therapeutic efficacy or lack thereof’ with respect to the invention of Abraham).
Abraham discloses generation and use of multiple machine learning models, but does not explicitly teach the following limitations. Rao, which is directed to using machine learning to assess medical information and predict the probability of a patient responding to a specific treatment, teaches:
test at least a first model and a second model to compare performance of a first model and a second model ([0022] teaches on generating a training dataset, and generating a set of machine learning models using the training dataset; a server computer may initially train a set of machine learning models using the training dataset and then apply or input a validation set into the generated machine learning models to determine which of the machine learning models is most accurate; Examiner interprets the “set of models” to include at least a first model and a second model, as plural “models” are present; Examiner interprets determining which model is most accurate to indicate the models have been compared such that this determination can be made);
select at least one of the first model or the second model based on the comparison ([0022] teaches on applying or inputting a validation set into the generated machine learning models to determine which of the machine learning models is most accurate and may be used as the final or selected machine learning model – interpreted as “selecting”);
store the selected at least one of the first model or the second model ([0033] teaches on storing the generated machine learning model as part of machine learning data 163 which may be stored in the memory 157; see Fig. 1B; Examiner interprets the ‘generated’ machine learning model to include the final/selected machine learning model of [0022]); and
deploy the selected at least one of the first model or the second model to predict a likelihood of [a patient response] to a treatment plan for a patient ([0022] teaches on determining a final/selected machine learning model to be used; [0023] teaches on inputting into the generated machine learning model, a set of input data (indicative of real-world patient data) associated with a set of patients in which the output indicates a set of probabilities of each patient responding to a particular treatment).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Abraham with these teachings of Rao, to compare first and second models of Abraham in order to select at least one model based on the comparison, store the at least one selected model, and deploy the at least one selected model, with the motivation of selecting the most accurate model from a set of models for more accurately predicting, in advance of administering immunotherapy treatments, which patients are likely to respond (or not respond) to immunotherapy (Rao [0022], [0057]).
Regarding Claim 2, Abraham/Rao teach the limitations of Claim 1. Abraham further discloses further including deploying the selected at least one of the first model or the second model in a tool with an interface to facilitate gathering of patient data and interaction with the selected at least one of the first model or the second model ([0085] teaches on receiving biomarker data at terminal 405 which may be a “user device” of a doctor, employee at a doctor’s office, or other human entity that inputs data; [0089] teaches on using the machine learning model(s) to output a probability that is indicative of a probability of the effectiveness of the treatment; terminal 405 generates output on user interface 420 which indicates a predicted level of effectiveness of a treatment for a disease or disorder for a person having particular biomarkers – as the claims and specification do not appear to disclose what a “tool” is, e.g., structure, Examiner interprets the user device as a “tool with an interface”).
Regarding Claim 3, Abraham/Rao teach the limitations of Claim 1. Abraham further discloses wherein the input data includes at least one of laboratory test results, diagnosis code, or billing codes ([0085] teaches on “biomarker data” being derived from laboratory machinery used to perform various assays – interpreted as “laboratory test results”).
Regarding Claim 5, Abraham/Rao teach the limitations of Claim 1. Abraham further discloses wherein the efficacy of the treatment plan for the patient is measured by at least one of patient survival ([0131] teaches on classifying patients as more or less likely to benefit from or respond to various treatments, e.g., determining a patient is a responder vs. non-responder; such indication may be determined by using patient response criteria such as progression free survival and disease free survival, both interpreted as reading on “patient survival”) or time on treatment ([0410] teaches on a clinical outcome using end point “time on treatment”).
Regarding Claim 8, Abraham/Rao teach the limitations of Claim 1. Abraham further discloses wherein the processor circuitry is to generate labels for the input data to form the set of candidate features ([0091] teaches on the training process which includes generating a “labeled” training data structure and subsequently training a ML model using the generated labeled training data).
Regarding Claim 9, Abraham/Rao teach the limitations of Claim 1. Abraham further discloses wherein the processor circuitry is to feature engineer the set of candidate features by at least one of normalizing, transforming, or extracting from the set of candidate features ([0053] teaches on the extraction of specific data from incoming data streams for use in generating training data structures, e.g., selection of a specific set of one or more biomarkers for inclusion in the training data structure; certain biomarkers may be selected (interpreted as ‘extracted’) to determine whether a treatment for a disease will be effective; [0072] teaches on using extraction unit to process received biomarker data and outcome data records to extract data that can be used to train the machine learning model; it may perform one or more information extraction algorithms such as keyed data extraction, pattern matching, NLP, or the like to identify and obtain data).
Regarding Claim 10, Abraham/Rao teach the limitations of Claim 9. Abraham further discloses wherein the processor circuitry is to select from the set of candidate features to form a patient feature set to at least one of train or test at least the first model and the second model based on the feature engineering ([0053] teaches on the extraction of specific data from incoming data streams for use in generating training data structures, e.g., selection of a specific set of one or more biomarkers for inclusion in the training data structure; certain biomarkers may be selected (interpreted as ‘extracted’) to determine whether a treatment for a disease will be effective; per [0064] training data structures are used to train a machine learning model to predict effectiveness of a treatment for a disease; [0072] teaches on using extraction unit to process received biomarker data and outcome data records to extract data that can be used to train the machine learning model; it may perform one or more information extraction algorithms such as keyed data extraction, pattern matching, NLP, or the like to identify and obtain data).
Regarding Claim 12, Abraham/Rao teach the limitations of Claim 1. Abraham further discloses wherein the processor circuitry is to deploy the selected at least one of the first model or the second model as an executable tool with an interface ([0123] teaches on implementing computing device 650 in a variety of forms, which may be a smartphone, PDA or similar mobile device; per [0116] computing device 650 includes an input/output device such as a display and communication interface; [0124] teaches on various implementations of the systems and methods of Abraham (e.g., deploying machine learning models) being realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations of such implementations. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device; Examiner interprets computer software/computer programs that are executable to read on broadest reasonable interpretation of “executable tool”; [0089] teaches on the machine learning model executing on a user terminal which generates output on a user interface to indicate a predicted effectiveness of a treatment).
Regarding Claim 13, Abraham/Rao teach the limitations of Claim 1. Claim 13 recites the same or substantially similar limitations as Claim 1, and the discussion above with respect to Claim 1 is equally applicable to Claim 13. Claim 13 recites the following which is also taught by Abraham: At least one computer-readable storage medium comprising instructions which, when executed by processor circuitry, cause the processor circuitry to at least ([0028] teaches on system architecture including a non-transitory computer readable medium storing software for execution by one or more computers).
Regarding Claim 14 and Claim 19, Abraham/Rao teach the limitations of Claim 2. Claims 14 and 19 recite the same or substantially similar limitations as Claim 2, and the discussion above with respect to Claim 2 is equally applicable to Claims 14 and 19.
Regarding Claim 16 and Claim 21, Abraham/Rao teach the limitations of Claim 8. Claims 16 and 21 recite the same or substantially similar limitations as Claim 8, and the discussion above with respect to Claim 8 is equally applicable to Claims 16 and 21.
Regarding Claim 17 and Claim 22, Abraham/Rao teach the limitations of Claim 9. Claims 17 and 22 recite the same or substantially similar limitations as Claim 9, and the discussion above with respect to Claim 9 is equally applicable to Claims 17 and 22.
Regarding Claim 18, Abraham/Rao teach the limitations of Claim 1. Claim 18 recites the same or substantially similar limitations as Claim 1, and the discussion above with respect to Claim 1 is equally applicable to Claim 18.
Claim(s) 4, 6, 7, 15, 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Abraham et al. (US Publication 20230178245A1) in view of Rao et al. (US Publication 20220130542A1) as applied to Claims 1, 13, 18 above, and further in view of Wakeland et al. (US Publication 20210263045A1).
Regarding Claim 4, Abraham/Rao teach the limitations of Claim 1. As this is a conditional limitation of Claim 1, it is not required (e.g., Claim 1 recites “predict a likelihood of at least one of: a) a toxicity occurring due to immunotherapy according to a treatment plan or b) efficacy of the treatment plan for a patient”, and Claim 1 has provided prior art citations for “b”); nonetheless, Examiner provides the following reference to teach on the limitations of Claim 4.
Regarding Claim 4, Abraham/Rao do not teach, but Wakeland, which is directed to the prediction and treatment of immunotherapeutic toxicity, teaches: wherein the toxicity includes at least one of pneumonitis, colitis, or hepatitis ([0017] teaches on classifying immunotherapy toxicity based on an organ/system, which may be lung (pneumonitis), gastrointestinal tract (colitis), or liver (hepatitis)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the combined teachings of Abraham/Rao with these teachings of Wakeland, to predict a risk of toxicity including pneumonitis, colitis, or hepatitis, because while promising cancer drugs have led to improved outcomes for thousands of patients, they have introduced new safety concerns and can lead to immune-related adverse events which may affect almost every organ system – including liver, lung, and more – and may be severe or even permanent and may necessitate treatment interruption or administration of steroids/immunosuppressive agents (Wakeland [0004]).
Regarding Claim 6, Abraham/Rao teach the limitations of Claim 1 but do not teach the following. Wakeland, which is directed to the prediction and treatment of immunotherapeutic toxicity, teaches: wherein the processor circuitry is to extract and organize the input data in a time series ([0008] teaches on assessing the level of an autoantibody from a subject (interpreted as a ‘first time point’); the method may further comprise assessing autoantibody levels at a second time point – interpreted as a time series of first and second time points; [0188]-[0189] teach on a clinical example of evaluating a therapy; peripheral blood samples were collected from the patient at pre-treatment baseline and multiple post-treatment initiation time-points including toxicity onset. At each time point, 4 tubes of blood were collected; see Fig. 16A which shows “blood collection” in months at top with a time series; paras. [0190]-[0194] provide details of events that occurred within the time series, e.g., after almost 5 months, lab assessment revealed low serum concentrations of ACTH and cortisol – interpreted as input data (biomarkers) which are extracted and organized in a time series as shown in Fig. 16A).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the combined teachings of Abraham/Rao with these teachings of Wakeland, to extract and organize the input data of Abraham/Rao in a time series, in order to assess a change in immunotherapeutic toxicity risk over time (Wakeland [0009]).
Regarding Claim 7, Abraham/Rao/Wakeland teach the limitations of Claim 6. Wakeland further teaches: wherein the processor circuitry is to align the input data with respect to an anchor point to organize the input data in the time series (See Fig. 16A; “BL” (Baseline) is interpreted as the “anchor point” from which the rest of the data points are organized (“aligned”) to create a time series as shown in Fig. 16A; [0188]-[0189] teach on a clinical example of evaluating a therapy; peripheral blood samples were collected from the patient at pre-treatment baseline and multiple post-treatment initiation time-points including toxicity onset. At each time point, 4 tubes of blood were collected – interpreted as the input data as time series with respect to baseline/anchor).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the combined teachings of Abraham/Rao with these teachings of Wakeland, to align the input data of Abraham/Rao in a time series with an anchor point, in order to assess a change in immunotherapeutic toxicity risk over time (Wakeland [0009]).
Regarding Claim 15, Abraham/Rao teach the limitations of Claim 1 but do not teach the following. Wakeland, which is directed to the prediction and treatment of immunotherapeutic toxicity, teaches: wherein the instructions, when executed, cause the processor circuitry to extract and organize the input data in a time series with respect to an anchor point ([0008] teaches on assessing the level of an autoantibody from a subject (interpreted as a ‘first time point’); the method may further comprise assessing autoantibody levels at a second time point – interpreted as a time series of first and second time points; [0188]-[0189] teach on a clinical example of evaluating a therapy; peripheral blood samples were collected from the patient at pre-treatment baseline and multiple post-treatment initiation time-points including toxicity onset. At each time point, 4 tubes of blood were collected; see Fig. 16A which shows “blood collection” in months at top with a time series; paras. [0190]-[0194] provide details of events that occurred within the time series, e.g., after almost 5 months, lab assessment revealed low serum concentrations of ACTH and cortisol – interpreted as input data (biomarkers) which are extracted and organized in a time series as shown in Fig. 16A; “BL” (Baseline) in Fig. 16A is interpreted as the “anchor point” from which the rest of the data points are organized (“aligned”) to create the time series).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the combined teachings of Abraham/Rao with these teachings of Wakeland, to extract and organize the input data of Abraham/Rao in a time series with respect to an anchor point, in order to assess a change in immunotherapeutic toxicity risk over time (Wakeland [0009]).
Regarding Claim 20, Abraham/Rao/Wakeland teach the limitations of Claim 15. Claim 20 recites the same or substantially similar limitations as Claim 15, and the discussion above with respect to Claim 15 is equally applicable to Claim 20.
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Abraham et al. (US Publication 20230178245A1) in view of Rao et al. (US Publication 20220130542A1) as applied to Claim 9 above, and further in view of Malhotra et al. (US Publication 20180211010A1).
Regarding Claim 11, Abraham/Rao teach the limitations of Claim 9. Abraham further discloses wherein the processor circuitry is to generate [a] features to at least one of train or test at least the first model and the second model based on the feature engineering ([0053] teaches on the extraction of specific data from incoming data streams for use in generating training data structures, e.g., selection of a specific set of one or more biomarkers for inclusion in the training data structure; certain biomarkers may be selected (interpreted as ‘extracted’ which reads on “feature engineering” of candidate features); [0059] teaches on ML model receiving an input of training data item and processing the input training data item to generate an output; the input training data item may include a plurality of features or independent variables “X” and a training label (dependent variable “Y”)).
Abraham/Rao do not explicitly disclose generating a feature matrix to train a machine learning model, but Malhotra, which is directed to a machine learning pipeline for predicting refractory epilepsy status, teaches generating a feature matrix to train a machine learning model ([0070] teaches on selecting features to include in a feature matrix for building and training a predictive model).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the combined teachings of Abraham/Rao with these teachings of Malhotra, to generate a feature matrix to train the selected first/second model of Abraham/Rao, with the motivation of using the feature matrix to represent each patient with a feature vector used to train a model (Malhotra [0070]/Fig. 5).
Conclusion
Examiner respectfully requests that Applicant provide citations to relevant paragraphs of the specification supporting any amendments in future correspondence.
The following relevant prior art, not relied upon, is made of record:
US Publication 20220028551A1, teaching on predicting response to immunotherapy treatment using deep learning analysis of imaging and clinical data
US Publication 20190362846A1, teaching on a system and method for an automated clinical decision support system which trains multiple models and compares testing results to select an optimal model to use
US Publication 20180226153A1, teaching on using a predictive model for predicting treatment-regimen-related outcomes including efficacy and toxicity
US Publication 20240006080A1, teaching on techniques for generating predictive outcomes relating to oncological lines of therapy using artificial intelligence
US Publication 20210151187A1, teaching on selection of a predictive model based on model performance
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNE-MARIE K ALDERSON whose telephone number is (571)272-3370. The examiner can normally be reached on Mon-Fri 9:00am-5:00pm EST and generally schedules interviews in the timeframe of 2:00-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fonya Long, can be reached on 571-270-5096. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANNE-MARIE K ALDERSON/Primary Examiner, Art Unit 3682