Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1-20 are currently pending.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on August 25, 2025 has been entered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation is: the “validation module” recited in Claims 1-7 and 15-20.
Additionally, Examiner notes that the validation module recited in Claims 8-14 is not interpreted under 35 U.S.C. 112(f) because Claim 8 specifically recites that the functions of the validation module are executed by “one or more computing devices,” e.g. see lines 1-3 of Claim 8, and “computing devices” are interpreted as sufficient structure/hardware to perform the functions of the validation module. In contrast, Claims 1 and 15 do not explicitly recite what structure/hardware the validation module is embodied as, and hence the validation module recited in Claims 1-7 and 15-20 is interpreted under 35 U.S.C. 112(f).
Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7, 14, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claims 7, 14, and 20, Claims 7, 14, and 20 recite “rejecting the patient health prediction request from a remote device.” It is unclear whether this “a remote device” is a different/distinct remote device from the “a remote device” previously recited in independent Claims 1, 8, and 15 (from which dependent Claims 7, 14, and 20 respectively depend). In the interest of compact prosecution, Examiner will interpret Claims 7, 14, and 20 as reciting “rejecting the patient health prediction request from the remote device.” Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Step 1
Claims 1-20 are within the four statutory categories. Claims 1-7 are drawn to a method for providing health predictions, which is within the four statutory categories (i.e. process). Claims 8-14 are drawn to a non-transitory medium for providing health predictions, which is within the four statutory categories (i.e. manufacture). Claims 15-20 are drawn to a system for providing health predictions, which is within the four statutory categories (i.e. machine).
Prong 1 of Step 2A
Claim 1, which is representative of the inventive concept, recites: A method comprising:
receiving, via a server, a patient health prediction request from a remote device, the request comprising patient data, the patient data comprising one or more of patient identification, patient visit timing data, patient laboratory order data, patient laboratory order timing data, patient laboratory observation data, patient laboratory observation timing data, and patient vitals;
parsing, by the server, the received patient data to determine whether required input fields are present in the patient data, and determining, by the server, whether to deploy one or more health prediction artificial intelligence medical devices, where the one or more health prediction artificial intelligence medical devices have input fields matching input fields present in the patient data; and
using a validation module that improves accuracy of predictions wherein the validation module evaluates an age of the received patient health data, and rejects a patient health prediction request from a remote device if the received patient data is older than a predetermined time value, otherwise performing the operations of:
deploying, via the server, the one or more matched health prediction artificial intelligence medical devices with the patient data in the request, wherein each of the one or more matched health prediction artificial intelligence medical devices use a machine learning model trained to determine a patient health prediction, and wherein each of the one or more matched health prediction artificial intelligence medical devices are deployed as an impermanent process that accesses a memory;
providing, via the server, the patient data to input fields of the deployed one or more matched health prediction artificial intelligence medical devices;
generating by the one or more deployed matched health prediction artificial intelligence devices, one or more patient health predictions based on the patient data provided to the input fields of the matched artificial intelligence medical device, and removing any patient data from the memory; and
providing, via the server, the one or more patient health predictions generated by the one or more deployed matched health prediction artificial intelligence devices to the remote device.
The underlined limitations as shown above, given the broadest reasonable interpretation, cover the abstract ideas of a mental process and/or a certain method of organizing human activity, e.g. see MPEP 2106.04(a)(2). First, the limitations recite a process that could be practically performed in the human mind (i.e. observations, evaluations, judgments, and/or opinions) or using a pen and paper, but for the recitation of generic computer components (i.e. the server and validation module) – in this case, the steps of receiving a request including patient data, parsing the patient data to determine whether certain data is present, determining whether to deploy prediction models based on the presence of the data, evaluating the age of the received data in order to determine whether to deploy the models, inputting the data into the deployed models, and generating patient health predictions as a result of the patient data being input into the models are reasonably interpreted as processes that could be performed mentally. Second, the limitations recite managing personal behavior or relationships or interactions between people (i.e. social activities, teaching, and following rules or instructions) – in this case, the steps of receiving a patient health prediction request including one or more types of data, parsing the request, determining a health prediction medical device that includes input fields matching the data in the request, deploying the matching health prediction medical devices, providing the patient data to input fields of the deployed matching health prediction medical devices, generating health predictions based on the provided patient data, and providing the patient health prediction are reasonably interpreted as rules or instructions a medical provider may follow in order to determine and provide health predictions for a patient. Any limitations not identified above as part of the abstract idea are deemed “additional elements,” and will be discussed in further detail below.
Furthermore, the abstract idea for Claims 8 and 15 is identical to the abstract idea for Claim 1, because the only difference between Claims 1, 8, and 15 is that Claim 1 recites a method, whereas Claim 8 recites a non-transitory medium and Claim 15 recites a system.
Dependent Claims 2-7, 9-14, and 16-20 include other limitations. For example, Claims 2, 9, and 16 recite utilizing positive and negative training data to iteratively train the patient artificial intelligence medical device and excluding inputs that do not improve performance of the patient artificial intelligence medical device; Claims 3, 10, and 17 recite a type of request and types of patient data; Claims 4-6, 11-13, and 18-19 recite types of inputs into the health prediction artificial intelligence medical devices; and Claims 7, 14, and 20 recite evaluating the patient data against predetermined data requirements. However, these limitations either only serve to further narrow the abstract idea (and a narrow judicial exception is still a judicial exception, e.g. see MPEP 2106.04), or do not further narrow the abstract idea and instead only recite additional elements, which will be further addressed below. Hence dependent Claims 2-7, 9-14, and 16-20 are nonetheless directed towards fundamentally the same abstract idea as independent Claims 1, 8, and 15.
Prong 2 of Step 2A
The abstract idea recited in Claims 1, 8, and 15 is not integrated into a practical application because the additional elements (i.e. the non-underlined limitations above – in this case, the artificial intelligence health prediction medical device, the validation module, the remote device, and the step of deploying the matched health prediction artificial intelligence medical devices) amount to no more than limitations which:
amount to mere instructions to apply an exception – for example, the recitation of the server, the validation module, and the recitation of the medical devices being deployed as impermanent processes accessing a memory, which amounts to merely invoking a computer as a tool to perform the abstract idea, e.g. see [0018]-[0021] of the as-filed Specification, see MPEP 2106.05(f);
generally link the abstract idea to a particular technological environment or field of use – for example, the claim language of a health prediction artificial intelligence medical device, which amounts to limiting the abstract idea to the field of healthcare and/or artificial intelligence, see MPEP 2106.05(h); and/or
add insignificant extra-solution activity to the abstract idea – for example, the recitation of rejecting data that is older than a predetermined time value and utilizing data that is not older than the predetermined time value, which amounts to selecting a particular data source or type of data to be manipulated, and the recitation of removing the patient data from memory, which amounts to insignificant post-solution activity, see MPEP 2106.05(g).
Additionally, dependent Claims 2-7, 9-14, and 16-20 include other limitations, but these limitations also amount to mere instructions to apply an exception (i.e. the remote device recited by dependent Claims 7, 14, and 20), generally link the abstract idea to a particular technological environment or field of use (e.g. the types of data recited in dependent Claims 3-6, 10-13, and 17-19), and/or add insignificant extra-solution activity to the abstract idea (e.g. the training steps recited in dependent Claims 2, 9, and 16), and/or do not include any additional elements beyond those already recited in independent Claims 1, 8, and 15, and hence also do not integrate the aforementioned abstract idea into a practical application.
Hence Claims 1-20 do not include additional elements that integrate the judicial exceptions into a practical application.
Step 2B
Claims 1, 8, and 15 do not include additional elements that are sufficient to amount to “significantly more” than the judicial exceptions because the additional elements (i.e. the non-underlined limitations above – in this case, the artificial intelligence health prediction medical device, the validation module, the remote device, and the step of deploying the matched health prediction artificial intelligence medical devices), as stated above, are directed towards no more than limitations that amount to mere instructions to apply the exception, generally link the abstract idea to a particular technological environment or field of use, and/or add insignificant extra-solution activity to the abstract idea, wherein the additional elements comprise limitations which:
amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, as demonstrated by:
The Specification expressly disclosing that the additional elements are well-understood, routine, and conventional in nature:
[0018]-[0021], [0038], and [0054] of the as-filed Specification disclose that the additional elements (i.e. the structural limitations that execute the health prediction artificial intelligence medical devices, the fact that the health prediction artificial intelligence medical devices are impermanent processes accessing a memory, the remote device, and the validation module) comprise a plurality of different types of generic computing systems that are configured to perform generic computer functions (i.e. receiving data, processing the received data, outputting a result of the processing, removing the data after the processing has completed) that are well-understood, routine, and conventional activities previously known to the pertinent industry (i.e. healthcare);
Relevant court decisions: The following are examples of court decisions demonstrating well-understood, routine and conventional activities, e.g. see MPEP 2106.05(d)(II):
Receiving or transmitting data over a network, e.g. see Intellectual Ventures v. Symantec – similarly, the current invention receives patient data over a network, e.g. see [0055]-[0058] of the as-filed Specification;
Storing and retrieving information in memory, e.g. see Versata Dev. Group, Inc. v. SAP Am., Inc. – similarly, the current invention recites storing the health prediction artificial intelligence medical devices in a memory as impermanent processes, and retrieving the health prediction artificial intelligence medical devices from storage in order to determine the patient health predictions;
Performing repetitive calculations, e.g. see Parker v. Flook, and/or Bancorp Services v. Sun Life – similarly, the current invention performs basic calculations (i.e. deploying the matched health prediction artificial intelligence medical devices to determine one or more patient health predictions, determining whether to use the data in the artificial intelligence medical devices based on the age of the data) and does not impose meaningful limits on the scope of the claims.
Dependent Claims 2-7, 9-14, and 16-20 include other limitations, but none of these limitations are deemed significantly more than the abstract idea because the additional elements recited in the aforementioned dependent claims similarly amount to mere instructions to apply an exception (i.e. the remote device recited by dependent Claims 7, 14, and 20), generally linking the abstract idea to a particular technological environment or field of use (e.g. the types of data recited in dependent Claims 3-6, 10-13, and 17-19), generic structural elements performing generic functions (e.g. the training steps recited in dependent Claims 2, 9, and 16), and/or the limitations recited by the dependent claims do not recite any additional elements not already recited in independent Claims 1, 8, and 15, and hence do not amount to “significantly more” than the abstract idea.
Hence, Claims 1-20 do not include any additional elements that amount to “significantly more” than the judicial exception(s).
Thus, taken alone, the additional elements do not amount to significantly more than the abstract idea identified above. Furthermore, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually, and there is no indication that the combination of elements improves the functioning of a computer or improves any other technology, and their collective functions merely provide conventional computer implementation.
Therefore, whether taken individually or as an ordered combination, Claims 1-20 are nonetheless rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 8, 11, 15, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Shimizu (US 2005/0021375) in view of Lee (US 2014/0101080), further in view of Okusu (US 2014/0287718) and Guo (US 2019/0138692).
Regarding Claim 1, Shimizu teaches the following: A method comprising:
receiving, via a server, a patient health prediction request from a remote device, the request comprising patient data, the patient data comprising one or more of patient identification, patient visit timing data, patient laboratory order data, patient laboratory order timing data, patient laboratory observation data, patient laboratory observation timing data, and patient vitals (The system includes a server that receives a request for a diagnosis (i.e. a patient health prediction request) from a doctor terminal (i.e. a remote device), wherein the diagnosis request form includes patient data such as patient name (i.e. identification) and blood pressure (i.e. vitals), e.g. see Shimizu [0042], Figs. 1 and 3.); and
providing, via the server, one or more patient health predictions to the remote device (The requesting doctor submits the request to the server, wherein the server transmits the request to a diagnosing doctor, and in response the diagnosing doctor generates a diagnosis report (i.e. a patient health prediction), e.g. see Shimizu [0054]-[0056], Fig. 3. Additionally, the diagnosing doctor transmits the diagnosis report to the server, and the server further forwards the diagnosis report to the requesting doctor (i.e. the remote device), e.g. see Shimizu [0069]-[0071], Fig. 3.).
But Shimizu does not teach and Lee teaches the following:
parsing, by the server, the received patient data to determine whether required input fields are present in the patient data, and determining, by the server, whether to deploy one or more health prediction artificial intelligence medical devices, where the one or more health prediction artificial intelligence medical devices have input fields matching input fields present in the patient data (The system analyzes (i.e. parses) patient information, and utilizes a model selection unit to search for and find one or more categorized diagnostic models (i.e. health prediction artificial intelligence medical devices) based on matching patient information categories (i.e. input fields matching input fields present in the patient data), e.g. see Lee [0054]-[0055], wherein the diagnostic models are learned using a machine learning algorithm such as an artificial neural network, e.g. see Lee [0066].);
deploying, via the server, the one or more matched health prediction artificial intelligence medical devices with the patient data in the request (The system includes a diagnosis unit that utilizes the selected diagnostic model to perform an initial diagnosis of a lesion, e.g. see Lee [0056].), wherein each of the one or more matched health prediction artificial intelligence medical devices use a machine learning model trained to determine a patient health prediction (The model learning unit generates the diagnostic models by learning (i.e. training) categorized learning data, e.g. see Lee [0066], wherein the diagnostic models are used to determine a diagnosis (i.e. a patient health prediction), e.g. see Lee [0056].), and wherein each of the one or more matched health prediction artificial intelligence medical devices are deployed as an impermanent process that accesses a memory (Each of the categorized diagnostic models may be stored in memory, e.g. see Lee [0069], and the models perform the diagnosis and end once it is determined that the user has not changed the diagnostic model selection, e.g. see Lee [0104], Fig. 9. That is, the process is “impermanent” in that it has a defined, discrete start point, and a defined, discrete end point, rather than, for example, a process that is constantly running.);
providing, via the server, the patient data to input fields of the deployed one or more matched health prediction artificial intelligence medical devices (The matched diagnostic models analyze the patient information (i.e. the patient data is provided to the matched model), and generates a diagnosis based on the analysis, e.g. see Lee [0056] and [0100]-[0101].);
generating by the one or more deployed matched health prediction artificial intelligence devices, one or more patient health predictions based on the patient data provided to the input fields of the matched artificial intelligence medical device (The system generates a diagnosis (i.e. a patient health prediction) based on the diagnostic model’s analysis of the patient data, e.g. see Lee [0056] and [0100]-[0101].); and
wherein the one or more patient health predictions provided to the remote device are one or more patient health predictions generated by the one or more deployed matched health prediction artificial intelligence devices (The diagnosis unit may display (i.e. provide) the results of the diagnosis (i.e. a patient health prediction) obtained based on the diagnostic models (i.e. the one or more deployed matched health prediction artificial intelligence devices), e.g. see Lee [0056].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify Shimizu to incorporate the model selection and diagnosis as taught by Lee in order to provide a more accurate and precise diagnosis, e.g. see Lee [0006] and [0057].
But the combination of Shimizu and Lee does not teach and Okusu teaches the following:
wherein the method further comprises removing any patient data from the memory (The system includes a server apparatus that receives patient information from a patient database and transmits the patient data to a physician device such that the patient information may be used to generate a diagnosis, e.g. see Okusu [0034], Fig. 1. Additionally, the server apparatus transmits a cancellation signal to the physician device that deletes the patient information, e.g. see Okusu [0035] and [0054].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shimizu and Lee to incorporate the deletion of the patient data as taught by Okusu in order to prevent information leakage of medical charts, e.g. see Okusu [0035].
But the combination of Shimizu, Lee, and Okusu does not teach and Guo teaches the following:
wherein the steps of deploying the matched health prediction artificial intelligence medical devices, providing the patient data to input fields, generating the one or more patient health predictions, and providing the one or more health predictions are performed using a validation module that improves accuracy of predictions wherein the validation module evaluates an age of the received patient health data, and rejects a patient health prediction request from a remote device if the received patient data is older than a predetermined time value (The system generates patient predictions, for example predicting a probability of a patient having a heart attack, e.g. see Guo [0032], wherein the system enables the weighting of the data used to generate the prediction such that the weighted representations of the data are more accurate, e.g. see Guo [0038]. For example, the system enables a user to weight a phenotype feature by filtering the data used for predictions based on a time decay component that filters out data that is too old, e.g. see Guo [0039]-[0042].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shimizu, Lee, and Okusu to incorporate the filtering out of data used in predictions based on the age of the data as taught by Guo in order to enable the system to factor in a sequence of events and relevance of the data to the outcome in determining the prediction, e.g. see Guo [0042].
Regarding Claim 4, the combination of Shimizu, Lee, Okusu, and Guo teaches the limitations of Claim 1, and Lee further teaches the following:
The method of claim 1, wherein the input fields of the matched health prediction artificial intelligence medical devices have a selected set of values and determining the matched health prediction artificial intelligence medical devices comprises determining whether the patient data in the request comprises the selected set of values (The patient information categories used to determine the selection of the diagnostic models (i.e. whether the patient data comprises the selected set of values) includes data that may be quantified as feature values (i.e. a selected set of values), for example lesion feature values, e.g. see Lee [0047] and [0054]-[0056].).
Furthermore, before the effective filing date, it would have been obvious to one ordinarily skilled in the art of healthcare to modify the combination of Shimizu, Okusu, and Guo to incorporate utilizing the feature values to determine which model to select as taught by Lee in order to provide a more accurate and precise diagnosis, e.g. see Lee [0006] and [0057].
Regarding Claim 7, the combination of Shimizu, Lee, Okusu, and Guo teaches the limitations of Claim 1, and Guo further teaches the following:
The method of claim 1, wherein the validation module performs a plurality of validation rules that determine whether the received patient data meets predetermined data requirements, and if the predetermined data requirements are not met then rejecting the patient health prediction request from a remote device (The weighting of the data used to generate the prediction may be a weighted phenotype feature that is filtered based on a plurality of factors including the relevancy of the data and/or a time decay component that filters out data that is too old, e.g. see Guo [0034]-[0042].).
Furthermore, before the effective filing date, it would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Lee, and Okusu to incorporate the weighting of the data used in predictions based on a plurality of factors, as taught by Guo, in order to enable the system to factor in the sequence of events and the relevance of the data to the outcome in determining the prediction, e.g. see Guo [0042].
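For illustration only, a time-decay weighting of the kind described in the Guo mapping above (down-weighting data by age and filtering out data that is too old) can be sketched as follows. This sketch is not drawn from Guo; the function name and the half-life and cutoff parameters are hypothetical.

```python
from datetime import datetime, timedelta

def weight_by_age(observations, now, half_life_days=180.0, max_age_days=730.0):
    """Drop observations older than a hard age cutoff and give the rest an
    exponential time-decay weight (the weight halves every half_life_days)."""
    weighted = []
    for value, timestamp in observations:
        age_days = (now - timestamp).days
        if age_days > max_age_days:
            continue  # too old: filter this data point out entirely
        weight = 0.5 ** (age_days / half_life_days)
        weighted.append((value, weight))
    return weighted

# Example: three lab values of different ages; the 1000-day-old value is dropped.
now = datetime(2025, 8, 25)
observations = [(7.1, now - timedelta(days=10)),
                (6.8, now - timedelta(days=400)),
                (9.9, now - timedelta(days=1000))]
print(weight_by_age(observations, now))
```

The recent value keeps nearly its full weight while the 400-day-old value is discounted, so a downstream prediction can reflect both the sequence of events and the relevance of each data point.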
Regarding Claims 8 and 15, the limitations of Claims 8 and 15 are substantially similar to those of Claim 1, the sole difference being that Claim 1 recites a method, whereas Claim 8 recites a non-transitory computer storage and Claim 15 recites a system. Specifically pertaining to Claims 8 and 15, Examiner notes that Lee teaches a method, a system, and a non-transitory computer readable medium, e.g. see Lee [0003] and [0112]-[0113]. It would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, and Guo to incorporate the system and non-transitory medium embodiments as taught by Lee in order to enable multiple modalities to produce accurate and precise diagnoses, e.g. see Lee [0006] and [0057]. Hence, the grounds of rejection provided above for Claim 1 apply similarly to Claims 8 and 15.
Regarding Claims 11 and 18, the limitations of Claims 11 and 18 are substantially similar to those of Claim 4, the sole difference being that Claim 4 recites a method, whereas Claim 11 recites a non-transitory computer storage and Claim 18 recites a system. Specifically pertaining to Claims 11 and 18, Examiner notes that Lee teaches a method, a system, and a non-transitory computer readable medium, e.g. see Lee [0003] and [0112]-[0113]. It would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, and Guo to incorporate the system and non-transitory medium embodiments as taught by Lee in order to enable multiple modalities to produce accurate and precise diagnoses, e.g. see Lee [0006] and [0057]. Hence, the grounds of rejection provided above for Claim 4 apply similarly to Claims 11 and 18.
Regarding Claims 14 and 20, the limitations of Claims 14 and 20 are substantially similar to those of Claim 7, the sole difference being that Claim 7 recites a method, whereas Claim 14 recites a non-transitory computer storage and Claim 20 recites a system. Specifically pertaining to Claims 14 and 20, Examiner notes that Lee teaches a method, a system, and a non-transitory computer readable medium, e.g. see Lee [0003] and [0112]-[0113]. It would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, and Guo to incorporate the system and non-transitory medium embodiments as taught by Lee in order to enable multiple modalities to produce accurate and precise diagnoses, e.g. see Lee [0006] and [0057]. Hence, the grounds of rejection provided above for Claim 7 apply similarly to Claims 14 and 20.
Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shimizu, Lee, Okusu, and Guo, and further in view of Virkar (US 2010/0063948).
Regarding Claim 2, the combination of Shimizu, Lee, Okusu, and Guo teaches the limitations of Claim 1, and Lee further teaches the following:
The method of claim 1, wherein the trained machine learning model is trained to classify the patient data and provide the patient health prediction (The selected models include models that are trained utilizing learning data, for example utilizing an artificial neural network, e.g. see Lee [0066], wherein the models are ultimately utilized to produce a diagnosis (i.e. a patient health prediction), e.g. see Lee [0056].).
Therefore, before the effective filing date, it would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, and Guo to incorporate the machine learning model as taught by Lee in order to provide a more accurate and precise diagnosis, e.g. see Lee [0006] and [0057].
However, the combination of Shimizu, Lee, Okusu, and Guo does not teach the following limitations, which Virkar teaches:
wherein training comprises:
obtaining a positive group training data from a patient population who have received a positive diagnosis of a disease or condition of interest (The system includes a training data set including positive samples of sick patients, e.g. see Virkar [0080], [0082], [0103], and [0114].);
obtaining a negative group training data from a patient population who have not received a positive diagnosis of the disease or condition of interest (The system includes a training data set including negative samples of healthy patients, e.g. see Virkar [0080], [0082], [0103], and [0114].);
iteratively training the machine learning model with the positive and negative groups training data as sets of input features to the machine learning model (The system includes support vector machines (SVMs) which are trained in an iterative process such that features are ranked and least important features are eliminated, e.g. see Virkar [0124].);
determining the machine learning model performance for each set of input features (The performance of each list of features for the SVM is reviewed, for example based on various criteria such as leave out error rate and divergence between classes, wherein the best feature list is ultimately used to train a final SVM, e.g. see Virkar [0124].); and
excluding, from the machine learning model, sets of input features that do not improve performance of the machine learning model (The master training data set may be altered by removing noise or irrelevant features (i.e. features that do not improve performance of the machine learning model), e.g. see Virkar [0109].).
Furthermore, before the effective filing date, it would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Lee, Okusu, and Guo to incorporate the training steps as taught by Virkar in order to optimize the training of the learning machines, e.g. see Virkar [0109].
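For illustration only, the claimed training loop (iteratively scoring the model on candidate sets of input features and excluding features that do not improve performance) can be sketched as a greedy backward elimination. This sketch is not Virkar's SVM-based implementation; the scoring function and feature names are hypothetical stand-ins.

```python
def best_feature_subset(features, score_fn):
    """Greedy backward elimination: starting from all input features,
    repeatedly drop any single feature whose removal does not reduce the
    model's performance score, and keep the best-scoring subset."""
    current = list(features)
    best_score = score_fn(current)
    improved = True
    while improved and len(current) > 1:
        improved = False
        for feature in list(current):
            trial = [f for f in current if f != feature]
            trial_score = score_fn(trial)
            if trial_score >= best_score:  # this feature does not improve performance
                current, best_score = trial, trial_score
                improved = True
                break
    return current, best_score

# Toy stand-in score: two informative features help, a noise feature hurts.
def toy_score(feats):
    contribution = {"age": 0.5, "bmi": 0.25, "noise": -0.125}
    return sum(contribution.get(f, 0.0) for f in feats)

print(best_feature_subset(["age", "bmi", "noise"], toy_score))
# → (['age', 'bmi'], 0.75)
```

In practice the score would come from retraining and validating the model on each candidate feature set (e.g. a leave-out error rate), with the best-scoring set used to train the final model.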
Regarding Claims 9 and 16, the limitations of Claims 9 and 16 are substantially similar to those of Claim 2, the sole difference being that Claim 2 recites a method, whereas Claim 9 recites a non-transitory computer storage and Claim 16 recites a system. Specifically pertaining to Claims 9 and 16, Examiner notes that Lee teaches a method, a system, and a non-transitory computer readable medium, e.g. see Lee [0003] and [0112]-[0113]. It would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, and Guo to incorporate the system and non-transitory medium embodiments as taught by Lee in order to enable multiple modalities to produce accurate and precise diagnoses, e.g. see Lee [0006] and [0057]. Hence, the grounds of rejection provided above for Claim 2 apply similarly to Claims 9 and 16.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shimizu, Lee, Okusu, and Guo, and further in view of Nolan (US 2008/0104615) and Lenox (US 2014/0153795).
Regarding Claim 3, the combination of Shimizu, Lee, Okusu, and Guo teaches the limitations of Claim 1 but does not teach the following limitation, which Nolan teaches:
The method of claim 1, wherein the request comprises an API request (The system receives a request from an entity, wherein the request may be for an entry or modification of data, for example a medical diagnosis, wherein the request may comprise an API request, e.g. see Nolan [0026].).
Therefore, before the effective filing date, it would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Lee, Okusu, and Guo to incorporate the API request as taught by Nolan in order to provide a layer of security that allows only authorized request components to access certain data, e.g. see Nolan [0023].
However, the combination of Shimizu, Lee, Okusu, Guo, and Nolan does not teach the following limitation, which Lenox teaches:
wherein the patient data comprises one or more of: BMP, liver function test (LFT), and CBC with differential (The system includes a process used by clinicians to diagnose a disease, wherein the process includes multiple inputs including data such as metabolic data and diagnostic test data such as a complete blood count (CBC), e.g. see Lenox [0023] and [0030], Fig. 1.).
Furthermore, before the effective filing date, it would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Lee, Okusu, Guo, and Nolan to incorporate the metabolic and CBC data for diagnosis as taught by Lenox in order to perform the weighting and decision making process in diagnosing a patient in a quicker and more accurate manner as more information becomes available, e.g. see Lenox [0036].
Regarding Claims 10 and 17, the limitations of Claims 10 and 17 are substantially similar to those of Claim 3, the sole difference being that Claim 3 recites a method, whereas Claim 10 recites a non-transitory computer storage and Claim 17 recites a system. Specifically pertaining to Claims 10 and 17, Examiner notes that Lee teaches a method, a system, and a non-transitory computer readable medium, e.g. see Lee [0003] and [0112]-[0113]. It would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, Guo, and Nolan to incorporate the system and non-transitory medium embodiments as taught by Lee in order to enable multiple modalities to produce accurate and precise diagnoses, e.g. see Lee [0006] and [0057]. Hence, the grounds of rejection provided above for Claim 3 apply similarly to Claims 10 and 17.
Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shimizu, Lee, Okusu, and Guo, and further in view of Valdes (US 2018/0150599).
Regarding Claim 5, the combination of Shimizu, Lee, Okusu, and Guo teaches the limitations of Claim 1, and Lee further teaches the following:
The method of claim 1, wherein the input fields of the matched health prediction artificial intelligence medical devices have a selected set of values and determining the matched health prediction artificial intelligence medical devices comprises determining whether the patient data comprises the selected set of values (The patient information categories used to determine the selection of the diagnostic models (i.e. whether the patient data comprises the selected set of values) include data that may be quantified as feature values (i.e. a selected set of values), for example lesion feature values, e.g. see Lee [0047] and [0054]-[0056].).
Furthermore, before the effective filing date, it would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, and Guo to incorporate utilizing the feature values to determine which model to select, as taught by Lee, in order to provide a more accurate and precise diagnosis, e.g. see Lee [0006].
However, the combination of Shimizu, Lee, Okusu, and Guo does not teach the following limitation, which Valdes teaches:
wherein the method further comprises: when determining the patient data does not comprise the selected set of values, responding to the request with a status code indicating the missing selected set of values (The system assigns status values to fields of a patient record, wherein the statuses include an indication that data is missing or incomplete, e.g. see Valdes [0036]-[0039].).
Furthermore, before the effective filing date, it would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Lee, Okusu, and Guo to incorporate the missing data determination as taught by Valdes in order to ensure that the patient record contains correct and accurate data, e.g. see Valdes [0039].
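For illustration only, responding to a prediction request with a status code that identifies missing required values (as in the Valdes mapping above) might look like the following sketch. The function name, field names, and status codes are hypothetical and do not appear in the cited references.

```python
def validate_request(patient_data, required_fields):
    """Return (status_code, payload): 200 with the data when every required
    value is present, otherwise 422 listing the missing fields."""
    missing = [f for f in required_fields if patient_data.get(f) is None]
    if missing:
        return 422, {"error": "missing required values", "missing": missing}
    return 200, patient_data

# A request missing "bmi" and "cbc" is rejected with a descriptive status.
status, body = validate_request({"age": 54, "bmi": None}, ["age", "bmi", "cbc"])
print(status, body)
# → 422 {'error': 'missing required values', 'missing': ['bmi', 'cbc']}
```

The caller can use the returned status to distinguish a rejected request (missing selected values) from one that proceeds to the matched prediction device.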
Regarding Claim 12, the limitations of Claim 12 are substantially similar to those of Claim 5, the sole difference being that Claim 5 recites a method, whereas Claim 12 recites a non-transitory computer storage. Specifically pertaining to Claim 12, Examiner notes that Lee teaches a method, a system, and a non-transitory computer readable medium, e.g. see Lee [0003] and [0112]-[0113]. It would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, Guo, and Valdes to incorporate the system and non-transitory medium embodiments as taught by Lee in order to enable multiple modalities to produce accurate and precise diagnoses, e.g. see Lee [0006] and [0057]. Hence, the grounds of rejection provided above for Claim 5 apply similarly to Claim 12.
Claims 6, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Shimizu, Lee, Okusu, and Guo, and further in view of Thalmeier (US 2008/0154100).
Regarding Claim 6, the combination of Shimizu, Lee, Okusu, and Guo teaches the limitations of Claim 1, and Lee further teaches the following:
The method of claim 1, wherein the input fields of the matched health prediction artificial intelligence medical devices comprise selected values, and determining the matched health prediction artificial intelligence devices comprise determining whether patient data comprises the selected values (The patient information categories used to determine the selection of the diagnostic models (i.e. whether the patient data comprises the selected set of values) include data that may be quantified as feature values (i.e. a selected set of values), for example lesion feature values, e.g. see Lee [0047] and [0054]-[0056].).
Furthermore, before the effective filing date, it would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, and Guo to incorporate utilizing the feature values to determine which model to select, as taught by Lee, in order to provide a more accurate and precise diagnosis, e.g. see Lee [0006].
However, the combination of Shimizu, Lee, Okusu, and Guo does not teach the following limitation, which Thalmeier teaches:
wherein the selected values have corresponding selected ranges of the selected values, and wherein determining the matched health prediction artificial intelligence devices comprise determining whether patient data comprises the selected values and the corresponding selected ranges for a health prediction artificial intelligence medical device (The system selects an algorithm to utilize for patient data, wherein the selection is performed based on the categorization of the patient data, wherein the categorization may be based on a continuous data range, for example the patient’s age, e.g. see Thalmeier [0009].).
Furthermore, before the effective filing date, it would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Lee, Okusu, and Guo to incorporate selecting an algorithm based on the patient data range as taught by Thalmeier in order to provide a more reliable diagnostic interpretation, e.g. see Thalmeier [0006].
Regarding Claims 13 and 19, the limitations of Claims 13 and 19 are substantially similar to those of Claim 6, the sole difference being that Claim 6 recites a method, whereas Claim 13 recites a non-transitory computer storage and Claim 19 recites a system. Specifically pertaining to Claims 13 and 19, Examiner notes that Lee teaches a method, a system, and a non-transitory computer readable medium, e.g. see Lee [0003] and [0112]-[0113]. It would have been obvious to one of ordinary skill in the art of healthcare to modify the combination of Shimizu, Okusu, Guo, and Thalmeier to incorporate the system and non-transitory medium embodiments as taught by Lee in order to enable multiple modalities to produce accurate and precise diagnoses, e.g. see Lee [0006] and [0057]. Hence, the grounds of rejection provided above for Claim 6 apply similarly to Claims 13 and 19.
Response to Arguments
Applicant’s arguments, see Remarks, filed August 25, 2025, with respect to the rejections of Claims 1-20 under 35 U.S.C. 101 have been fully considered but are not persuasive.
Applicant argues that the claimed invention is patent eligible because it provides significantly more than an abstract idea, specifically in that it optimizes processing and reduces the resources needed to instantiate an AI medical device by vetting data prior to processing it via the AI device, thereby reducing unnecessary computational processing, e.g. see pg. 12 of Remarks. Examiner disagrees.
The amended feature of rejecting data that is beyond a certain age, and otherwise processing the data in order to generate and provide the patient health predictions, does not represent an improvement to the processing operations of the AI medical device and/or the computer itself, because this step does not change the function of the AI medical device and/or the computer but instead merely performs the same operations on a particular and/or smaller dataset. That is, rather than improving the processing steps of the AI medical device, for example by altering the particular data pro