Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner’s Note
The Examiner encourages Applicant to schedule an interview to discuss issues related to, for example, the rejections noted below under 35 U.S.C. § 112, § 101, and § 102/103, to move the application forward to allowance.
Applicant is strongly requested to provide, in the Remarks, supporting paragraph(s) for each limitation of any amended/new claim(s), so that the Examiner can interpret the claims clearly and definitely.
Priority
Acknowledgment is made of the filing date of the present application, 06/22/2023.
Claim Objections
Claim(s) 1-20 is/are objected to because of the following informalities.
Claim(s) 1 is/are objected to because of the following informalities:
it appears that “a convergence criteria” (in step G) needs to read “a convergence criterion” or something else. Appropriate correction is required. In addition, “the convergence criteria” (in step H) is objected to for the same reason, as are claim(s) 4-5, 12, 15-16, 19.
it appears that “the final subset of selected features” (in step I) needs to read “the selected final subset of features” or something else. Appropriate correction is required. In addition, claim(s) 11-12, 19-20 is/are objected to for the same reason.
Claim(s) 10 is/are objected to because of the following informalities: it appears that “the list of final list of features” (line 3) needs to read “the final list of selected features” or something else. Appropriate correction is required.
Claim(s) 1, 4-5, 10-12, 15-16, 19-20 each recite(s) limitations that raise the informality issues set forth above, and their dependent claims are objected to at least based on their direct and/or indirect dependency from the claims listed above. Appropriate explanation and/or amendment is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim(s) 1 recite(s) the limitation “the outcome variable” (line 8). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears it may need to read “an outcome variable”, or something else. For the purposes of examination, “an outcome variable” is used. In addition, claim(s) 12, 19 is/are rejected for the same reason.
Claim(s) 1 recite(s) the limitation “the plurality of features that were not selected randomly” (in step D). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to since step (B) recites “randomly selecting the plurality of features”; the two appear to be contradictory. It appears it may need to read “a second plurality of features that were not selected randomly”, or something else. For the purposes of examination, “a second plurality of features that were not selected randomly” is used. In addition, claim(s) 12, 19 is/are rejected for the same reason.
Claim(s) 2 recite(s) the limitation “the feature selection process” (line 1). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears it may need to read “a feature selection process”, or something else. For the purposes of examination, “a feature selection process” is used. In addition, claim(s) 13 is/are rejected for the same reason.
The term “similar” (claim 2, line 4) is a relative term which renders the claim indefinite. The term “similar” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. In addition, claim(s) 13 is/are rejected for the same reason.
Claim(s) 2 recite(s) the limitation “the corresponding randomly selected subset of features” (line 5). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears it may need to read “a corresponding randomly selected subset of features”, or something else. For the purposes of examination, “a corresponding randomly selected subset of features” is used.
Claim(s) 2 recite(s) the limitation “the subset of features” (line 7). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to since it may indicate “a first subset of features”, “a second subset of features”, “a final subset of features” or something else. It appears it may need to read “a subset of features”, or something else. For the purposes of examination, “a subset of features” is used. In addition, claim(s) 13 is/are rejected for the same reason.
Claim(s) 4 recite(s) the limitation “the current iteration” (line 2). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears it may need to read “a current iteration”, or something else. For the purposes of examination, “a current iteration” is used. In addition, claim(s) 5, 15-16 is/are rejected for the same reason.
Claim(s) 9 recite(s) the limitation “the total number of controls” (line 3). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears it may need to read “a total number of controls”, or something else. For the purposes of examination, “a total number of controls” is used.
The term “similar” (claim 9, line 6) is a relative term which renders the claim indefinite. The term “similar” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claim(s) 13 recite(s) the limitation “they” (second-to-last line). There is insufficient antecedent basis for this limitation in the claim. It is not clear what it is referring to. It appears it may need to read “features”, or something else. For the purposes of examination, “features” is used.
Claim(s) 1-2, 4-5, 9, 12-13, 15-16, 19 each recite(s) limitations that raise issues of indefiniteness as set forth above, and their dependent claims are rejected at least based on their direct and/or indirect dependency from the claims listed above. Appropriate explanation and/or amendment is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 1
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“… method for … comprising using a dataset of features and an outcome:
(A) generating a table for a dataset comprising a plurality of features, …;
(B) randomly selecting the plurality of features from the dataset, thereby creating a first subset of features;
(C) operating a propensity score matching using the randomly selected plurality of features to identify a subset of cases and controls using the outcome variable;
(D) rewarding one or more features of a second subset of features consisting of features in the plurality of features that were not selected randomly, each feature of the second subset addresses a statistical significance criteria;
(E) updating each entry in the table with a reward distance between each pair of features;
(F) calculating a cumulative reward measure;
…;
(H) selecting a final subset of features when a variability criteria of the cumulative reward measure addresses the convergence criteria; and
…”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
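For illustration only, the limitations quoted above can be paraphrased as the following minimal Python sketch; all names are hypothetical, and the statistical-significance criterion of step (D) is abstracted into a caller-supplied reward_fn:

    import random

    def select_features(features, reward_fn, k=10, threshold=0.01, max_iter=1000):
        # (A) table of numerical values for each pair of features, initialized to zero
        table = {(a, b): 0.0 for a in features for b in features if a < b}
        prev, n_iter = None, 0
        while n_iter < max_iter:
            n_iter += 1
            first = random.sample(features, k)  # (B) random first subset
            # (C)-(D) reward features outside the random subset that meet a
            # statistical-significance criterion (delegated to reward_fn)
            rewarded = [f for f in features if f not in first and reward_fn(f, first)]
            for a in rewarded:  # (E) update pairwise table entries
                for b in rewarded:
                    if a < b:
                        table[(a, b)] += 1.0
            measure = sum(table.values()) / n_iter  # (F) cumulative reward measure
            if prev is not None and abs(measure - prev) < threshold:
                break  # (G)-(H) convergence criterion met
            prev = measure
        # final subset: features that accumulated a positive reward
        return [f for f in features
                if any(v > 0 for (a, b), v in table.items() if f in (a, b))]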
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites additional elements that are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). In particular, the claim recites an additional element (“A computer-implemented”) – using a device and/or a model to process data. The device and the model in each step are recited at a high level of generality (i.e., as a generic computer performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
In particular, the claim recites additional elements (“training a predictive model”, “(I) training the predictive model using the final subset of selected features”). These additional elements are recited at such a high level of generality, without any detail as to how the model is trained, that they amount to only the idea of a solution or outcome; they fail to recite details of how a solution to a problem is accomplished and, therefore, represent no more than mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
In particular, the claim recites an additional element (“the table containing numerical values for each pair of features in the dataset”). This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
In particular, the claim recites an additional element (“(G) iterating steps (B)-(F) until a convergence criteria is met”) – the act of repeating. The claim is adding an insignificant extra-solution activity to the judicial exception – see MPEP 2106.05(g). The act of repeating is recited at a high level of generality such that it amounts to no more than a mere instruction to apply the exception using a generic act of repeating. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer component to perform each step amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible. MPEP 2106.05(f).
The additional elements regarding training are recited at such a high level of generality, without any detail as to how a model is trained, that they amount to only the idea of a solution or outcome; they fail to recite details of how a solution to a problem is accomplished and, therefore, represent no more than mere instructions to apply the judicial exception on a computer (see MPEP 2106.05(f)). Accordingly, these additional elements do not amount to significantly more than the abstract idea. The claim is directed to an abstract idea.
This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
As discussed above, the claim recites the additional element(s) of repeating at a high-level of generality and is adding an insignificant extra-solution activity – see MPEP 2106.05(g). However, the addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood, routine, and conventional. See MPEP 2106.05(d)(II) – “Performing repetitive calculations”. Accordingly, this additional element does not provide an inventive concept and significantly more than the abstract idea. Thus, the claim is not patent eligible.
Regarding claim 2
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“matching cases and controls to select a plurality of case-control subsets for the first subset of features, …; and
identifying, for each case-control subset, a plurality of features absent from the subset of features used to match the case-control subset, in which such features of the second subset of features address a statistical significance criteria”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
In particular, the claim recites an additional element (“each case-control subset having similar values for the corresponding randomly selected subset of features”). This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
Regarding claim 3
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1: The claim recites the abstract idea identified above regarding claim 1.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
In particular, the claim recites an additional element (“wherein the cumulative reward measure is a sum of all values in the table divided by a number of iterations”). This is a recitation of a particular type or source of data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
This is a recitation of a particular type or source of data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
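For illustration only, the claim 3 measure reduces to a one-line computation (hypothetical names):

    def cumulative_reward(table, n_iterations):
        # sum of all values in the pairwise table divided by the number of iterations
        return sum(table.values()) / n_iterations

    # e.g., cumulative_reward({("age", "bmi"): 3.0, ("age", "sex"): 1.0}, 2) == 2.0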
Regarding claim 4
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“wherein the convergence criteria is met based on a deviation of a calculated cumulative reward measure of the current iteration from an immediately prior calculated cumulative reward measure being less than a predetermined threshold”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the claim is not patent eligible.
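For illustration only, the claim 4 convergence criterion amounts to the following check (hypothetical names):

    def converged(current, previous, threshold):
        # deviation of the current cumulative reward measure from the
        # immediately prior one, compared against a predetermined threshold
        return abs(current - previous) < threshold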
Regarding claim 5
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“wherein the convergence criteria is met based on a deviation of a calculated cumulative reward measure of the current iteration from a moving average cumulative reward measure of an immediately previous set of iterations being less than a predetermined threshold”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the claim is not patent eligible.
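For illustration only, the claim 5 criterion differs from claim 4 only in the reference value (hypothetical names; the caller supplies a non-empty window of recent measures):

    def converged(current, window, threshold):
        # deviation from a moving average over an immediately previous
        # set of iterations
        moving_average = sum(window) / len(window)
        return abs(current - moving_average) < threshold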
Regarding claim 6
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“wherein rewarding one or more features of the second subset of features comprises rewarding each feature that addresses a statistical significance threshold in the second subset of features by a constant value”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the claim is not patent eligible.
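For illustration only, the constant-value rewarding of claim 6 might be sketched as follows (hypothetical names):

    def reward_constant(p_values, alpha=0.001, points=1.0):
        # every feature whose p-value meets the significance threshold
        # receives the same constant reward
        return {f: points for f, p in p_values.items() if p < alpha}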
Regarding claim 7
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“wherein rewarding one or more features of the second subset of features comprises rewarding each feature that addresses a statistical significance threshold in the second subset of features by a variable value, …”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
In particular, the claim recites an additional element (“wherein the variable value is a function of at least one of a type of feature, a number of iterations, and a number of selected features”). This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not integrate the abstract idea into a practical application. See MPEP 2106.05(h).
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
This is a recitation of a particular type or source of model/data to be used in performing the abstract idea. Limiting the abstract idea to a particular type or source of model/data is an attempt to limit the abstract idea to a particular field of use or technological environment, which does not amount to significantly more than the abstract idea. See MPEP 2106.05(h).
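For illustration only, the variable-value rewarding of claim 7 might be sketched as follows; the particular function of feature type, iteration count, and number of selected features is an arbitrary choice, not taught by the claim:

    def reward_variable(p_values, feature_types, n_iterations, n_selected, alpha=0.001):
        # reward as a function of feature type, number of iterations, and
        # number of already-selected features
        def value(feature):
            base = 2.0 if feature_types.get(feature) == "continuous" else 1.0
            return base / (1.0 + n_selected / max(n_iterations, 1))
        return {f: value(f) for f, p in p_values.items() if p < alpha}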
Regarding claim 8
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1: The claim recites the abstract idea identified above regarding claim 1.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites additional elements that are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). In particular, the claim recites an additional element (“applying the predictive model to predict outcomes”) – using a device and/or a model to process data. The device and the model in each step are recited at a high level of generality (i.e., as a generic computer performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer component to perform each step amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible. MPEP 2106.05(f).
Regarding claim 9
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“selecting a case-control ratio value to determine the total number of controls to identify per case; and
selecting a caliper value to identify a subset of controls associated with features of similar values to the features of the cases”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the claim is not patent eligible.
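For illustration only, the caliper value and case-control ratio value of claim 9 might be applied as follows (hypothetical names; matching with replacement across cases for simplicity):

    def match_controls(cases, controls, propensity, ratio=4, caliper=0.05):
        # for each case, keep up to `ratio` controls whose propensity scores
        # lie within the caliper distance of the case's score
        matched = {}
        for case in cases:
            near = [c for c in controls if abs(propensity[c] - propensity[case]) <= caliper]
            near.sort(key=lambda c: abs(propensity[c] - propensity[case]))
            matched[case] = near[:ratio]
        return matched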
Regarding claim 10
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“determining a final list of selected features when convergence criteria is met, each feature in the list of final list of features received a positive reward”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claim does not recite additional elements. Thus, the claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Thus, the claim is not patent eligible.
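For illustration only, the positive-reward condition of claim 10 might be realized as follows (hypothetical names):

    def final_feature_list(table, features):
        # the final list contains exactly those features whose accumulated
        # reward across all pairwise table entries is positive
        totals = {f: 0.0 for f in features}
        for (a, b), value in table.items():
            totals[a] += value
            totals[b] += value
        return [f for f, total in totals.items() if total > 0]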
Regarding claim 11
The claim is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: The claim recites a method; therefore, it falls into the statutory category of processes.
Step 2A Prong 1:
The limitations of
“evaluating the predictive model against a reference model to validate accuracy of the predictive model using the final subset of selected features, …”, as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim element precludes the step from practically being performed in the mind. For example, the limitations in the context of this claim encompass the user mentally thinking with a physical aid (e.g., pencil and paper).
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites additional elements that are mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea. See MPEP 2106.05(f). In particular, the claim recites an additional element (“wherein the predictive model and the reference models are trained using the dataset”) – using a device and/or a model to process data. The device and the model in each step are recited at a high level of generality (i.e., as a generic computer performing a generic computer function of processing data) such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above, with respect to integration of the abstract idea into a practical application, the additional elements of using a generic computer component to perform each step amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible. MPEP 2106.05(f).
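For illustration only, the claim 11 evaluation of a predictive model against a reference model might be sketched as follows; the AUC metric and scikit-learn usage are assumptions, not recited by the claim:

    from sklearn.metrics import roc_auc_score

    def compare_models(predictive, reference, X_test_selected, X_test_all, y_test):
        # evaluate the model trained on the final subset of selected features
        # against a reference model trained on the full feature set
        auc_selected = roc_auc_score(y_test, predictive.predict_proba(X_test_selected)[:, 1])
        auc_reference = roc_auc_score(y_test, reference.predict_proba(X_test_all)[:, 1])
        return auc_selected, auc_reference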
Regarding claim 12
The claim recites “A computer system for training a predictive model, the computer system comprising: one or more computer processors; one or more computer readable storage media; program instructions stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, the program instructions comprising instructions using a dataset of features and an outcome to:” to perform precisely the method of Claim 1. As performance of an abstract idea on generic computer components (see MPEP 2106.05(f)) and “Storing and retrieving information in memory” (see MPEP 2106.05(g) on Insignificant Extra-Solution Activity, and MPEP 2106.05(d) on Well-Understood, Routine, Conventional Activity) cannot integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself, the claim is rejected for the reasons set forth in the rejection of Claim 1.
Regarding claim 13
The claim is rejected for the reasons set forth in the rejection of the combination of Claims 1 and 2 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 14
The claim is rejected for the reasons set forth in the rejection of Claim 3 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 15
The claim is rejected for the reasons set forth in the rejection of Claim 4 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 16
The claim is rejected for the reasons set forth in the rejection of Claim 5 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 17
The claim is rejected for the reasons set forth in the rejection of Claim 6 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 18
The claim is rejected for the reasons set forth in the rejection of Claim 7 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Regarding claim 19
The claim recites “A computer program product for training a predictive model, the computer program product comprising one or more computer readable storage media collectively having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to rank features using a dataset, features, and an outcome:” to perform precisely the method of Claim 1. As performance of an abstract idea on generic computer components (see MPEP 2106.05(f)) and “Storing and retrieving information in memory” (see MPEP 2106.05(g) on Insignificant Extra-Solution Activity, and MPEP 2106.05(d) on Well-Understood, Routine, Conventional Activity) cannot integrate the abstract idea into a practical application or provide significantly more than the abstract idea itself, the claim is rejected for the reasons set forth in the rejection of Claim 1.
Regarding claim 20
The claim is rejected for the reasons set forth in the rejection of Claim 11 under 35 U.S.C. 101, mutatis mutandis, as reciting an abstract idea without integrating the judicial exception into a practical application or providing significantly more than the judicial exception.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-3, 6-14, 17-20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kartoun et al. (US 20210342735 A1).
Regarding claim 1
Kartoun teaches
A computer-implemented method for training a predictive model comprising using a dataset of features and an outcome:
(Kartoun [fig(s) 5] [par(s) 57-75] “computer” [par(s) 3] “According to one embodiment of the present invention, a computer system trains a predictive model. A plurality of subsets of features are selected from a dataset comprising a plurality of cases and controls and a plurality of features. Cases and controls are matched to select a plurality of case-control subsets for each subset of features, each case-control subset having similar values for the corresponding subset of features. For each case-control subset, a statistical significance of each feature of the plurality of features absent from the subset of features used to match the case-control subset is identified. A final subset of features is selected based on the statistical significance of each feature for the plurality of case-control subsets. A predictive model is trained using the final subset of features.” [par(s) 25] “Feature subset module 130 processes an input dataset containing features and outcomes to identify different subsets of features for use in subpopulation analysis.”;)
(A) generating a table for a dataset comprising a plurality of features, the table containing numerical values for each pair of features in the dataset;
(Kartoun [par(s) 25] “Feature subset module 130 processes an input dataset containing features and outcomes to identify different subsets of features for use in subpopulation analysis. A dataset may include a plurality of records that each include values for various features and outcomes. Each feature, also referred to as a covariate or variable or attribute, includes a value that describes a record in some manner. For example, a clinical dataset may include features of age, gender, disease status, laboratory observation, administered medication status, and the like, along with an outcome of interest. Additionally or alternatively, features can be extracted from clinical narrative notes using conventional or other natural language processing techniques. Thus, each record in the clinical dataset includes values for the features that together describe a patient. Additionally, each record specifies an outcome (e.g., “recovered” or “not recovered”). Records that include true values (e.g., “1”) for the outcome of interest are referred to as cases, and records that include false values (e.g., “0”) for the outcome of interest are referred to as controls. In some embodiments, a dataset may be arranged as a tabular two-dimension data frame. For example, a set of clinical data that describes 43,000 patients in terms of 199 features may have 200 columns (one for each of the 199 features, and one indicating an outcome) and 43,000 rows (each of which includes a single patient's values for the 199 features and an outcome).”;)
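For illustration only, the tabular arrangement described in the cited passage corresponds to a conventional data frame; the pandas usage is an assumption, with dimensions taken from the quote:

    import numpy as np
    import pandas as pd

    # 43,000 rows (patients) x 200 columns: 199 feature columns plus one
    # binary outcome column (1 = case, 0 = control)
    rng = np.random.default_rng(0)
    columns = [f"feature_{i}" for i in range(199)] + ["outcome"]
    df = pd.DataFrame(rng.integers(0, 2, size=(43_000, 200)), columns=columns)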
(B) randomly selecting the plurality of features from the dataset, thereby creating a first subset of features;
(Kartoun [par(s) 26] “Feature subset module 130 identifies different subsets of features by randomly assigning features to subsets. The number of features that feature subset module 130 assigns to a given subset may be predetermined or defined by some input parameter, which can be provided by a user of client device 105. In some embodiments, a subset's number of features may be much smaller than the overall number of features of a dataset. For example, in a dataset containing 199 features, each subset may include ten features. In some embodiments, feature subset module 130 assigns features to subsets using an exhaustive approach until all of a dataset's features are assigned. For example, in a dataset of 199 features and an outcome, ten features may be selected at random out of the 199 for a first subset, another ten features may be randomly selected out of the remaining 189 features, etc. In some embodiments, features are randomly selected out of the entire available set of features, resulting in different subsets that may share one or more features in common.”;)
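For illustration only, the exhaustive random assignment described in the cited passage can be sketched as follows (hypothetical names):

    import random

    def assign_subsets(features, subset_size=10):
        # shuffle once, then slice: each feature lands in exactly one subset
        # until none remain (the last subset may be smaller)
        pool = list(features)
        random.shuffle(pool)
        return [pool[i:i + subset_size] for i in range(0, len(pool), subset_size)]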
(C) operating a propensity score matching using the randomly selected plurality of features to identify a subset of cases and controls using the outcome variable;
(Kartoun [par(s) 28] “Propensity score matching module 135 applies one or more propensity score matching techniques to a dataset to identify, for each subset of features, a subset of cases and controls that are similar in terms of their values for the subset of features. In particular, propensity score matching module 135 identifies case-control subsets by applying propensity score matching and filtering results using a caliper value and a case-control ratio value. The propensity score matching is based on the outcome variable and on the subset of features selected by feature subset module 130. In particular, a propensity score can be calculated for feature of a record with respect to the outcome, and caliper values and case-control ratio values are used to filter the results to identify matchings.” [par(s) 30] “Thus, propensity score matching module 135 identifies a case-control subset for each feature subset, with each case-control subset containing both cases and controls that share similar values for features of the corresponding feature subset, but have different outcomes (as cases have different outcomes from controls by definition). Each case-control subset identified by propensity score matching module 135 is processed by feature selection module 140 to select features, which are used by machine learning module 145 to train and evaluate a model using the selected features.”;)
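For illustration only, one conventional way to obtain the propensity scores the cited passage relies on; the scikit-learn usage and function name are assumptions, not taught verbatim by Kartoun:

    from sklearn.linear_model import LogisticRegression

    def propensity_scores(X_subset, outcome):
        # regress the outcome on the subset of features and take the predicted
        # probability of the outcome as each record's propensity score
        model = LogisticRegression(max_iter=1000).fit(X_subset, outcome)
        return model.predict_proba(X_subset)[:, 1]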
(D) rewarding one or more features of a second subset of features consisting of features in the plurality of features that were not selected randomly, each feature of the second subset addresses a statistical significance criteria;
(Kartoun [par(s) 31] “Feature selection module 140 analyzes values of features of each case-control subset to identify features that are associated with the outcome in a statistically significant manner. Specifically, while a case-control subset includes cases and controls that have very similar values for the subset of features used to match those cases and controls, feature selection module 140 analyzes values of cases and controls for the features that were not included in the subset of features. For example, if a case-control subset contains records that are matched according to a subset of ten particular features, and a dataset has 199 features overall, then feature selection module 140 will analyze the values for the remaining 189 features in order to identify features that are relevant to distinguishing the difference in outcome between cases and controls.” [par(s) 33] “Once feature selection module 140 determines p-values for each feature of a case-control subset, excluding the features used to match cases to controls, feature selection module 140 may rank the features according to p-value. Feature selection module 140 may determine whether each feature of a case-control subset has a p-value that satisfies a predetermined significance threshold. For example, feature selection module 140 may identify features having a p-value of less than 0.001. Feature selection module 140 may assign a selection score for each feature that corresponds to the number of case-control subsets in which the feature's p-value satisfies the significance threshold. For example, feature selection module 140 may assign a single point to a feature's selection score for every instance of the feature's p-value that satisfies a significance threshold in a given case-control subset.” [par(s) 49] “A selection score for each identified feature is adjusted at operation 330. Each feature that is identified using the significance threshold may be noted by increasing a value of the feature's selection score. For example, a point may be rewarded to a feature every time that the feature is identified as significant in a particular case-control subset”;)
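For illustration only, the p-value scoring in the cited passages might be realized as follows; Kartoun does not name a specific test, so the Mann-Whitney U test (suited to continuous features that are not normally distributed) is an assumption:

    from scipy.stats import mannwhitneyu

    def score_feature(case_values, control_values, scores, feature, alpha=0.001):
        # compare the feature's values between cases and controls; award one
        # point to the feature's selection score when p satisfies the threshold
        _, p = mannwhitneyu(case_values, control_values)
        if p < alpha:
            scores[feature] = scores.get(feature, 0) + 1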
(E) updating each entry in the table with a reward distance between each pair of features;
(Kartoun [par(s) 33] “Once feature selection module 140 determines p-values for each feature of a case-control subset, excluding the features used to match cases to controls, feature selection module 140 may rank the features according to p-value. Feature selection module 140 may determine whether each feature of a case-control subset has a p-value that satisfies a predetermined significance threshold. For example, feature selection module 140 may identify features having a p-value of less than 0.001. Feature selection module 140 may assign a selection score for each feature that corresponds to the number of case-control subsets in which the feature's p-value satisfies the significance threshold. For example, feature selection module 140 may assign a single point to a feature's selection score for every instance of the feature's p-value that satisfies a significance threshold in a given case-control subset.”;)
(F) calculating a cumulative reward measure;
(Kartoun [par(s) 33-34] “Feature selection module 140 may assign a selection score for each feature that corresponds to the number of case-control subsets in which the feature's p-value satisfies the significance threshold. For example, feature selection module 140 may assign a single point to a feature's selection score for every instance of the feature's p-value that satisfies a significance threshold in a given case-control subset. When feature selection module 140 has processed all of the case-control subsets to obtain selection scores for each feature in a dataset, the features may be ranked according to selection score, and a final subset of features may be selected for training a model. In some embodiments, feature selection module 140 compares the selection scores of each feature to a selection threshold value, and selects all features that satisfy the selection threshold value. In some embodiments, feature selection module 140 selects a predefined number of features having the highest selection scores. In some embodiments, feature selection module 140 selects features whose selection scores are at or above a particular percentile (e.g., a top 5% of features).”;)
(G) iterating steps (B)-(F) until a convergence criteria is met;
(Kartoun [par(s) 27] “In some embodiments, feature subset module 130 generates a predetermined or defined number of subsets of features. Alternatively, feature subset module 130 may exhaustively assign features until there are no remaining unassigned features in a dataset. Feature subset module 130 may identify a subset of features for each unique combination of features.” [par(s) 33-34] “When feature selection module 140 has processed all of the case-control subsets to obtain selection scores for each feature in a dataset, the features may be ranked according to selection score, and a final subset of features may be selected for training a model. In some embodiments, feature selection module 140 compares the selection scores of each feature to a selection threshold value, and selects all features that satisfy the selection threshold value. In some embodiments, feature selection module 140 selects a predefined number of features having the highest selection scores. In some embodiments, feature selection module 140 selects features whose selection scores are at or above a particular percentile (e.g., a top 5% of features).”;)
(H) selecting a final subset of features when a variability criteria of the cumulative reward measure addresses the convergence criteria; and
(Kartoun [par(s) 33-34] “When feature selection module 140 has processed all of the case-control subsets to obtain selection scores for each feature in a dataset, the features may be ranked according to selection score, and a final subset of features may be selected for training a model. In some embodiments, feature selection module 140 compares the selection scores of each feature to a selection threshold value, and selects all features that satisfy the selection threshold value. In some embodiments, feature selection module 140 selects a predefined number of features having the highest selection scores. In some embodiments, feature selection module 140 selects features whose selection scores are at or above a particular percentile (e.g., a top 5% of features).”;)
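For illustration only, the percentile-based option in the cited passage might be sketched as follows (hypothetical names):

    def select_final(scores, top_percent=5.0):
        # keep features whose selection scores fall in the stated top
        # percentile (one of the options the cited passage describes)
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        keep = max(1, int(len(ranked) * top_percent / 100))
        return [feature for feature, _ in ranked[:keep]]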
(I) training the predictive model using the final subset of selected features.
(Kartoun [par(s) 33-35] “Machine learning module 145 trains data models, using the values of selected features, to perform outcome forecasting. Machine learning module 145 may train a data model using the features selected by feature selection module 140 to forecast outcomes. Machine learning module 145 may train models using the selected feature values for all records of a dataset, or may train models using the selected feature values for a subpopulation of a dataset. Machine learning module 145 may apply conventional or other machine learning techniques to train models. In some embodiments, machine learning module 145 utilizes logistic regression to train a predictive model.” [par(s) 3] “A predictive model is trained using the final subset of features.”;)
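For illustration only, since the cited passage names logistic regression as one training option, the training step might look like the following sketch (pandas/scikit-learn usage assumed):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def train_final_model(df: pd.DataFrame, selected_features, outcome_column):
        # train on the values of only the finally selected features
        X = df[selected_features]
        y = df[outcome_column]
        return LogisticRegression(max_iter=1000).fit(X, y)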
Regarding claim 2
Kartoun teaches claim 1.
wherein the feature selection process comprises: (See claim 1)
Kartoun further teaches
matching cases and controls to select a plurality of case-control subsets for the first subset of features,
(Kartoun [par(s) 3] “Cases and controls are matched to select a plurality of case-control subsets for each subset of features, each case-control subset having similar values for the corresponding subset of features.” [par(s) 30] “Thus, propensity score matching module 135 identifies a case-control subset for each feature subset, with each case-control subset containing both cases and controls that share similar values for features of the corresponding feature subset, but have different outcomes (as cases have different outcomes from controls by definition). Each case-control subset identified by propensity score matching module 135 is processed by feature selection module 140 to select features, which are used by machine learning module 145 to train and evaluate a model using the selected features” [par(s) 28] “Propensity score matching module 135 applies one or more propensity score matching techniques to a dataset to identify, for each subset of features, a subset of cases and controls that are similar in terms of their values for the subset of features. In particular, propensity score matching module 135 identifies case-control subsets by applying propensity score matching and filtering results using a caliper value and a case-control ratio value.”;)
each case-control subset having similar values for the corresponding randomly selected subset of features; and
(Kartoun [par(s) 3] “Cases and controls are matched to select a plurality of case-control subsets for each subset of features, each case-control subset having similar values for the corresponding subset of features.” [par(s) 30] “Thus, propensity score matching module 135 identifies a case-control subset for each feature subset, with each case-control subset containing both cases and controls that share similar values for features of the corresponding feature subset, but have different outcomes (as cases have different outcomes from controls by definition). Each case-control subset identified by propensity score matching module 135 is processed by feature selection module 140 to select features, which are used by machine learning module 145 to train and evaluate a model using the selected features” [par(s) 28] “Propensity score matching module 135 applies one or more propensity score matching techniques to a dataset to identify, for each subset of features, a subset of cases and controls that are similar in terms of their values for the subset of features. In particular, propensity score matching module 135 identifies case-control subsets by applying propensity score matching and filtering results using a caliper value and a case-control ratio value.”;)
identifying, for each case-control subset, a plurality of features absent from the subset of features used to match the case-control subset,
(Kartoun [par(s) 31] “Feature selection module 140 analyzes values of features of each case-control subset to identify features that are associated with the outcome in a statistically significant manner. Specifically, while a case-control subset includes cases and controls that have very similar values for the subset of features used to match those cases and controls, feature selection module 140 analyzes values of cases and controls for the features that were not included in the subset of features. For example, if a case-control subset contains records that are matched according to a subset of ten particular features, and a dataset has 199 features overall, then feature selection module 140 will analyze the values for the remaining 189 features in order to identify features that are relevant to distinguishing the difference in outcome between cases and controls.”;)
in which such features of the second subset of features address a statistical significance criteria.
(Kartoun [par(s) 31] “Feature selection module 140 analyzes values of features of each case-control subset to identify features that are associated with the outcome in a statistically significant manner. Specifically, while a case-control subset includes cases and controls that have very similar values for the subset of features used to match those cases and controls, feature selection module 140 analyzes values of cases and controls for the features that were not included in the subset of features.” [par(s) 33] “Once feature selection module 140 determines p-values for each feature of a case-control subset, excluding the features used to match cases to controls, feature selection module 140 may rank the features according to p-value. Feature selection module 140 may determine whether each feature of a case-control subset has a p-value that satisfies a predetermined significance threshold. For example, feature selection module 140 may identify features having a p-value of less than 0.001. … feature selection module 140 may assign a single point to a feature's selection score for every instance of the feature's p-value that satisfies a significance threshold in a given case-control subset.” [par(s) 48] “Features of a case-control subset whose statistical significance satisfy a significance threshold are identified at operation 320.”;)
Regarding claim 3
Kartoun teaches claim 1.
Kartoun further teaches
wherein the cumulative reward measure is a sum of all values in the table divided by a number of iterations.
(Kartoun [par(s) 33-35] “Feature selection module 140 may assign a selection score for each feature that corresponds to the number of case-control subsets in which the feature's p-value satisfies the significance threshold. For example, feature selection module 140 may assign a single point to a feature's selection score for every instance of the feature's p-value that satisfies a significance threshold in a given case-control subset. When feature selection module 140 has processed all of the case-control subsets to obtain selection scores for each feature in a dataset, the features may be ranked according to selection score, and a final subset of features may be selected for training a model. In some embodiments, feature selection module 140 compares the selection scores of each feature to a selection threshold value, and selects all features that satisfy the selection threshold value. In some embodiments, feature selection module 140 selects a predefined number of features having the highest selection scores. In some embodiments, feature selection module 140 selects features whose selection scores are at or above a particular percentile (e.g., a top 5% of features).” [par(s) 4] “By using a probability score, significance of different types of features can all be compared, including categorical features, continuous features that are normally distributed, and continuous features not normally distributed. In some embodiments, the selection threshold value comprises a percentage of case-control subsets in which the statistical significance of the feature satisfies the significance threshold value” See also [sec(s) 51]; e.g., “selection score” read(s) on “a sum of all values in the table”. In addition, e.g., “percentage” and/or “percentile” read(s) on “cumulative reward measure”.)
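The "sum of all values in the table divided by a number of iterations" limitation reduces to a one-line computation. A minimal sketch follows, with hypothetical names; the reward table is assumed to be a per-feature, per-iteration score table of the kind Kartoun's selection scores exemplify.

    import numpy as np

    def cumulative_reward(reward_table, n_iterations: int) -> float:
        # Sum every entry in the table, then normalize by the iteration count.
        return float(np.asarray(reward_table, dtype=float).sum()) / n_iterations

    # Example: a 3-feature x 2-iteration table of rewards.
    # cumulative_reward([[1, 0], [1, 1], [0, 1]], n_iterations=2) -> 2.0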
Regarding claim 6
Kartoun teaches claim 1.
Kartoun further teaches
wherein rewarding one or more features of the second subset of features comprises rewarding each feature that addresses a statistical significance threshold in the second subset of features by a constant value.
(Kartoun [par(s) 33] “Once feature selection module 140 determines p-values for each feature of a case-control subset, excluding the features used to match cases to controls, feature selection module 140 may rank the features according to p-value. Feature selection module 140 may determine whether each feature of a case-control subset has a p-value that satisfies a predetermined significance threshold. For example, feature selection module 140 may identify features having a p-value of less than 0.001. Feature selection module 140 may assign a selection score for each feature that corresponds to the number of case-control subsets in which the feature's p-value satisfies the significance threshold. For example, feature selection module 140 may assign a single point to a feature's selection score for every instance of the feature's p-value that satisfies a significance threshold in a given case-control subset.” [par(s) 48] “Features of a case-control subset whose statistical significance satisfy a significance threshold are identified at operation 320. Feature selection module 140 may compare a probability value (p-value) of a feature to a predetermined threshold to identify features that are particularly significant. For example, feature selection module 140 may identify a feature when the feature's p-value is less than 0.001, less than or equal to 0.05, and the like.”;)
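The constant-value reward of claim 6 parallels Kartoun's single-point selection-score increment quoted above. A minimal sketch, with hypothetical names:

    def reward_significant_features(scores: dict, significant: list,
                                    reward: float = 1.0) -> dict:
        # Each feature passing the significance threshold receives the same
        # constant reward, mirroring Kartoun's "single point" per instance.
        for feature in significant:
            scores[feature] = scores.get(feature, 0.0) + reward
        return scores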
Regarding claim 7
Kartoun teaches claim 1.
Kartoun further teaches
wherein rewarding one or more features of the second subset of features comprises rewarding each feature that addresses a statistical significance threshold in the second subset of features by a variable value, wherein the variable value is a function of at least one of a type of feature, a number of iterations, and a number of selected features.
(Kartoun [par(s) 33-35] “Feature selection module 140 may assign a selection score for each feature that corresponds to the number of case-control subsets in which the feature's p-value satisfies the significance threshold. For example, feature selection module 140 may assign a single point to a feature's selection score for every instance of the feature's p-value that satisfies a significance threshold in a given case-control subset. When feature selection module 140 has processed all of the case-control subsets to obtain selection scores for each feature in a dataset, the features may be ranked according to selection score, and a final subset of features may be selected for training a model. In some embodiments, feature selection module 140 compares the selection scores of each feature to a selection threshold value, and selects all features that satisfy the selection threshold value. In some embodiments, feature selection module 140 selects a predefined number of features having the highest selection scores. In some embodiments, feature selection module 140 selects features whose selection scores are at or above a particular percentile (e.g., a top 5% of features).” [par(s) 4] “the plurality of features are ranked by selection score to select the final subset of features having selection scores that satisfy a selection threshold value. By selecting features that are the most statistically significant across a large number of different case-control subsets, present invention embodiments ensure that a model is trained on features most likely to be highly relevant to the outcome. In some embodiments, the significance threshold value comprises a probability score of the feature. By using a probability score, significance of different types of features can all be compared, including categorical features, continuous features that are normally distributed, and continuous features not normally distributed. In some embodiments, the selection threshold value comprises a percentage of case-control subsets in which the statistical significance of the feature satisfies the significance threshold value” See also [sec(s) 51];)
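For the variable-value reward of claim 7, one hypothetical functional form (chosen for illustration only; Kartoun does not recite this formula) weights the reward by feature type and attenuates it by the iteration count and by the number of features already selected:

    def variable_reward(feature_type: str, n_iterations: int, n_selected: int,
                        base: float = 1.0) -> float:
        # Illustrative weighting: categorical vs. continuous features earn
        # different bases, and the reward shrinks as iterations accumulate
        # and as more features are selected.
        type_weight = {"categorical": 1.0, "continuous": 0.8}.get(feature_type, 0.9)
        return base * type_weight / (1.0 + 0.01 * n_iterations) / max(1, n_selected)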
Regarding claim 8
Kartoun teaches claim 1.
Kartoun further teaches
further comprising applying the predictive model to predict outcomes.
(Kartoun [par(s) 4] “Various other embodiments of the present invention will now be discussed. In some embodiments, the predictive model is applied to predict outcomes. Thus, unknown outcomes can be predicted more efficiently while ensuring the accuracy of forecasted outcomes. In some embodiments, a selection score is determined for each feature of the plurality of features, wherein the selection score corresponds to a number of case-control subsets in which the statistical significance of the feature satisfies a significance threshold value, and the plurality of features are ranked by selection score to select the final subset of features having selection scores that satisfy a selection threshold value. By selecting features that are the most statistically significant across a large number of different case-control subsets, present invention embodiments ensure that a model is trained on features most likely to be highly relevant to the outcome. In some embodiments, the significance threshold value comprises a probability score of the feature. By using a probability score, significance of different types of features can all be compared, including categorical features, continuous features that are normally distributed, and continuous features not normally distributed. In some embodiments, the selection threshold value comprises a percentage of case-control subsets in which the statistical significance of the feature satisfies the significance threshold value.”;)
Regarding claim 9
Kartoun teaches claim 1.
wherein selecting subset of features comprises: (See claim 1)
Kartoun further teaches
selecting a case-control ratio value to determine the total number of controls to identify per case; and
(Kartoun [par(s) 4] “By evaluating a predictive model's performance, present invention embodiments can ensure that the model's predictions are more accurate in comparison with commonly used feature selection methods. In some embodiments, each case-control subset is matched according to propensity score matching with a caliper value and a case-control ratio value. Thus, a subset of cases is matched to controls that are most similar in terms of the values of the features used to match the cases and controls.” [par(s) 15] “Thus, a subset of cases is matched to controls that are most similar in terms of the values of the features used to match the cases and controls.” [par(s) 28-29] “Propensity score matching module 135 applies one or more propensity score matching techniques to a dataset to identify, for each subset of features, a subset of cases and controls that are similar in terms of their values for the subset of features. In particular, propensity score matching module 135 identifies case-control subsets by applying propensity score matching and filtering results using a caliper value and a case-control ratio value. The propensity score matching is based on the outcome variable and on the subset of features selected by feature subset module 130. In particular, a propensity score can be calculated for feature of a record with respect to the outcome, and caliper values and case-control ratio values are used to filter the results to identify matchings. The propensity score for a particular record is defined as the conditional probability of the outcome given the record's feature values. A caliper value is a numerical value that is multiplied with a standard deviation for a selected case value to define a range of acceptable control values that can be matched with the case. … Propensity score matching module 135 may apply caliper values and case-control ratio values that are predefined or user-defined.”;)
selecting a caliper value to identify a subset of controls associated with features of similar values to the features of the cases.
(Kartoun [par(s) 4] “By evaluating a predictive model's performance, present invention embodiments can ensure that the model's predictions are more accurate in comparison with commonly used feature selection methods. In some embodiments, each case-control subset is matched according to propensity score matching with a caliper value and a case-control ratio value. Thus, a subset of cases is matched to controls that are most similar in terms of the values of the features used to match the cases and controls.” [par(s) 15] “Thus, a subset of cases is matched to controls that are most similar in terms of the values of the features used to match the cases and controls.” [par(s) 28-29] “Propensity score matching module 135 applies one or more propensity score matching techniques to a dataset to identify, for each subset of features, a subset of cases and controls that are similar in terms of their values for the subset of features. In particular, propensity score matching module 135 identifies case-control subsets by applying propensity score matching and filtering results using a caliper value and a case-control ratio value. The propensity score matching is based on the outcome variable and on the subset of features selected by feature subset module 130. In particular, a propensity score can be calculated for feature of a record with respect to the outcome, and caliper values and case-control ratio values are used to filter the results to identify matchings. The propensity score for a particular record is defined as the conditional probability of the outcome given the record's feature values. A caliper value is a numerical value that is multiplied with a standard deviation for a selected case value to define a range of acceptable control values that can be matched with the case. … Propensity score matching module 135 may apply caliper values and case-control ratio values that are predefined or user-defined.”;)
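The two selecting steps of claim 9 can be sketched together. The snippet below implements a greedy caliper match on propensity scores, where the caliper width is the caliper value multiplied with a standard deviation (per Kartoun's definition quoted above) and the ratio caps the number of controls identified per case. All names, and the greedy nearest-neighbor strategy itself, are illustrative assumptions rather than Kartoun's exact procedure.

    import numpy as np

    def caliper_ratio_match(case_ps: np.ndarray, control_ps: np.ndarray,
                            caliper: float = 0.2, ratio: int = 2) -> dict:
        # Caliper width = caliper value x standard deviation of the scores.
        width = caliper * np.std(np.concatenate([case_ps, control_ps]))
        available = set(range(len(control_ps)))
        matches = {}
        for i, ps in enumerate(case_ps):
            # Controls within the caliper, nearest first; keep at most `ratio`.
            within = sorted((j for j in available
                             if abs(control_ps[j] - ps) <= width),
                            key=lambda j: abs(control_ps[j] - ps))
            matches[i] = within[:ratio]
            available -= set(matches[i])
        return matches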
Regarding claim 10
Kartoun teaches claim 1.
Kartoun further teaches
determining a final list of selected features when convergence criteria is met, each feature in the list of final list of features received a positive reward.
(Kartoun [par(s) 33-34] “Feature selection module 140 may assign a selection score for each feature that corresponds to the number of case-control subsets in which the feature's p-value satisfies the significance threshold. For example, feature selection module 140 may assign a single point to a feature's selection score for every instance of the feature's p-value that satisfies a significance threshold in a given case-control subset. When feature selection module 140 has processed all of the case-control subsets to obtain selection scores for each feature in a dataset, the features may be ranked according to selection score, and a final subset of features may be selected for training a model. In some embodiments, feature selection module 140 compares the selection scores of each feature to a selection threshold value, and selects all features that satisfy the selection threshold value.” [par(s) 50] “Operation 340 determines whether there are any remaining case-control subsets whose features have not yet been evaluated in terms of statistical significance. If there are any additional unprocessed case-control subsets, then a next case-control subset is selected at operation 350 and its features are processed to identify significant features and to update the selection scores of identified features.”;)
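Claim 10's final list is then simply the set of features whose accumulated reward is positive once the convergence criteria is met; a one-line sketch with hypothetical names:

    def final_feature_list(scores: dict) -> list:
        # Keep every feature that has received a positive cumulative reward.
        return sorted(f for f, s in scores.items() if s > 0)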
Regarding claim 11
Kartoun teaches claim 1.
Kartoun further teaches
evaluating the predictive model against a reference model to validate accuracy of the predictive model using the final subset of selected features, wherein the predictive model and the reference models are trained using the dataset.
(Kartoun [par(s) 35] “Machine learning module 145 trains data models, using the values of selected features, to perform outcome forecasting. Machine learning module 145 may train a data model using the features selected by feature selection module 140 to forecast outcomes. Machine learning module 145 may train models using the selected feature values for all records of a dataset, or may train models using the selected feature values for a subpopulation of a dataset. Machine learning module 145 may apply conventional or other machine learning techniques to train models. In some embodiments, machine learning module 145 utilizes logistic regression to train a predictive model.” [par(s) 55] “The AUC value of the tested model is compared to a reference AUC value at operation 430. The reference AUC value may be computed similarly to the AUC value of the tested model using a different model. If the AUC values are close, then the tested model's accuracy is approximately the same as the reference model's accuracy. If the AUC value of the tested model is higher than the reference AUC value, then the tested model may forecast outcomes more accurately than the reference model. Thus, when a tested model uses fewer features than the reference model, and both models have comparable AUC values, then the tested model demonstrates superior efficiency and should be recommended over the reference model.”;)
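Kartoun's comparison of tested and reference AUC values (paragraph 55) can be sketched as follows, using scikit-learn's logistic regression (logistic regression is mentioned at Kartoun paragraph 35); the split, tolerance, and names are illustrative assumptions.

    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def compare_to_reference(X_selected, X_all, y, tol: float = 0.01):
        # Train the tested model on the selected features and the reference
        # model on all features, both from the same dataset, then compare
        # held-out AUC values.
        Xs_tr, Xs_te, Xa_tr, Xa_te, y_tr, y_te = train_test_split(
            X_selected, X_all, y, test_size=0.25, random_state=0)
        tested = LogisticRegression(max_iter=1000).fit(Xs_tr, y_tr)
        reference = LogisticRegression(max_iter=1000).fit(Xa_tr, y_tr)
        auc_tested = roc_auc_score(y_te, tested.predict_proba(Xs_te)[:, 1])
        auc_reference = roc_auc_score(y_te, reference.predict_proba(Xa_te)[:, 1])
        # Comparable AUC with fewer features favors the tested model.
        return auc_tested, auc_reference, auc_tested + tol >= auc_reference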
Regarding claim 12
The claim is a system claim corresponding to the method claim 1, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
Regarding claim 13
The claim is a system claim corresponding to a combination of the method claims 1 and 2, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the combination of method claims.
Regarding claim 14
The claim is a system claim corresponding to the method claim 3, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
Regarding claim 17
The claim is a system claim corresponding to the method claim 6, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
Regarding claim 18
The claim is a system claim corresponding to the method claim 7, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
Regarding claim 19
The claim is a computer program product claim corresponding to the method claim 1, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
Regarding claim 20
The claim is a computer program product claim corresponding to the method claim 11, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 4, 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kartoun et al. (US 20210342735 A1) in view of Sharma et al. (US 20150112182 A1).
Regarding claim 4
Kartoun teaches claim 3.
Kartoun further teaches
wherein the convergence criteria is met [based on] a deviation of a calculated cumulative reward measure of the current iteration from an immediately prior calculated cumulative reward measure being less than a predetermined threshold.
(Kartoun [par(s) 34] “When feature selection module 140 has processed all of the case-control subsets to obtain selection scores for each feature in a dataset, the features may be ranked according to selection score, and a final subset of features may be selected for training a model. In some embodiments, feature selection module 140 compares the selection scores of each feature to a selection threshold value, and selects all features that satisfy the selection threshold value.” [par(s) 50] “Operation 340 determines whether there are any remaining case-control subsets whose features have not yet been evaluated in terms of statistical significance. If there are any additional unprocessed case-control subsets, then a next case-control subset is selected at operation 350 and its features are processed to identify significant features and to update the selection scores of identified features.” [par(s) 55] “The AUC value of the tested model is compared to a reference AUC value at operation 430. The reference AUC value may be computed similarly to the AUC value of the tested model using a different model. If the AUC values are close, then the tested model's accuracy is approximately the same as the reference model's accuracy. If the AUC value of the tested model is higher than the reference AUC value, then the tested model may forecast outcomes more accurately than the reference model. Thus, when a tested model uses fewer features than the reference model, and both models have comparable AUC values, then the tested model demonstrates superior efficiency and should be recommended over the reference model.”;)
However, Kartoun does not appear to explicitly teach:
wherein the convergence criteria is met [based on] a deviation of a calculated cumulative reward measure of the current iteration from an immediately prior calculated cumulative reward measure being less than a predetermined threshold.
Sharma teaches
wherein the convergence criteria is met based on a deviation of a calculated cumulative reward measure of the current iteration from an immediately prior calculated cumulative reward measure being less than a predetermined threshold.
(Sharma [par(s) 67] “At step 410, it is determined whether the IBRR method has converged. In order for the method to converge, it is determined whether a stop condition is met. For example, convergence can be achieved if the cost function is less than the minimum cost function Jmin. It is also possible that convergence is achieved when the maximum number of iterations Tmax occurred, when the approximation error rt(x) is less than a certain threshold, when the difference between the cost function at the previous step and the current step is less than a certain threshold, or when the difference between the approximation error at the previous step and the current step is less than a certain threshold. If the EIBRR algorithm has not converged at 410, the algorithm returns to step 404 and repeats steps 404, 406, and 408 until convergence is achieved. If the EIBRR algorithm has converged at step 510, the trained regression function is stored or output. The trained regression function resulting from the method can be stored in a memory or storage of a computer system or output for use in determining hemodynamic indices, such as FFR, in new patient datasets.”;)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Kartoun with the convergence criteria of Sharma.
One of ordinary skill in the art would have been motivated to combine the references in order to improve the prediction precision of the trained machine learning model by eliminating samples far from the solution (non-overlapping) and generating samples closer to the true position.
(Sharma [par(s) 88] “The set of current hypotheses are then propagated through the trained deep neural network, and in a possible embodiment, the new set of hypotheses can be iteratively refined using the same deep neural network or through a newly trained deep neural network. This iterative process can eliminate samples far from the solution (non-overlapping) and generate samples closer to the true position to improve precision. The new set of hypotheses is augmented with new parameters from the subsequent marginal space and the process is repeated for the subsequent marginal space. This results in a respective trained deep neural network (regressor or discriminative deep neural network) for each of the marginal spaces.”)
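Sharma's stop condition on the difference between successive iterations, as mapped to the claimed convergence criteria, reduces to a simple comparison; a minimal sketch with hypothetical names:

    def converged(prev_cumulative_reward: float, curr_cumulative_reward: float,
                  threshold: float = 1e-4) -> bool:
        # Converged when the current measure deviates from the immediately
        # prior measure by less than the predetermined threshold.
        return abs(curr_cumulative_reward - prev_cumulative_reward) < threshold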
Regarding claim 15
The claim is a system claim corresponding to the method claim 4, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
Claim(s) 5, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kartoun et al. (US 20210342735 A1) in view of Ouimet et al. (US 20050273376 A1).
Regarding claim 5
Kartoun teaches claim 3.
Kartoun further teaches
wherein the convergence criteria is met [based on] a deviation of a calculated cumulative reward measure of the current iteration from a [moving average] cumulative reward measure of an immediately previous set of iterations being less than a predetermined threshold.
(Kartoun [par(s) 34] “When feature selection module 140 has processed all of the case-control subsets to obtain selection scores for each feature in a dataset, the features may be ranked according to selection score, and a final subset of features may be selected for training a model. In some embodiments, feature selection module 140 compares the selection scores of each feature to a selection threshold value, and selects all features that satisfy the selection threshold value.” [par(s) 50] “Operation 340 determines whether there are any remaining case-control subsets whose features have not yet been evaluated in terms of statistical significance. If there are any additional unprocessed case-control subsets, then a next case-control subset is selected at operation 350 and its features are processed to identify significant features and to update the selection scores of identified features.” [par(s) 55] “The AUC value of the tested model is compared to a reference AUC value at operation 430. The reference AUC value may be computed similarly to the AUC value of the tested model using a different model. If the AUC values are close, then the tested model's accuracy is approximately the same as the reference model's accuracy. If the AUC value of the tested model is higher than the reference AUC value, then the tested model may forecast outcomes more accurately than the reference model. Thus, when a tested model uses fewer features than the reference model, and both models have comparable AUC values, then the tested model demonstrates superior efficiency and should be recommended over the reference model.”;)
However, Kartoun does not appear to explicitly teach:
wherein the convergence criteria is met [based on] a deviation of a calculated cumulative reward measure of the current iteration from a [moving average] cumulative reward measure of an immediately previous set of iterations being less than a predetermined threshold.
Ouimet teaches
wherein the convergence criteria is met based on a deviation of a calculated cumulative reward measure of the current iteration from a moving average cumulative reward measure of an immediately previous set of iterations being less than a predetermined threshold.
(Ouimet [fig(s) 5] [par(s) 80] “The Maximum Likelihood Method repeats until the difference between the j-th value of P(Bobs) and the j+1-th value of P(Bobs) is less than an error threshold ϵ, or until a stopping criteria has been reached, wherein iterative solutions of P(Bobs) are no longer changing by an appreciable or predetermined amount. The error threshold ϵ or stopping criteria is selected according to desired tolerance and accuracy of the solution” [par(s) 98] “The Maximum Likelihood Method repeats until the difference between the j-th value of P(bOBS|BOBS) and the j+1-th value of P(bOBS|BOBS) is less than an error threshold ϵ, or until a stopping criteria has been reached, wherein iterative solutions of P(bOBS|BOBS) are no longer changing by an appreciable or predetermined amount. The error threshold ϵ or stopping criteria is selected according to desired tolerance and accuracy of the solution” [par(s) 112] “A set of graphs for unit sales of one product Pi <USpi> as a function of time is shown in FIG. 5. Plot 72 represents actual unit sales of product Pi sold in store Si over the time period. Plot 74 represents forecast of product Pi under promotion using promotional model 14. Plot 76 represents baseline (no promotion) of product Pi. Plot 78 represents a moving average of units sales of product Pi from plot 72. FIG. 5 illustrates the time series of the expected value of unit sales for product Pi <USpi> has been defined in equation (1) in terms of a product combination of expected values of factors (traffic, share, count) which influence the customer buying decision. Report 16 may include FIG. 5 as part of its forecast information.”;)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Kartoun with the convergence criteria of Ouimet.
One of ordinary skill in the art would have been motivated to combine the references in order to provide tools for a successful, scientific approach to promotional programs with a high degree of confidence and accuracy.
(Ouimet [par(s) 34-35] “In particular, economic modeling is essential to businesses which face thin profit margins, Such as general customer merchandise and other retail outlets. Clearly, many businesses are keenly interested in economic modeling and forecasting, particularly when the model provides a high degree of accuracy or confidence. Such information is a powerful tool and highly valuable to the business. The present discussion will consider economic modeling as applied to retail merchandising. In particular, understanding the cause and effect behind promotional offerings is important to increasing the profitability of the retail Stores. The present invention addresses effective modeling techniques for various promotions, in terms of forecasting and backcasting, and provides tools for a successful, scientific approach to promotional programs with a high degree of confidence.”)
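The moving-average variant of claim 5 compares the current measure to the average over an immediately previous window of iterations; a minimal sketch follows, with the window size and names being illustrative assumptions.

    from collections import deque

    def converged_moving_average(history: deque, current: float,
                                 window: int = 5, threshold: float = 1e-3) -> bool:
        # Compare the current cumulative reward to the moving average of the
        # immediately previous `window` iterations.
        if len(history) < window:
            return False
        moving_avg = sum(list(history)[-window:]) / window
        return abs(current - moving_avg) < threshold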
Regarding claim 16
The claim is a system claim corresponding to the method claim 5, and is directed to largely the same subject matter. Thus, it is rejected for the same reasons as given in the rejections of the method claim.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kartoun et al. (US 20210343421 A1) teaches sub-populations.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEHWAN KIM whose telephone number is (571)270-7409. The examiner can normally be reached Mon - Fri 9:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael J Huntley can be reached on (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SEHWAN KIM/Examiner, Art Unit 2129
1/31/2026