DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are presented for examination.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on April 19, 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings are objected to because reference character 110 appears in the drawings but not the specification. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
The use of the term BLUETOOTH (paragraphs 40 and 46), which is a trade name or a mark used in commerce, has been noted in this application. The term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce such as ™, SM, or ® following the term.
Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 8 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claims 8 and 17 recite the limitation "the data collection output". There is insufficient antecedent basis for this limitation in the claims. For purposes of examination, Examiner will presume that Applicant meant “datapoint collection output” as recited in claims 1 and 11, respectively.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50 (“2019 PEG”).
Claim 1
Step 1: The claim recites a method; therefore, it is directed to the statutory category of processes.
Step 2A Prong 1: The claim recites, inter alia:
[G]enerating … a datapoint priority matrix that corresponds to a plurality of entity-feature value pairs of a training dataset for a machine learning model: This limitation could encompass mentally generating the matrix.
[G]enerating … a plurality of impact predictions for the plurality of entity-feature value pairs, wherein an impact prediction of the plurality of impact predictions is indicative of a likelihood of a modification to an entity-feature value pair of the plurality of entity-feature value pairs through one or more data collection operations: This limitation could encompass mentally generating the impact predictions.
[G]enerating … a plurality of feature sensitivity predictions for the plurality of entity-feature value pairs, wherein a feature sensitivity prediction of the plurality of feature sensitivity predictions is indicative of a feature-level performance impact of the entity-feature value pair on the machine learning model: This limitation could encompass mentally generating the sensitivity predictions.
[G]enerating … a refined datapoint priority matrix by updating the datapoint priority matrix based on the plurality of impact predictions and the plurality of feature sensitivity predictions: This limitation could encompass mentally generating the refined matrix.
[P]roviding … a datapoint collection output for the training dataset based on the refined datapoint priority matrix and a data augmentation threshold, wherein the datapoint collection output is indicative of a data collection operation of the one or more data collection operations: This limitation could encompass mentally determining what data collection operations should be performed based on the matrix and a threshold.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the method is performed “by one or more processors”. However, this amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The analysis at this step mirrors that of step 2A, prong 2. As an ordered whole, the claim is directed to a mentally performable process of generating data about a machine learning model and using the data to determine how to collect data. Nothing in the claim provides significantly more than this. As such, the claim is not patent eligible.
Claim 2
Step 1: A process, as above.
Step 2A Prong 1: The claim recites that “the entity-feature value pair corresponds to an entity and a predictive feature of the entity, and … the impact prediction is based on at least one of (i) one or more feature-level attributes of the predictive feature or (ii) one or more entity-level attributes of the entity.” Generating the impact predictions for the entity-feature value pairs remains mentally performable under these further limitations.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.
Claim 3
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia, that “the one or more feature-level attributes are indicative of a predictive feature miss rate for the predictive feature and the one or more entity-level attributes are indicative of a predictive entity miss rate for the entity.” Generating the impact predictions for the entity-feature value pairs using the attributes remains mentally performable under these further limitations.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 2 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 2 analysis.
Claim 4
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia, “updating at least one of the predictive feature miss rate or the predictive entity miss rate based on the collection feedback data.” This limitation could encompass mentally updating the miss rates based on the feedback data.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “receiving collection feedback data based on the performance of the data collection operation”. However, this limitation recites the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “receiving collection feedback data based on the performance of the data collection operation”. However, this limitation recites the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).
Claim 5
Step 1: A process, as above.
Step 2A Prong 1: The claim recites:
[G]enerating a plurality of entity sensitivity predictions, wherein an entity sensitivity prediction of the plurality of entity sensitivity predictions is indicative of an entity-level performance impact of the entity-feature value pair on the machine learning model: This limitation could encompass mentally generating the predictions.
[G]enerating the refined datapoint priority matrix based on the plurality of entity sensitivity predictions: This limitation could encompass mentally generating the matrix.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 1 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 1 analysis.
Claim 6
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia, “iteratively generating the plurality of impact predictions, the plurality of feature sensitivity predictions, and the plurality of entity sensitivity predictions based on the observation matrix.” This limitation could encompass mentally generating the predictions.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “receiving an observation matrix for the training dataset, wherein the observation matrix is indicative of a subset of unobserved entity-feature value and a subset of observed entity-feature values from the training dataset”. However, this limitation recites the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “receiving an observation matrix for the training dataset, wherein the observation matrix is indicative of a subset of unobserved entity-feature value and a subset of observed entity-feature values from the training dataset”. However, this limitation recites the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).
Claim 7
Step 1: A process, as above.
Step 2A Prong 1: The claim recites that “the refined datapoint priority matrix comprises a datapoint value prediction that is based on an aggregation of the impact prediction, the feature sensitivity prediction, and the entity sensitivity prediction.” This limitation could encompass mentally generating the matrix based on an aggregation of the three prediction values.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. See claim 5 analysis.
Step 2B: The claim does not contain significantly more than the judicial exception. See claim 5 analysis.
Claim 8
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia, that “the refined datapoint priority matrix comprises a plurality of datapoint value predictions corresponding to the plurality of entity-feature value pairs of the training dataset.” Generating the refined matrix remains mentally performable under these further limitations. The claim further recites “generating … the data collection output based on the refined datapoint priority matrix, the cost matrix, and the data augmentation threshold.” This limitation could encompass mentally generating the output based on the claimed matrices and threshold.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the generating is performed by a “combinatoric optimization model”. This limitation amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f). The claim further recites “receiving a cost matrix that comprises a plurality of cost values corresponding to the plurality of entity-feature value pairs; [and] receiving the data augmentation threshold indicative of a limit on the one or more data collection operations”. These limitations are directed to the insignificant extra-solution activity of mere data gathering. MPEP § 2106.05(g).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “receiving a cost matrix that comprises a plurality of cost values corresponding to the plurality of entity-feature value pairs; [and] receiving the data augmentation threshold indicative of a limit on the one or more data collection operations”. These limitations are directed to the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network). Otherwise, the analysis is identical to that of step 2A, prong 2.
Claim 9
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia:
[I]dentifying a subset of unobserved entity-feature values from the plurality of entity-feature value pairs: This limitation could encompass mentally identifying the subset of values.
[G]enerating … the plurality of feature sensitivity predictions based on the subset of unobserved entity-feature values, wherein the feature-level performance impact of the entity-feature value pair is indicative of marginal performance contribution of a predictive feature relative to the subset of unobserved entity-feature values: This limitation could encompass mentally generating the predictions.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites that the generation of the sensitivity predictions is performed “using an interpretable model”. However, this limitation amounts to a mere instruction to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites that the generation of the sensitivity predictions is performed “using an interpretable model”. However, this limitation amounts to a mere instruction to apply the judicial exception using a generic computer programmed with a generic class of computer algorithm. MPEP § 2106.05(f).
Claim 10
Step 1: A process, as above.
Step 2A Prong 1: The claim recites, inter alia, “updating the subset of unobserved entity-feature values based on the collection feedback data.” This limitation could encompass mentally updating the subset of values.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The claim further recites “receiving collection feedback data based on the performance of the data collection operation”. However, this limitation recites the insignificant extra-solution activity of mere data gathering and output. MPEP § 2106.05(g).
Step 2B: The claim does not contain significantly more than the judicial exception. The claim further recites “receiving collection feedback data based on the performance of the data collection operation”. However, this limitation recites the well-understood, routine, and conventional activity of receiving or transmitting data over a network. MPEP § 2106.05(d)(II); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network).
Claims 11-17
Step 1: The claims recite a system comprising a memory and processors; therefore, they are directed to the statutory category of machines.
Step 2A Prong 1: The claims recite the same judicial exceptions as in claims 1-3 and 5-8, respectively.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The analysis of the claims at this step mirrors that of claims 1-3 and 5-8, respectively, except insofar as these claims recite a “computing system comprising memory and one or more processors communicatively coupled to the memory, the one or more processors configured to [perform the method]”. However, this amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Step 2B: The claims do not contain significantly more than the judicial exception. The analysis of the claims at this step mirrors that of claims 1-3 and 5-8, respectively, except insofar as these claims recite a “computing system comprising memory and one or more processors communicatively coupled to the memory, the one or more processors configured to [perform the method]”. However, this amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Claims 18-20
Step 1: The claims recite non-transitory computer-readable storage media; therefore, they are directed to the statutory category of articles of manufacture.
Step 2A Prong 1: The claims recite the same judicial exceptions as in claims 1 and 9-10, respectively.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. The analysis of the claims at this step mirrors that of claims 1 and 9-10, respectively, except insofar as these claims recite “[o]ne or more non-transitory computer-readable storage media including instructions that, when executed by one or more processors, cause the one or more processors to [perform the method]”. However, this amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Step 2B: The claims do not contain significantly more than the judicial exception. The analysis of the claims at this step mirrors that of claims 1 and 9-10, respectively, except insofar as these claims recite “[o]ne or more non-transitory computer-readable storage media including instructions that, when executed by one or more processors, cause the one or more processors to [perform the method]”. However, this amounts to a mere instruction to apply the judicial exception using a generic computer. MPEP § 2106.05(f).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-2, 5, 9-12, 14, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Taneja et al. (US 20230042330) (“Taneja”) in view of Soda et al. (US 20200118020) (“Soda”).
Regarding claim 1, Taneja discloses “[a] computer-implemented method (Taneja Fig. 15 discloses that the method is executed using a processor 1502 and a memory 1504), the computer-implemented method comprising:
generating, by one or more processors, a datapoint priority matrix that corresponds to a plurality of entity-feature value pairs of a training dataset for a machine learning model (modeling tool includes a model trained on a dataset consisting of an m x (l + k)-dimensional input [entity] feature matrix X and an outcome vector Y [feature value] of dimension m [so the combination of this matrix and vector is the datapoint priority matrix] – Taneja, paragraph 45);
generating, by the one or more processors, a plurality of impact predictions for the plurality of entity-feature value pairs (Mi is a k-dimensional feature vector including k features not measured for subject I; a set of missing features Min may be selected from Mi; for any given Min, diagnostic engine assigns three values; one value is a vector of size n indicative of the maximum allowable noise in the measurement of each missing feature in the n-set [impact predictions] – Taneja, paragraph 45) …;
generating, by the one or more processors, a plurality of feature sensitivity predictions for the plurality of entity-feature value pairs, wherein a feature sensitivity prediction of the plurality of feature sensitivity predictions is indicative of a feature-level performance impact of the entity-feature value pair on the machine learning model (Mi is a k-dimensional feature vector including k features not measured for subject I; a set of missing features Min may be selected from Mi; for any given Min, diagnostic engine assigns three values; one value is a scalar value s indicative of the importance of the n-set with respect to Y, the patient outcome – Taneja, paragraph 45; see also paragraph 76 (disclosing the computation of a variable importance [sensitivity prediction indicative of feature-level performance impact] of each feature in the input feature matrix X));
generating, by the one or more processors, a refined datapoint priority matrix by updating the datapoint priority matrix based on the plurality of impact predictions and the plurality of feature sensitivity predictions (method for ranking one or more features in a dataset may involve a diagnostic tool outputting a ranking for the relevance of features in terms of predicting the patient outcome with a high confidence level; the diagnostic engine may include a constraint function in terms of importance, cost, and time to collect; features in the set may be presented in descending order according to the value of the constraint function [i.e., the input features are refined by reordering them in terms of importance/feature sensitivity predictions] – Taneja, paragraphs 56-58; see also Fig. 8 and paragraph 88 (disclosing that the noise tolerance for the unmeasured features [impact predictions] is identified, a measurement device is selected based on the noise tolerance, and the device is used to collect new observations, i.e., the refinement of the input matrix is also based on the noise tolerance)); and
providing, by the one or more processors, a datapoint collection output for the training dataset based on the refined datapoint priority matrix and a data augmentation threshold, wherein the datapoint collection output is indicative of a data collection operation of the one or more data collection operations (Taneja Fig. 8 and paragraphs 87-88 disclose that if the risk of an adverse event is greater than a threshold, the system suggests new features to measure [i.e., how to augment the data, so that the risk threshold also functions as a data augmentation threshold] by ranking variables by importance; the measurement device/noise tolerance are then identified [once the ranking and noise tolerance calculations are performed, the result is a refined datapoint priority matrix] and new observations are collected [datapoint collection output indicative of a data collection operation]; see also paragraph 45 (disclosing that the dataset is used to train the model)).”
Taneja appears not to disclose explicitly the further limitations of the claim. However, Soda discloses that “an impact prediction … is indicative of a likelihood of a modification to [a datum] of the plurality of [data] through one or more data collection operations (Soda paragraphs 113, 78, and 7, among others, describe calculating a probability of occurrence of an event for collection of data and a probability that the data collection will be completed by a specified date and time [i.e., a likelihood of a modification to the data through data collection]) ….”
Soda and the instant application both relate to data collection and are therefore analogous art. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Taneja to predict the likelihood that a datum will be collected, as disclosed by Soda, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow for greater certainty over what information is likely to be collected and when, thereby allowing for more informed decisions to be made with the data. See Soda, paragraphs 4-6.
Claim 11 is a system claim corresponding to method claim 1 and is rejected for the same reasons as given in the rejection of that claim. Similarly, claim 18 is a non-transitory computer-readable storage medium claim corresponding to method claim 1 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 2, Taneja, as modified by Soda, discloses that “the entity-feature value pair corresponds to an entity and a predictive feature of the entity, and … the impact prediction is based on at least one of (i) one or more feature-level attributes of the predictive feature or (ii) one or more entity-level attributes of the entity (modeling tool includes a model trained on a dataset consisting of an m x (l + k)-dimensional input [entity] feature matrix X and an outcome vector Y [predictive feature] of dimension m; Mi is a k-dimensional feature vector including k features not measured for subject I; a set of missing features Min may be selected from Mi; for any given Min, diagnostic engine assigns three values; one value is a vector of size n indicative of the maximum allowable noise in the measurement of each missing feature in the n-set [i.e., the impact predictions are attributes of the missing input data/entity] – Taneja, paragraph 45).”
Claim 12 is a system claim corresponding to method claim 2 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 5, Taneja, as modified by Soda, discloses “generating a plurality of entity sensitivity predictions, wherein an entity sensitivity prediction of the plurality of entity sensitivity predictions is indicative of an entity-level performance impact of the entity-feature value pair on the machine learning model (method for ranking one or more features in a dataset may involve a diagnostic tool outputting a ranking for the relevance of features in terms of predicting the patient outcome with a high confidence level; the diagnostic engine may include a constraint function in terms of importance, cost, and time to collect; features in the set may be presented in descending order according to the value of the constraint function [i.e., the input features are refined by reordering them in terms of importance/feature sensitivity predictions; note that, since these are features of the input, they are entity-level] – Taneja, paragraphs 56-58); and
generating the refined datapoint priority matrix based on the plurality of entity sensitivity predictions (method for ranking one or more features in a dataset may involve a diagnostic tool outputting a ranking for the relevance of features in terms of predicting the patient outcome with a high confidence level; the diagnostic engine may include a constraint function in terms of importance, cost, and time to collect; features in the set may be presented in descending order according to the value of the constraint function [i.e., the input features are refined by reordering them in terms of importance/feature sensitivity predictions] – Taneja, paragraphs 56-58; see also Fig. 8 and paragraph 88 (disclosing that the noise tolerance for the unmeasured features [impact predictions] is identified, a measurement device is selected based on the noise tolerance, and the device is used to collect new observations, i.e., the refinement of the input matrix is based on the feature ranking/entity sensitivity predictions)).”
Claim 14 is a system claim corresponding to method claim 5 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 9, Taneja, as modified by Soda, discloses that “generating the plurality of feature sensitivity predictions comprises:
identifying a subset of unobserved entity-feature values from the plurality of entity-feature value pairs (Mi is a k-dimensional feature vector including k features not measured for subject I; a set of missing features Min [unobserved entity-feature values] may be selected from Mi; for any given Min, diagnostic engine assigns three values; one value is a scalar value s indicative of the importance of the n-set with respect to Y, the patient outcome – Taneja, paragraph 45); and
generating, using an interpretable model, the plurality of feature sensitivity predictions based on the subset of unobserved entity-feature values (machine learning algorithms are used to rank feature relevance [generate feature sensitivity predictions] according to quantifiable information available for a patient and a model trained on a dataset consisting of an input feature matrix and an outcome vector – Taneja, paragraph 35 [note that the fact that the model ranks feature importances implies interpretability]; ranking is for unmeasured features [i.e., the unobserved entity-feature values] for an instance given at least one other feature is measured – id. at paragraph 4), wherein the feature-level performance impact of the entity-feature value pair is indicative of marginal performance contribution of a predictive feature relative to the subset of unobserved entity-feature values (machine learning algorithms rank feature relevance [marginal performance contribution] according to quantifiable information available for a given patient and a model trained on a dataset consisting of an input feature matrix and an outcome vector – Taneja, paragraph 35; Mi is a k-dimensional feature vector including k features not measured for subject I; a set of missing features Min [unobserved entity-feature values] may be selected from Mi; for any given Min, diagnostic engine assigns three values; one value is a scalar value s indicative of the importance of the n-set with respect to Y, the patient outcome – id. at paragraph 45 [i.e., the system ranks the importance of the feature subset relative to other features]).”
Claim 19 is a non-transitory computer-readable storage medium claim corresponding to method claim 9 and is rejected for the same reasons as given in the rejection of that claim.
Regarding claim 10, the rejection of claim 9 is incorporated. Taneja further discloses a “subset of unobserved entity-feature values,” as shown in the rejection of claim 9.
Taneja appears not to disclose explicitly the further limitations of the claim. However, Soda discloses “receiving collection feedback data based on the performance of the data collection operation (data collection apparatus provides user terminal with a UI picture for guidance on changing of the condition; user terminal may display “scheduled date and time of completion”; data collection apparatus may search past results for similar conditions under which the collection will be completed by the desired date and time and display packages of recommendable condition formulae [collection feedback data] – Soda, paragraphs 54-56 and Fig. 1H); and
updating the … values based on the collection feedback data (data collection apparatus may search past results for similar conditions under which the collection will be completed by the desired date and time and display packages of recommendable condition formulae [collection feedback data]; if the data user selects a desired package from the list, the collection condition [values] can be changed [updated] easily so as to reflect the contents of the selected package).” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Taneja to update values based on data collection feedback, as disclosed by Soda, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow for greater certainty over what information is likely to be collected and when, thereby allowing for more informed decisions to be made with the data. See Soda, paragraphs 4-6.
Claim 20 is a non-transitory computer-readable storage medium claim corresponding to method claim 10 and is rejected for the same reasons as given in the rejection of that claim.
Claims 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Taneja in view of Soda and further in view of Mappus et al. (US 20230057537) (“Mappus”).
Regarding claim 8, the rejection of claim 1 is incorporated. Taneja further discloses that “the refined datapoint priority matrix comprises a plurality of datapoint value predictions corresponding to the plurality of entity-feature value pairs of the training dataset (modeling tool includes a model trained on a dataset consisting of an m x (l + k)-dimensional input [entity] feature matrix X and an outcome vector Y [feature value] of dimension m – Taneja, paragraph 45 [note that the outcome vector Y is a plurality of datapoint value predictions and corresponds to the feature values insofar as they are identical]; see also mapping to claim 1 supra and Fig. 8 for details of the refinement of the matrix), and the data collection output is generated by:
receiving a cost matrix that comprises a plurality of cost values corresponding to the plurality of entity-feature value pairs (diagnostic engine provides a ranking variable assigned to each feature or set of features; the diagnostic engine may suggest the missing features to be measured based on their rank and a constraint function that may include the cost of the feature [i.e., each missing feature in the matrix is associated with a cost] – Taneja, paragraph 55);
receiving the data augmentation threshold indicative of a limit on the one or more data collection operations (Taneja Fig. 8 and paragraphs 87-88 disclose that if the risk of an adverse event is greater than a threshold, the system suggests new features to measure [i.e., how to augment the data, such that the risk threshold also functions as a data augmentation threshold; and because the data are not collected when the risk is below the threshold, the risk threshold likewise functions as a limit on data collection] by ranking variables by importance); and
generating … the data collection output based on the refined datapoint priority matrix, the cost matrix, and the data augmentation threshold (Taneja Fig. 8 shows that the new observations [data collection output] are collected based on the identification of the measurement device and the ranking of the features [i.e., on the refined datapoint priority matrix, see mapping to claim 1 supra]; paragraphs 87-88 disclose that new features are measured if the risk of an adverse event is greater than a threshold [i.e., the data collection is also based on the newly measured features, and therefore on the threshold]; paragraph 55 discloses that cost is taken into consideration in determining which features to measure [i.e., the data collection is also based on the cost matrix]).”
Neither Taneja nor Soda appears to disclose explicitly the further limitations of the claim. However, Mappus discloses “generating, using a combinatoric optimization model, the … output (assigning jobs to technicians can be solved as a combinatoric optimization problem [solution = output; problem = model] – Mappus, paragraph 30) ….”
Mappus and the instant application both relate to combinatoric optimization and are analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Taneja and Soda to provide the output using combinatoric optimization, as disclosed by Mappus, and an ordinary artisan could reasonably expect to have done so successfully. Doing so would allow the system to perform a more efficient search for the optimal solution when an exhaustive search is not possible. See Mappus, paragraph 30.
Claim 17 is a system claim corresponding to method claim 8 and is rejected for the same reasons as given in the rejection of that claim.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN C VAUGHN whose telephone number is (571)272-4849. The examiner can normally be reached M-R 7:00a-5:00p ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar, can be reached at 571-272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN C VAUGHN/ Primary Examiner, Art Unit 2125