DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/16/2025 has been entered.
Applicant's amendments and remarks, filed 10/16/2025, are acknowledged. Rejections and/or objections not reiterated from previous Office actions are hereby withdrawn. The following rejections and/or objections are either reiterated or newly applied. They constitute the complete set presently being applied to the instant application.
Status of Claims
Claims 1, 3-5, 7-19, 33, 34, 35 are pending and under examination.
Claims 2, 6, and 20-32 are cancelled. Claims 33-35 are newly added.
Priority
This application is a U.S. National Stage Application filed under 35 U.S.C. §371 of International Application No. PCT/US2019/062561, filed November 21, 2019 (Published as WO 2020/112478), which claims the benefit of and priority to U.S. Provisional Patent Application No. 62/773,028 filed November 29, 2018 and U.S. Provisional Patent Application No. 62/783,733 filed December 21, 2018.
Withdrawn Rejections
The rejection of claims 1 and 3-19 under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement is withdrawn in view of applicant’s amendments.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-5, 7-19, 33, 34, 35 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
The Supreme Court has established a two-step framework for this analysis, wherein a claim does not satisfy § 101 if (1) it is “directed to” a patent-ineligible concept, i.e., a law of nature, natural phenomenon, or abstract idea, and (2), if so, the particular elements of the claim, considered “both individually and ‘as an ordered combination,’” do not add enough to “transform the nature of the claim into a patent-eligible application.” Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353 (Fed. Cir. 2016) (quoting Alice, 134 S. Ct. at 2355).
Guidance: Step 1.
Under the broadest reasonable interpretation, the claimed invention (claims 1 and 34 being representative) is directed to a method and system for performing a process. Accordingly, the invention falls within one of the four statutory categories.
A. Guidance Step 2A, Prong 1
The Revised Guidance instructs us first to determine whether any judicial exception to patent eligibility is recited in the claim. The Revised Guidance identifies three judicially-excepted groupings identified by the courts as abstract ideas: (1) mathematical concepts, (2) certain methods of organizing human activity, such as fundamental economic practices, and (3) mental processes. In this case, the claimed steps that are part of the abstract idea are as follows:
b) downsampling the class-imbalanced data set to generate a downsampled data set, wherein the downsampling results in the majority data class having an equivalent or substantially equivalent number of observations as the minority data class; and c) generating a survival model by performing cross-validation on the downsampled data set with a survival analysis; d) applying an elastic net penalty to the generated survival model;
e) determining an AUC, sensitivity, specificity, and/or C-index of the survival model having the penalty; wherein the observation comprises an event or no event at a specific time value…;
Mental Process
Under the broadest reasonable interpretation, the above-identified steps amount to manipulating data (by downsampling), generating a generic survival model, and analyzing the model (by performing statistical calculations). In addition, the specification provides sufficient evidence that the claims are directed to an abstract idea, since the specific descriptions provided for accomplishing these tasks only involve data reception and analysis [0010, 0011, 0039]. Accordingly, but for the recitation of a computer processor, the above steps clearly fall within the mental process grouping of abstract ideas because they cover concepts performed in the human mind, including observation, evaluation, judgment, and opinion. See MPEP 2106.04(a)(2), subsection III [Step 2A, Prong 1: YES].
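For illustration only (this sketch forms no part of the claims or the record, and all data are hypothetical placeholders), the data-manipulation step characterized above, i.e. downsampling a majority class until it contains the same number of observations as the minority class, amounts to only a few lines of code:

```python
import random

random.seed(0)

# Hypothetical class-imbalanced data: label 1 = event (minority class),
# label 0 = no event (majority class); the second field is a time value.
majority = [(0, t) for t in range(90)]   # 90 no-event observations
minority = [(1, t) for t in range(10)]   # 10 event observations

# Downsample the majority class, without replacement, to the minority size,
# so that both classes contribute an equal number of observations.
downsampled_majority = random.sample(majority, len(minority))
balanced = downsampled_majority + minority

print(len(balanced))  # 20
```

This is offered only to show the elementary character of the operation; it does not purport to reproduce any method disclosed in the specification or the cited art.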
Mathematical Concept
In addition, the above step for generating a survival model requires “performing cross-validation” (i.e. a calculation mathematically relating data), and the determining step requires calculating one or more metrics (AUC, sensitivity, specificity). A review of the specification [0008, 0011] reveals various types of cross-validation, including k-fold, generalized Monte Carlo, leave-p-out cross-validation, or bootstrapping methods, and confirms that AUC is clearly associated with a numerical metric. Moreover, Kamarudin et al. (cited below in the rejection under 35 USC 103) teaches various methods for modeling disease risk using survival functions [see entire], wherein the model is well-defined in terms of mathematical parameters. LeDell et al. (Electron J Stat. 2015; 9(1): 1583–1607) also teaches that cross-validation and determining metrics such as AUC and sensitivity must be mathematically calculated (e.g. Sensitivity = TP / (TP + FN)) [Section 2]. It is noted that the grouping of “mathematical concepts” is not limited to formulas or equations, as words used in a claim operating on data to solve a problem can serve the same purpose as a formula. Therefore, absent any evidence to the contrary and when read in light of the specification, the above steps clearly encompass a mathematical concept. See MPEP 2106.04 and 2106.05(II) [Step 2A, Prong 1: YES].
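For illustration only (the confusion-matrix counts below are hypothetical and form no part of the record), the metrics referenced above reduce to arithmetic, consistent with the formula quoted from LeDell:

```python
# Hypothetical confusion-matrix counts for a binary classifier:
# TP = true positives, FN = false negatives, TN = true negatives, FP = false positives.

def sensitivity(tp, fn):
    # True positive rate: Sensitivity = TP / (TP + FN)
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: Specificity = TN / (TN + FP)
    return tn / (tn + fp)

print(sensitivity(tp=40, fn=10))  # 0.8
print(specificity(tn=45, fp=5))   # 0.9
```

The sketch merely demonstrates that these determinations are mathematical calculations of the kind discussed above.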
B. Guidance Step 2A, Prong 2
This part of the eligibility analysis evaluates whether the claim includes any additional steps/elements that integrate the recited judicial exception into a practical application of the exception. In this case, the additional steps/elements that are not part of the abstract idea are as follows:
a) acquiring a class-imbalanced data set, wherein the class-imbalanced data set comprises biological data from a plurality of subjects, wherein the biological data of each subject includes an observation, a time value, and a plurality of clinical measurements…;
After careful consideration, the above step amounts to “insignificant extra-solution activity”, i.e. activity incidental to the primary process or product that is merely a nominal or tangential addition to the claim. In particular, under the broadest reasonable interpretation, the above step amounts to necessary gathering of data for use by the abstract idea. Therefore, the above step does not integrate the judicial exception into a practical application. See MPEP 2106.05(g). Notably, claim 1 does not recite a processor. Claim 34 does recite a processor and memory; however, these limitations are recited at a high level of generality and read on a generic computer. Accordingly, these features are merely being used as tools to perform generic computer functions or the abstract idea, and therefore amount to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f). Even when viewed in combination, these additional elements do not integrate the recited judicial exception into a practical application. [Step 2A, Prong 2: NO].
C. Guidance Step 2B
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B. In this case, the claims do not include additional steps and/or elements appended to the judicial exception that are sufficient to amount to significantly more than the judicial exception(s) for the following reasons:
As discussed above, the claim does not recite any additional steps/elements that amount to significantly more than the judicial exception. In addition, a review of the specification teaches routine and conventional computer systems and architecture (including a data acquisition module) [0058-0060, Figure 2] for performing the additional steps discussed above. Therefore, even upon reconsideration, there is nothing unconventional with regard to the above non-abstract steps and elements being claimed. See MPEP 2106.05(d)(Part II). Thus, the independent claims as a whole do not amount to significantly more than the exception itself, and the claims are not patent eligible. [Step 2B: NO].
Dependent Claims
Dependent claims 3-5, 7-19, 33, and 35 have also been considered under the two-part analysis but do not include additional steps/elements appended to the judicial exception that are sufficient to amount to significantly more than the judicial exception(s). These claims are all directed to limitations that further limit the specificity of the abstract idea set forth above or the data being used by the abstract idea. Therefore, these claims also encompass a mental process and/or mathematical concepts for the reasons discussed above in the Step 2A (prong 1) analysis, and the claims as a whole are not patent eligible.
Response to Arguments
Applicant’s arguments, filed 10/16/2025, have been fully considered but are not persuasive for the following reasons.
Applicant again argues that the claims do not recite an abstract idea under Prong One. In response, contrary to applicant’s assertions, the examiner has explicitly identified the steps that recite the abstract idea and provided sufficient reasoning as to why these steps are abstract (Step 2A, prong 1 analysis above). Under the BRI, the claimed steps encompass manipulating data (downsampling) and analyzing data (by generating a survival model and performing calculations). The courts are clear that an invention directed to the “collection, manipulation, and display of data” is an abstract process. See Intellectual Ventures, 850 F.3d at 1340; see generally id. at 1340-41. Applicant is also reminded that the Office’s eligibility guidance does not set a limit on the size of the data or the number of calculations that can or cannot be performed mentally. MPEP § 2106.04(a)(2)III. In addition, the specification provides sufficient evidence that the claims are directed to an abstract idea since the specific descriptions provided for accomplishing these tasks only involve data reception and analysis [0010, 0011, 0039]. Similarly, the examiner has clearly identified the generating and determining steps as reciting mathematical concepts, citing applicant’s own specification [0008, 0011] and the art of Kamarudin et al. and LeDell et al. as evidence. For the reasons set forth above, the examiner maintains that the above claims fall squarely within one or more judicial exceptions (in this case a mathematical concept-type abstract idea and a mental process-type abstract idea) and that the two-step analysis has been properly applied as explained in MPEP 2106.04(a)(2), subsection III.
Applicant provides aggregate arguments that the generation of a survival model encompasses a machine learning concept that cannot be practically performed by the human mind and is therefore not a mental process (citing the August 4th USPTO Memo). In response, this argument is not persuasive for the following reasons. Firstly, applicant appears to be conflating survival analysis with machine learning. Survival analysis is a specific field of statistics, whereas machine learning is a broader set of algorithms used to find patterns in data. While it is true that there are survival models used in machine learning (e.g. for predicting the time until a specific event occurs), these models extend traditional survival analysis by using techniques like tree-based models (e.g., Random Survival Forests), neural networks, and Support Vector Machines (SVMs) to handle complex, high-dimensional datasets and non-linear relationships. In this case, the claimed “survival model” is generically recited and is not limited to any such machine learning techniques (claim 1 notably does not even recite a computer). In other words, there is no “AI innovation” or “machine learning algorithm” encompassed by the instant claims. Accordingly, the instant claims are not similar to the fact patterns of any of the claims cited in the August Memo (or the July 2024 AI-SME Update).
Secondly, the example cited in the July 2024 AI-SME Update (to which the August Memo points), namely a claim that does not recite a mental process because it cannot practically be performed in the human mind, is not on point with the instant claims. In particular, the cited AI-SME claim is directed to “a specific, hardware-based RFID serial number data structure” (i.e., an RFID transponder), where the data structure is uniquely encoded (i.e., there is “a unique correspondence between the data physically encoded on the [RFID transponder] with pre-authorized blocks of serial numbers”). ADASA Inc. v. Avery Dennison Corp., 55 F.4th 900, 909 (Fed. Cir. 2022). See also MPEP 2106.04(a)(2), subsection III(A). The instant claims, on the other hand, do not recite any such limitations. In summary, neither Applicant nor the specification provides any objective evidence of an improvement to the technology, nor does the specification explain the details of an unconventional technical solution expressed in the claim or identify technical improvements realized by the claim over the prior art. See MPEP 2106.04(d)(1) and MPEP 2106.05(a).
Applicant argues that the claimed invention integrates the abstract idea into a practical application by improving the functioning of computer technology and by implementing operations that improve survival models, improve the predictions generated by survival models, and address issues in feature detection (citing the specification para. 0004, 0047, 0053). In response, the claimed invention results in determining metrics associated with a model. Notably, claim 1 does not even recite a processor, and the processor and memory in claim 34 are recited at a high level of generality and read on a generic computer. Accordingly, the abstract idea is implemented on generic components that are recited at a high level of generality and are well-understood, routine, and conventional. As such, there is nothing in the specification to support applicant’s position that the claimed invention is directed to an improvement in a particular machine, computer, or computer functionality (as in Enfish). Unlike the McRO decision, where the ultimate product produced was a synchronized computer animation that was itself the transformative use, the result of the presently claimed method is information itself, without being directed to any particular use of that information. See also Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1355 (Fed. Cir. 2016) (“[M]erely selecting information, by content or source, for collection, analysis, and display does nothing significant to differentiate a process from ordinary mental processes.”). As such, the examiner maintains that the claims do not integrate the recited judicial exception into a practical application.
In addition, neither applicant nor the specification provides any objective evidence showing how the claimed invention improves the functioning of a computer. The examiner has also carefully reviewed the cited sections of the specification and determined that these sections amount to nothing more than a bare assertion of an improvement (without the detail necessary to be apparent to a person of ordinary skill in the art). Lastly, to the extent that applicant is arguing that the downsampling, generating, applying, and determining steps are the inventive concept (and that the claims result in a “more accurate” survival model), this argument is not persuasive because these steps were all interpreted as part of the abstract idea and analyzed under the Step 2A (prong 1) analysis. Applicant is again reminded that the claimed invention’s “use of the ineligible concept to which it is directed [i.e. the abstract idea] cannot supply the inventive concept that renders the invention ‘significantly more’ than that ineligible concept.” BSG Tech LLC v. BuySeasons, Inc., 899 F.3d 1281, 1290 (Fed. Cir. 2018). It is also well settled that mere computer-based efficiency does not save an otherwise abstract method. Bancorp Servs., L.L.C. v. Sun Life Assur. Co. of Canada (U.S.), 687 F.3d 1266, 1277-78 (Fed. Cir. 2012) (explaining that performance by computer of operations that previously were performed manually or mentally, albeit less efficiently, does not convert a known abstract idea into eligible subject matter). As such, the claims do not integrate the recited judicial exception into a practical application.
Applicant additionally argues that the examiner’s rejection under 35 USC 101 goes against the guidance of the USPTO (citing Ex Parte Desjardins). In response, unlike the instant claims, the claims in Ex Parte Desjardins are directed to a method for training a machine learning model and comprise a plurality of different active method steps directed to model training. Therefore, applicant’s arguments are not persuasive because the instant claims present an entirely different fact pattern. Similar to Recentive v. Fox, applicant is reminded that AI and machine learning claims must demonstrate a genuine technological advancement, beyond merely applying generic ML to a new use case or achieving increased speed and efficiency, to be considered patent-eligible under Section 101. See also MPEP 2106.04(d)(1) for a list of considerations for evaluating whether additional elements integrate a judicial exception into a practical application.
In addition, applicant is reminded that Board decisions are not “official USPTO guidance” unless they are designated as precedential, and applicant is strongly encouraged to review the recent Standard Operating Procedure (SOP) set forth by the Office regarding the new Precedential Opinion Panel (POP) process:
https://www.uspto.gov/sites/default/files/documents/SOP2%20R10%20FINAL.pdf .
At page 3: The Board enters thousands of decisions every year. Every decision other than a precedential decision by the Precedential Opinion Panel is, by default, a routine decision. A routine decision is binding in the case in which it is made, even if it is not designated as precedential or informative, but it is not otherwise binding authority.
The SOP details the processes by which a Board decision can be designated as precedential. Here is a list of the precedential and informative decisions, which is not static in time:
https://www.uspto.gov/patents-application-process/patent-trial-and-appeal-board/precedential-informative-decisions
On 35 U.S.C. 101 in particular, MPEP 2106.07(a) instructs: Examiners should not go beyond those concepts that are enumerated as abstract ideas in MPEP § 2106.04, unless they are identifying a tentative abstract idea in the claim, and should avoid relying upon or citing non-precedential decisions unless the facts of the application under examination uniquely match the facts at issue in the non-precedential decisions.
In summary, unless designated otherwise, PTAB decisions are binding only in the case in which they are rendered, and even then only with respect to the particular matter that was before the Board. Accordingly, applicant has not established a “unique match” between the instant claims and the facts of the PTAB decision in an unrelated application. As such, the examiner maintains that the Desjardins decision is not informative with respect to the instant application and is neither precedential case law nor official USPTO guidance. Therefore, for these reasons and those set forth above in the Step 2A (prong 1) analysis, the examiner maintains that the claims indeed recite an abstract idea. For at least these reasons, the rejection is maintained.
Claim Rejections - 35 USC § 112, Second Paragraph
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 3-5, 7-19, 33, 34, 35 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention. Claims that depend directly or indirectly from claims 1 and 34 are also rejected due to said dependency.
Claims 1 and 34 recite “generating a survival model by performing cross-validation…”. In this case, it is unclear in what way the claimed survival model is being “generated” as claimed. The specification teaches that “cross-validation” refers to any model building and validation technique for assessing model performance on the data used to build the model and how the results of a statistical analysis will generalize to an independent data set [0039]. However, one of ordinary skill in the art would recognize that “cross-validation” is a method for evaluating the performance of a model that has been previously selected, which is not the same as “generating” a model. In other words, if one is comparing different survival models or tuning parameters, the cross-validation results will enable one to select the best-performing model or optimal parameters. Accordingly, the claim is indefinite because it is unclear what computational operations are encompassed by the claimed “generating” (since cross-validation is a form of analysis that does not result in “generating” a model, per se). Applicant is reminded of MPEP 2111.01, section IV: An applicant is entitled to be his or her own lexicographer and may rebut the presumption that claim terms are to be given their ordinary and customary meaning by clearly setting forth a definition of the term that is different from its ordinary and customary meaning(s). See In re Paulsen, 30 F.3d 1475, 1480, 31 USPQ2d 1671, 1674 (Fed. Cir. 1994).
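For illustration only (the candidate models, scores, and data sizes below are hypothetical placeholders, not a construction of the claims), the distinction drawn above, that cross-validation selects among pre-specified models rather than generating one, can be sketched as follows:

```python
# Hypothetical sketch: k-fold cross-validation scores pre-specified candidate
# models on held-out folds; the best scorer is *selected*, not generated.

def k_fold_splits(n, k):
    # Yield (train_indices, test_indices) for k contiguous folds over n items.
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold))
        train = [j for j in range(n) if j not in test]
        yield train, test

def cross_validate(score_fn, n, k=5):
    # Average the held-out score across the k folds.
    scores = [score_fn(train, test) for train, test in k_fold_splits(n, k)]
    return sum(scores) / len(scores)

# Two hypothetical pre-existing models, stubbed as constant scoring functions;
# cross-validation merely ranks them so the better one can be selected.
candidates = {"model_a": lambda train, test: 0.50,
              "model_b": lambda train, test: 0.75}
best = max(candidates, key=lambda name: cross_validate(candidates[name], n=100))
print(best)  # model_b
```

As the sketch shows, both models exist before cross-validation begins; the procedure only evaluates and ranks them, which is the basis of the indefiniteness noted above.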
Claims 1 and 34 recite “applying an elastic net penalty to the generated survival model”. It is unclear as to the metes and bounds of the term “elastic net penalty”, and a review of the specification does not provide any limiting definition that would serve to clarify the scope. While the artisan would generally understand that an “elastic net penalty” relates to a term added to a survival model’s loss function (e.g. to optimize coefficients), a review of the specification does not provide any limiting definition for the claimed “elastic net penalty” such that it is associated with specific equations, functions, or parameters. At best, the specification generically refers to “penalized regression techniques” [0009] and a regression equation [0081]. However, Applicant is reminded that it is improper to import narrowing limitations found in the specification into the claims. See MPEP 2111.01. Accordingly, it remains unclear what computational operations are encompassed by the claimed “applying” step. Clarification is requested via amendment.
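For illustration only (the coefficients and weights below are arbitrary placeholders, and this parameterization is merely one common convention, not a construction of the claim term), the general understanding noted above, a weighted blend of L1 and L2 terms added to a model's loss, can be written out as:

```python
# Hypothetical sketch of an elastic net penalty: a weighted blend of an L1
# (lasso) term and an L2 (ridge) term that is added to a model's loss function.

def elastic_net_penalty(coefs, alpha=1.0, l1_ratio=0.5):
    l1 = sum(abs(c) for c in coefs)   # L1 term: drives coefficients toward zero
    l2 = sum(c * c for c in coefs)    # L2 term: shrinks coefficient magnitudes
    return alpha * (l1_ratio * l1 + (1 - l1_ratio) * 0.5 * l2)

# Arbitrary placeholder coefficients:
print(elastic_net_penalty([0.5, -0.25, 0.0, 1.0]))  # 1.203125
```

The sketch illustrates why, absent a limiting definition, the specific equation, weighting, and parameters encompassed by the claimed “applying” step remain unclear.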
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action:
(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims under 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of 35 U.S.C. 103(c) and potential 35 U.S.C. 102(e), (f) or (g) prior art under 35 U.S.C. 103(a).
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-5, 7-19, 33, 34, 35 are rejected under 35 U.S.C. 103(a) as being unpatentable over Rubenstein et al. (US2018/0226153A1; Pub. Date: 08/09/2018) in view of Zou et al. (J. R. Statist. Soc. B, 2005, 67, Part 2, pp.301-320), Kamarudin et al. (BMC Medical Research Methodology, 2017,17:53, pp.1-19), and He et al. (IEEE Transactions On Knowledge And Data Engineering, 2009, Vol. 21, No. 9, pp. 1263-84).
Regarding claim(s) 1 and 34, Rubenstein teaches systems and methods for predicting treatment-regimen-related outcomes (e.g., risks of regimen-related toxicities). In particular, Rubenstein teaches obtaining one or more training datasets using a plurality of different types of biological data including non-genetic data (e.g., the clinical data), genetic data (e.g., the selected SNPs and the filtered SNPs), observations, and time [0044, 0063, 0064, Figure 2, Tables 6-18], which are construed as class-imbalanced data sets given the breadth of the claims. Rubenstein teaches methods for handling data that include categorization into minority and majority classes [0078, 0044, 0084]. Rubenstein teaches both down-sampling data (e.g., sampling the majority class to create balance with the minority class) and up-sampling data (e.g., selecting additional minority class subjects with replacement to increase the minority class size) during the process of building a model to adjust for model imbalances [0044, 0084], which reads on downsampling as claimed. Rubenstein teaches performing 10-fold cross-validation on the training set data to determine the optimal tuning parameter settings and optimal predictive model [0045, 0085]. Rubenstein teaches a model building process that includes selecting (i.e. generating) a survival model that predicts regimen-related outcomes as a function of health and survival time based on penalized logistic regression, random forests (RF), and C5.0 [0064, 0088, Figure 3], which reasonably suggests generating a survival model based on cross-validation.
Rubenstein does not specifically teach applying an elastic net penalty to the generated survival model, as claimed. However, Rubenstein suggests this feature by using penalized logistic regression for model parameter selection, as set forth above, wherein the penalized regression model is broadly interpreted as a net penalty model absent any limiting definition or specific equations to the contrary.
Moreover, Zou explicitly teaches methods for optimizing model parameters using the elastic net penalty [See entire]. The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n) [Summary]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Rubenstein by applying an elastic net penalty, as claimed, since such techniques are routine and conventional in the art of model selection, as taught by Zou. The motivation would have been model optimization when the number of predictors is greater than the number of observations.
Rubenstein does not specifically teach “determining an AUC, sensitivity, specificity, and/or C-index of the survival model, wherein the AUC, sensitivity, specificity, and/or C-index of the survival model is closer to 1 than an AUC, sensitivity, specificity, and/or C-index of a survival model where the class-imbalanced data set was not downsampled prior to the survival analysis”. However, Rubenstein at a minimum suggests this limitation by selecting final predictive models based on sensitivity and specificity calculations [0052 and Figure 3]. It is additionally noted that limitations directed to intended use or the suggestion of steps are not given patentable weight.
That being said, Kamarudin teaches a plurality of different survival functions and methods for calculating a plurality of different model parameters associated with survival including ROC values, sensitivity, and specificity [Table 1]. Alternatively, He et al. (IEEE Transactions On Knowledge And Data Engineering, 2009, Vol. 21, No. 9, pp. 1263-84) teaches methods of analyzing imbalanced data sets that include sampling techniques (up/over and down/under sampling) and ROC calculations, wherein downsampling is associated with specific learning algorithms that attempt to rebalance the data [Section 3.1.2]. He also teaches a kernel-based algorithm that integrates the concepts of cross validation, and area under curve (AUC) and sensitivity evaluation metrics to develop an objective function as a selection mechanism of the most optimal kernel model [Section 3.3.3 and Section 4.2].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Rubenstein by determining an AUC, sensitivity, specificity, and/or C-index of a survival model, wherein the AUC, sensitivity, specificity, and/or C-index of the survival model is closer to 1 than an AUC, sensitivity, specificity, and/or C-index of a survival model where the class-imbalanced data set was not downsampled prior to the survival analysis, since Rubenstein suggests generating a survival model using cross-validation techniques, as set forth above, and since AUC, sensitivity, and specificity are routine and conventional performance metrics for evaluating models (using imbalanced data), as taught by Kamarudin and He. In addition, one of ordinary skill in the art would recognize that these performance metrics are variable and thus are easily optimized for model selection. The rationale would have been the predictable use of prior art elements according to their established functions. KSR, 550 U.S. at 417. Additional motivation would have been to perform routine optimization to identify the most accurate model.
Regarding claim(s) 3, 4, 7-19, 33, 35, the combination of Rubenstein, Zou, Kamarudin, and He teaches or suggests all aspects of these claims for the following reasons. Regarding claim(s) 3, 4, Rubenstein teaches class-imbalanced survival data associated with disease [0038, 0044, 0063, 0064, Figure 2, Tables 6-18]. Regarding claim(s) 7, 11, Rubenstein teaches 10-fold cross-validation procedures, as set forth above. Regarding claim(s) 8, 9, 10, 33, Rubenstein teaches model features that include clinical factors including medical history, gender, age, ethnicity, and demographic information [0038] as well as proteomic information, transcriptomic information, and metabolomic information [0034]. Kamarudin additionally teaches Cox models throughout [page 9]. Regarding claim(s) 35, Rubenstein teaches calculating risk for patients with cardiac dysfunction [0056], which at a minimum suggests myocardial infarction.
Regarding claim(s) 12-19, Rubenstein, Zou, He, and Kamarudin do not specifically teach the various claimed percentages of majority class data and minority class data. However, classification percentages are considered results-effective variables that are routinely optimizable. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to have modified the teachings of Rubenstein and Kamarudin by using the various claimed percentages of majority and minority groups with a reasonable expectation of success, since Rubenstein already teaches that changes in the amount of majority and minority groupings directly affects model performance [0044], and since one of skill in the art would recognize that such percentages are variable and could easily be optimized based on the disease under investigation. Additional motivation would have been to perform routine optimization to yield better results, as taught by Rubenstein [0044].
Response to Arguments
Applicant’s arguments have been fully considered but are moot in view of the modified rejection set forth above, which relies on newly applied art.
Cited Prior Art
The following prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Khan et al. (Stat Comput (2016) 26:725–741) teaches methods for selecting variables for survival data models using elastic net penalties. Khan compares the performance of these approaches with six other variable selection techniques: three are generally used for censored data and the other three are correlation-based greedy methods used for high-dimensional data.
Conclusion
No claims are allowed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PABLO S WHALEY whose telephone number is (571)272-4425. The examiner can normally be reached between 1pm-9pm EST.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anita Coope can be reached at 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PABLO S WHALEY/Primary Examiner, Art Unit 3619