Prosecution Insights
Last updated: April 19, 2026
Application No. 17/164,602

SYSTEMS AND METHODS FOR GENERATING A NUTRITIVE PLAN TO MANAGE A UROLOGICAL DISORDER

Non-Final OA: §101, §103, §DP
Filed: Feb 01, 2021
Examiner: ELSHAER, ALAAELDIN M
Art Unit: 3687
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kpn Innovations LLC
OA Round: 3 (Non-Final)
Grant Probability: 36% (At Risk)
Projected OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 67%

Examiner Intelligence

Career Allow Rate: 36% (grants only 36% of cases; 74 granted / 208 resolved; -16.4% vs TC avg)
Interview Lift: +31.3% on resolved cases with interview (a strong lift)
Avg Prosecution: 2y 10m typical timeline; 37 applications currently pending
Career History: 245 total applications across all art units
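The headline 36% allow rate is just the raw grant count over resolved cases. A quick arithmetic check (the -16.4% delta is read back from the report, so the Tech Center average is implied rather than independently sourced):

```python
# Sanity-check the dashboard's career allow rate from its own raw counts.
granted, resolved = 74, 208

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 35.6%, shown rounded as 36%

# The report states this is 16.4 points below the Tech Center average,
# which implies (but does not independently source) the TC 3600 figure:
implied_tc_avg = allow_rate + 16.4
print(f"Implied TC 3600 average: {implied_tc_avg:.1f}%")
```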

Statute-Specific Performance

§101: 37.4% (-2.6% vs TC avg)
§103: 36.7% (-3.3% vs TC avg)
§102: 5.3% (-34.7% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 208 resolved cases

Office Action

§101 §103 §DP
DETAILED ACTION

This Office action is based on the claim set submitted and filed on 11/20/2025. Claims 1-2 and 11-12 have been amended. Claims 1-20 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/20/2025 has been entered.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1-10 are drawn to an apparatus/system and claims 11-20 are drawn to a method, each of which is within the four statutory categories (i.e., a machine and a process). Claims 1-20 are further directed to an abstract idea on the grounds set out in detail below. Under Step 2A, Prong 1, the steps of the claims recite an abstract idea: a series of steps describing a process for generating a health nutrition plan.
This abstract idea could have been performed in the human mind but for the fact that the claims recite a general-purpose computer processor to implement it. The steps recite a process of collecting health data, classifying a condition, and providing a nutrition plan; both the instant claims and the abstract idea describe a mental process that can be performed in the human mind with the aid of pencil and paper. Independent claim 1 recites, and claim 11 recites similar steps directed to: “the system comprising a computing device, wherein the computing device is configured to: receive an input comprising physiological data; extract at least one disease marker related to at least one urological disorder; generate a disease marker classifier, wherein generating the disease marker classifier comprises: receiving disease marker training data correlating disease markers related to urological disorders to a urological disorder label; sort the disease marker training data according to one or more categorizations using a natural language processing algorithm, wherein the one or more categories are generated using at least a correlation algorithm identifying classifications within the disease marker training data to determine a classification of the input using feature similarity to analyze how closely out-of-sample-features resemble the disease marker training data training the disease marker classifier using the disease marker training data and the classifications; classify, using the disease marker classifier, the at least disease marker to a urological disorder label; generate a nutritive plan as a function of the urological disorder label”. The limitations, as drafted and given the broadest reasonable interpretation, cover performance of the limitations by the human mind with the aid of pen and paper, constituting a mental process along with certain methods of organizing human activity, and thus an abstract idea, but for the recitation of generic computer components.
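The recited pipeline can be sketched as plain data flow. The sketch below is purely illustrative: every function name, marker set, and lookup table is invented for the example and is not from the application's specification. It shows the shape of the recited steps (receive data, extract markers, classify, generate plan), each of which reduces to a lookup or judgment of the kind the rejection characterizes as a mental process:

```python
# Hypothetical sketch of the recited steps; names, markers, and plans
# are illustrative stand-ins, not the application's actual method.
def extract_disease_markers(physiological_data: dict) -> list:
    # "extract at least one disease marker related to at least one urological disorder"
    urological_markers = {"urine_protein", "creatinine", "urine_ph"}
    return [k for k in physiological_data if k in urological_markers]

def classify_marker(marker: str) -> str:
    # Stand-in for the trained disease marker classifier: a label lookup.
    label_table = {"urine_protein": "CKD", "creatinine": "CKD", "urine_ph": "UTI"}
    return label_table.get(marker, "unknown")

def generate_nutritive_plan(label: str) -> str:
    # "generate a nutritive plan as a function of the urological disorder label"
    plans = {"CKD": "low-sodium, controlled-protein diet",
             "UTI": "increased fluid intake"}
    return plans.get(label, "general plan")

markers = extract_disease_markers({"urine_protein": 1.2, "heart_rate": 72})
plan = generate_nutritive_plan(classify_marker(markers[0]))
print(plan)  # low-sodium, controlled-protein diet
```

Each stage here could be carried out by hand with a reference table, which is the crux of the examiner's Step 2A, Prong 1 position.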
The claimed concept encompasses performance of the limitations as a mental process: a user could manually obtain individual or population health data, extract markers of a disease or disorder, provide labels, correlate each label to a condition subset, and generate a nutrition plan for the labeled condition. These steps recite a mental process that could have been performed in the human mind with the aid of pen and paper but for the mere nominal recitation of the “computing device” to implement the abstract idea, performing steps of observation, evaluation, judgment, and opinion; see MPEP § 2106.04(a)(2)(III). Accordingly, the claim limitations (in bold) recite an abstract idea. Any limitations not identified above as part of the mental process are deemed "additional elements" and will be discussed in further detail below.

Under Step 2A, Prong 2, this judicial exception is not integrated into a practical application because the remaining elements amount to no more than general-purpose computer components programmed to perform the abstract idea, linking the abstract idea to a particular technological environment. In particular, the claims recite additional elements such as “computing device, natural language processing” that are recited at a high level of generality to perform the steps of the claim, e.g., “training classifier,” amounting to no more than adding the words "apply it" (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f). As set forth in the 2019 Eligibility Guidance, 84 Fed. Reg.
at 55, "merely includ[ing] instructions to implement an abstract idea on a computer" is an example of when an abstract idea has not been integrated into a practical application. Accordingly, looking at the claim as a whole, individually and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.

Under Step 2B, the claims do not include additional elements that are sufficient to amount to "significantly more" than the judicial exception because, as mentioned above, the additional elements amount to no more than generic computing components recited at a high level of generality; they do not present improvements to another technology or technical field, nor do they effect an improvement to the functioning of the computer itself, and they amount to no more than mere instructions to perform the abstract idea, i.e., adding the words "apply it" (or an equivalent) to apply the exception using generic computer components; see MPEP 2106.05(f). There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation, and mere instructions to apply an exception using a generic computer component cannot provide an inventive concept; see Alice, 573 U.S. at 223 ("mere recitation of a generic computer cannot transform a patent-ineligible abstract idea into a patent-eligible invention."). The claims are not patent eligible.

Dependent claims 2-10 and 12-20 include all of the limitations of claims 1 and 11, respectively, and therefore likewise incorporate the above-described abstract idea.
While the dependent claims add additional limitations, as for claims 2, 6-8, 12, and 16-18, the claims recite limitations that, under the broadest reasonable interpretation, further define the abstract idea noted in the independent claims and cover performance by the human mind with the aid of pen and paper but for the recitation of generic computer components. These claims are similarly rejected because they merely further define the abstract idea and do not further limit the claims to a practical application or provide an inventive concept such that the claims are subject matter eligible. The claims recite additional elements “machine learning, user device, computing device” that implement the identified abstract idea. These hardware components are recited at a high level of generality (i.e., general-purpose computers/components implementing generic computer functions; Applicant's specification makes no mention of any specific hardware) to perform the steps, e.g., “train[ing],” “output[ting]...,” amounting to no more than the words "apply it" with a computer, which is still mere instructions to apply the exception using generic computer components, adding insignificant extra-solution activity to the judicial exception, i.e., stor[ing]; see MPEP 2106.05(d), (g), and OIP Techs., and generally linking the use of the judicial exception to a particular technological environment or field of use; see MPEP 2106.05(h). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Additionally, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional elements amount to no more than mere instructions to apply the exception using generic computer components and have been re-evaluated under the “significantly more” analysis. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept ("significantly more"). As for claims 3-5, 9-10, 13-15, and 19-20, the claims recite limitations that, under the broadest reasonable interpretation, further define the abstract idea noted in the independent claims and cover performance by the human mind with the aid of pen and paper but for the recitation of generic computer components; these claims are similarly rejected because they merely further define the abstract idea and do not further limit the claims to a practical application or provide an inventive concept such that the claims are subject matter eligible.

Claim Rejections - 35 USC § 103

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5, 7-9, 11-13, 15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Bradley et al. (US 2021/0293829 A1, “Bradley”) in view of Kamine (US 2021/0383612 A1) in view of Nankani et al. (“Detection Analysis of Various Types of Cancer by Logistic Regression using Machine Learning,” “Nankani”) in view of Lisi et al.
(US 2021/0034912 A1, “Lisi”).

Regarding Claim 1 (Currently Amended), Bradley teaches a system for generating a nutritive plan to manage a urological disorder, the system comprising a computing device, wherein the computing device is configured to:

receive an input comprising physiological data: Bradley discloses receiving a sample, e.g., urine, blood, etc., comprising a plurality of biomarkers (Bradley: [0009], [0074], [0103], [0326]);

extract at least one disease marker related to at least one urological disorder: Bradley discloses the received sample comprising a plurality of biomarkers such as urine protein, sodium, urine specific gravity, urine pH, WBC, RBC, etc., associated with a chronic kidney disease (CKD) (Bradley: [0009], [0093], [0103], [0191], [0325]);

generate a disease marker classifier: Bradley discloses using the plurality of biomarkers related to a chronic kidney disease (CKD) to generate a classification algorithm as a classifier to determine a classification of the disease (Bradley: [0009], [0095], [0104]);

wherein generating the disease marker classifier comprises: receiving disease marker training data correlating disease markers related to urological disorders to a urological disorder label: Bradley discloses a training data set comprising a plurality of biomarkers related to a chronic kidney disease (CKD) from which a classification label of the disease is derived (Bradley: [0009], [0104], [0111]);

training the disease marker classifier using the sorted disease marker training data and the classification: Bradley discloses the training dataset comprising a plurality of biomarkers related to a chronic kidney disease (CKD) used to train the classification algorithm/classifier using machine learning algorithm(s) (Bradley: [0009], [0116], [0124-0125], [0132]);

classify, using the disease marker classifier, the at least disease marker to a urological disorder label: Bradley discloses classifying biomarkers related to a chronic kidney disease (CKD) (Bradley: [0009], [0105-0106], [0117]);
generate a nutritive plan as a function of the urological disorder label: Bradley discloses customizing a recommendation for a dietary regimen to treat or prevent CKD (Bradley: [0010], [0026], [0071], [0107], [0120], [0147], [0156]).

Bradley teaches inputting biomarker training data and a classification algorithm classifying biomarkers, dividing the training dataset into subsets for different prediction models for different categories of disease risk, and filtering the training dataset, where the classification algorithm comprises, among others, a logistic regression algorithm, an artificial neural network algorithm (ANN), a recurrent neural network algorithm (RNN), and a K-nearest neighbor algorithm (KNN) ([0112], [0122-0127], [0131-0133]). However, Bradley does not expressly disclose: sort the disease marker training data according to one or more categorizations using a natural language processing algorithm (NLP), wherein the one or more categories are generated using at least a correlation algorithm identifying classifications within the disease marker training data to determine a classification of the input using feature similarity to analyze how closely out-of-sample-features resemble the disease marker training data.

Kamine teaches sorting the disease marker training data according to one or more categorizations using a natural language processing algorithm, wherein the one or more categories are generated using at least a correlation algorithm (Kamine: [0031], [0039]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bradley, which discloses classification of urological disease biomarkers in training data, to incorporate sorting the training data into categorizations or classifications using NLP, as taught by Kamine, which helps model relationships between two or more categories of data elements (Kamine: [0035]).
Nankani teaches identifying classifications within the disease marker training data to determine a classification of the input using feature similarity to analyze how closely out-of-sample-features resemble the disease marker training data: Nankani discloses detection and analysis of a disease/cancer and classification of cancer using algorithms such as K-nearest neighbor (KNN) classification, where the KNN algorithm works on the concept of feature similarity, that is, how closely out-of-sample features resemble the training set determines how a given data point is classified (Nankani: [p. 100, col. 2], [p. 101, cols. 1-2]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bradley, which discloses classification of urological disease biomarkers in training data, to incorporate determining a classification of the input using feature similarity to analyze how closely out-of-sample features resemble the training data, as taught by Nankani, which helps when training on a small data set and gives results very fast (Nankani: [p. 101, col. 1]).

Lisi discloses a classifier as a biomarker identifying features relevant to a disease label (Lisi: [0062], [0075], [0083], [0162], [0237]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bradley, which discloses classification of urological disease biomarkers and labels, to incorporate a correlation of biomarker to label, as taught by Lisi, which helps improve the efficiency or accuracy of the biomarker (Lisi: [0062]).
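The "feature similarity" idea the rejection cites from Nankani (and that the claim language tracks) can be shown in a few lines: an out-of-sample point is labeled by majority vote among its nearest training vectors. This is a minimal generic sketch of KNN; the feature vectors and labels below are invented for illustration and are not from any of the cited references:

```python
# Minimal KNN sketch: classify a query by how closely its features
# resemble labeled training vectors (Euclidean distance, majority vote).
import math
from collections import Counter

def knn_classify(query, training_data, k=3):
    """Label `query` by majority vote among the k nearest training vectors."""
    nearest = sorted(training_data, key=lambda row: math.dist(row[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# (feature vector, disorder label) pairs: purely illustrative data
training = [
    ((1.0, 0.9), "CKD"), ((1.1, 1.0), "CKD"), ((0.9, 1.1), "CKD"),
    ((0.1, 0.2), "healthy"), ((0.2, 0.1), "healthy"), ((0.0, 0.1), "healthy"),
]
print(knn_classify((1.05, 0.95), training))  # CKD
```

Nothing in the procedure is computer-specific beyond scale, which is essentially the examiner's point; the Applicant's counter is that the vector representation and similarity measures make it a technical process.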
Regarding Claim 2 (Currently Amended), the combination of Bradley, Kamine, Nankani, and Lisi teaches the system of claim 1, wherein the computing device is further configured to:

receive disease predictor training data, wherein the disease predictor training data correlates disease markers related to urological disorders and urological disorder labels with disease predictor scores: Bradley discloses receiving a biomarker as a predictor for predicting a risk of chronic kidney disease (CKD) to derive probability scores or a classification label (Bradley: [0020-0021], [0092], [0110], [0116]);

wherein the disease predictor training data is received from one or more past iterations of a previous predictor training data vectors: Bradley discloses that the classification algorithm is trained using KNN with dynamic time warping (DTW) and using stratified subsets of a training dataset to create a predictor, after various time periods of a visit, that predicts a risk of developing CKD during which an amount of one or more biomarkers is determined (Bradley: [0098], [0132-0133]);

train, using the disease predictor training data, a machine-learning process: Bradley discloses the training dataset comprising a plurality of biomarkers related to a chronic kidney disease (CKD) used to train the classification algorithm/classifier using machine learning algorithm(s) (Bradley: [0111]);

generate, for each disease marker of a plurality of disease markers, a disease predictor score as a function of the machine-learning process and each respective disease marker of the plurality of disease markers (Bradley: [0009], [0022], [0116]).
Regarding Claim 3, the combination of Bradley, Kamine, Nankani, and Lisi teaches the system of claim 2, wherein the computing device is further configured to: identify a disease marker of the plurality of disease markers having a highest disease predictor score: Bradley discloses using range levels of biomarkers as reference values and identifying the upper limit of each biomarker, i.e., identifying the creatinine level (Bradley: [0010], [0030], [0062], [0075]); generate the nutritive plan as a function of the identification: Bradley discloses generating a dietary regimen based on monitored biomarker output (Bradley: [0010], [0071], [0107]).

Regarding Claim 5, the combination of Bradley, Kamine, Nankani, and Lisi teaches the system of claim 1, wherein the at least one disease marker comprises a diagnostic disease marker: Bradley discloses the received sample comprising a plurality of biomarkers such as urine protein, sodium, urine specific gravity, urine pH, WBC, RBC, etc., associated with a chronic kidney disease (CKD) (Bradley: [0009], [0093], [0103], [0191], [0325]).

Regarding Claim 7, the combination of Bradley, Kamine, Nankani, and Lisi teaches the system of claim 6, wherein outputting the nutritive plan further comprises outputting a message independent of a presence of the nutritive plan (Bradley: [0168]).

Regarding Claim 8, the combination of Bradley, Kamine, Nankani, and Lisi teaches the system of claim 1, wherein the computing device is further configured to output the nutritive plan to a user device: Bradley discloses displaying the customized recommendation on a graphical user interface (Bradley: [0011], [0092]).

Regarding Claim 9, the combination of Bradley, Kamine, Nankani, and Lisi teaches the system of claim 1, wherein the nutritive plan manages a plurality of disorders: Bradley discloses a dietary regimen for treatment of CKD diseases (Bradley: [0063], [0071], [0120]).
Regarding Claim 11, Bradley teaches a method for generating a nutritive plan to manage a urological disorder. The motivations to combine the above-mentioned references are discussed in the rejection of claim 1 and incorporated herein.

Regarding Claims 12-13, 15, and 17-19, the claims recite substantially similar limitations to claims 2-3, 5, and 7-9 and, as such, are rejected for similar reasons as given above.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Bradley et al. (US 2021/0293829 A1, “Bradley”) in view of Kamine (US 2021/0383612 A1) in view of Nankani et al. (“Detection Analysis of Various Types of Cancer by Logistic Regression using Machine Learning,” “Nankani”) in view of Lisi et al. (US 2021/0034912 A1, “Lisi”) in view of Brewer et al. (US 2021/0233611 A1, “Brewer”).

Regarding Claim 4, the combination of Bradley, Kamine, Nankani, and Lisi teaches the system of claim 1. As to wherein the physiological data includes results of a prostate-specific antigen test, Bradley does not disclose a prostate-specific antigen test. Brewer discloses testing or screening for prostate cancer comprising a biomarker such as monitored levels of prostate-specific antigen (PSA) for detecting prostate cancer (Brewer: [0216-0217], [0256], [0352]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bradley, which discloses classification of urological disease biomarkers and labels, to incorporate a prostate-specific antigen biomarker, as taught by Brewer, which helps determine the existence of different cancer populations, assists in the targeting of therapy, and helps avoid treatment-associated morbidity in men with indolent disease (Brewer: [0011], [0256]).

Regarding Claim 14, the claim recites substantially similar limitations to claim 4 and, as such, is rejected for similar reasons as given above.
Claims 6, 10, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bradley et al. (US 2021/0293829 A1, “Bradley”) in view of Kamine (US 2021/0383612 A1) in view of Nankani et al. (“Detection Analysis of Various Types of Cancer by Logistic Regression using Machine Learning,” “Nankani”) in view of Lisi et al. (US 2021/0034912 A1, “Lisi”) in view of Avery et al. (US 2021/0241881 A1, “Avery”).

Regarding Claim 6, the combination of Bradley, Kamine, Nankani, and Lisi teaches the system of claim 1, wherein generating the nutritive plan further comprises: receiving nutritive plan training data, wherein the nutritive plan training data correlates nutritive plans to nutritive plans with a historical ameliorative or preventive effect on urological disorders: Bradley discloses a treatment or regimen with an amount of a substance found beneficial and effective to reduce the risk of CKD (Bradley: [0063]). However, Bradley does not expressly disclose training machine learning using training data that correlates a nutritive plan to a nutritive plan with a historical effect on a disorder and updating the plan.

Avery teaches receiving nutritive plan training data, wherein the nutritive plan training data correlates nutritive plans to nutritive plans with a historical ameliorative or preventive effect on urological disorders: Avery discloses a nutrition therapy diet plan for a user and comparing a selected plan to the historical effectiveness of the selected plan (Avery: [0055-0056], [0062], [0088], [0116-0117]); training a machine-learning process using the nutritive plan training data (Avery: [0055], [0062], [0118]); outputting the nutritive plan as a function of the urological disorder and the machine-learning process: Avery discloses adjusting and presenting a diet plan based on the needs of the user (Avery: [0086], [0092], [0121]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bradley, which discloses identifying effective treatment, to incorporate comparing a treatment to past outcomes, using the treatment data to train machine learning, and providing output, as taught by Avery, which helps improve patient outcomes (Avery: [0118]).

Regarding Claim 10, the combination of Bradley, Kamine, Nankani, and Lisi teaches the system of claim 1, wherein the computing device is further configured to: receive a second input: Bradley discloses receiving new data comprising biomarkers (Bradley: [0238]); reclassify the at least one disease marker from the second input to a urological disorder label: Bradley discloses using the new data to re-adjust the initial logistic model to classify subjects’ condition (Bradley: [0257]). However, Bradley does not expressly disclose updating the plan based on the additional or new input. Avery teaches updating the nutritive plan as a function of the second input: Avery discloses updating patient diet parameters based on adjusted needs or new information provided as an input (Avery: [0092], [0094]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bradley, which discloses identifying effective treatment, to incorporate updating the treatment plan, as taught by Avery, which helps improve patient outcomes (Avery: [0118]).

Regarding Claims 16 and 20, the claims recite substantially similar limitations to claims 6 and 10 and, as such, are rejected for similar reasons as given above.

Response to Amendment

Applicant's arguments filed 11/20/2025 have been fully considered by the Examiner and are addressed as follows. In the remarks, Applicant argues in substance that: Applicant remarks with respect to the Double Patenting rejection on page 6. In response to the Applicant's argument and claim amendment, the Examiner withdraws the DP rejection.
Applicant's arguments with respect to the 35 U.S.C. § 101 rejection appear on pages 7-14. On pages 8-9 of the remarks, the Applicant argues “Claim 1 as amended recites a process to... The process, just like in Synopsys, except in its most simplistic form, could not conceivably be performed in the human mind or with pencil and paper because the ‘classification method that utilizes feature similarity to analyze how closely out-of-sample-features resemble training data to classify input data to one or more clusters and/or categories of features as represented in training data; this may be performed by representing both training data and input data in vector forms, and using one or more measures of vector similarity to identify classifications within training data, and to determine a classification of input data,’ Paragraph [0059] of the instant Specification.” The Examiner respectfully disagrees. First, in Synopsys, the claims are directed to data encryption and manipulation for computer communication, which is a process that cannot be practically performed in the human mind. In contrast, the instant claims are not similar to Synopsys: under BRI, they recite a process for creating a disease classifier by collecting and sorting the training data and training the classifier to classify a disease marker to a urological disorder, evaluating disease markers and classifying each disease marker to a disease label, while reciting steps for performing the classification process that can be performed by the human mind. For example, the Applicant pointed to specification [0059]: “generate a classifier using a K-nearest neighbors (KNN) algorithm.
A ‘K-nearest neighbors algorithm,’ as used in this disclosure, includes a classification method that utilizes feature similarity to analyze how closely out-of-sample-features resemble training data to classify...” Yet the human mind performs tasks analogous to the K-nearest neighbors (KNN) algorithm, especially in recognizing patterns, classifying new information, and making judgments based on similarity to past experiences (neighbors), finding the closest matches of similarity in memory for classification. As such, the Examiner asserts that the claims do not describe any technical steps, such as a machine operation process of creating and receiving training data, in a manner that cannot be performed by a human.

Moreover, while the Applicant argues on page 9 of the remarks regarding “The August 2025 Memo,” the Examiner finds that the Applicant's instant claims recite, under BRI, additional elements such as a processor, NLP, and machine learning that are described at a high level of generality and as a tool to implement the abstract idea. As mentioned in the rejection above, the identified additional elements have been analyzed under Step 2A, Prong 2, as recited at a high level of generality as a tool to perform the abstract idea steps, e.g., “train[ing]...,” amounting to no more than adding the words "apply it" (or an equivalent) to apply the exception using generic computer components; see MPEP 2106.05(f). The claim does not positively recite how this process applies to the claim limitations. As explained in the prior response to arguments, the Examiner encourages the Applicant to demonstrate HOW such a process is applied to the claim steps to output the claimed results, rather than arguing that the specification includes a high-level description of the algorithm training method.
Even when considering the claims' additional elements (e.g., processor, natural language processor), the claims as a whole, individually and in combination, provide no integration of the abstract idea into a practical application, and no meaningful limits on practicing the abstract idea are introduced; see MPEP 2106. The claims as a whole are therefore directed to an abstract idea.

Furthermore, Applicant argues that Example 39 recites a limitation for “training neural network in a first stage using the first training set.” However, that claim was found not to recite a judicial exception not merely because a training data set is used to train a neural network. Rather, the neural network is trained with this (first) training set, and images that are mischaracterized after this first training are then used to retrain the neural network: an expanded training set is developed by applying transformation functions on an acquired set of facial images, and these transformations can include affine transformations to detect faces in distorted images while limiting the number of false positives. The system is retrained with an updated or second training set containing the false positives produced after face detection has been performed on a set of non-facial images. In contrast, the claims as amended provide no description of how a machine learning model is trained to provide an improved neural network, but only disclose the combination of classifying a disease marker by training a classifier using disease markers.

On page 10 of the remarks, the Applicant argues “claims 2 and 12 as amended recite the limitation ‘wherein the disease predictor training data is received from one or more past iterations of a previous predictor training data vectors; training, using the disease predictor training data, a machine-learning process,’ which do not recite a judicial exception.” The Examiner respectfully disagrees.
Examiner finds that the recitation of "training data is received from one or more past iterations of a previous predictor training data vectors" describes, under BRI, receiving training data but does not describe how the past iterations influence the disease predictor training or training data; rather, it describes that the disease predictor training data is received in an arbitrary form, e.g., from remote devices, from past iterations, etc., see (Applicant PGPub [0069]).

On page 12 of the remarks, the Applicant argues: "Analogous to Example 47 and claims 1 and 3, in the present application, claim 1 as amended also teaches technological improvement that is integrated into a practical application. For instance, paragraph [0008] states '[a] practical application of this technology includes the use of a machine-learning process to provide a user access to nutritive plans that may improve and/or relieve symptoms related to urological disorder. The systems and methods allow for an update of the comestible plan if the urological disorder does not improve.' In addition, paragraph [0059] describes the technical aspects of the improvement, namely, 'a classification method that utilizes feature similarity to analyze how closely out-of-sample-features...'" Examiner respectfully disagrees. As explained by the Examiner in the prior response to arguments dated 5/20/2025, Example 47 claim 1 was found eligible because the claim does not recite a judicial exception, while claim 3 was found eligible because it recites limitations (e.g., steps (d)-(f)) that cannot be practically performed in the human mind. In addition, steps (d)-(f) demonstrated an improvement in the technical field of network intrusion detection. Accordingly, Example 47 claim 3 as a whole integrates the judicial exception into a practical application by improving network security.
In contrast, the instant claim 1 does not describe an improvement to a technology or technical field; rather, it improves the abstract idea of classifying a disease marker while describing, at a high level, the training of a classifier. This is clear in the Applicant's specification at [0008], which describes a use of machine learning at a high level of generality and as a tool to perform the abstract idea, where the improvement identified is "access to nutritive plans that may improve and/or relieve symptoms related to urological disorder". Similarly, the specification at [0059] describes a process of using the KNN algorithm to analyze how closely out-of-sample features resemble training data, which is a process that, as explained above, can be performed by the human mind and is therefore identified as part of the abstract idea.

On pages 12-13 of the remarks, the Applicant argues: "Although the 2B analysis by the Office is moot in light of the amendments to claim 1 and the arguments above ... In the present application, claim 1 as amended recites details of a particular way to generate a disease marker classifier by... claim 1 as amended purports to improve existing systems and methods by integrating specific training features such as manipulation and classification of the disease marker training data before training the disease marker classifier as recited in amended claims 1 and 11..." Examiner respectfully disagrees. In light of the rejection and response to arguments above, the claim limitations do not improve any technological field; collecting, analyzing, and classifying disease markers and assigning the markers to a disease label are steps that, under BRI, recite the identified abstract idea.
There is no specific process beyond what is understood in classifying a disease marker to a disease label using classifier training data; the NLP, processor, and machine learning process are recited at a high level of generality, and the claims do not recite using the datasets to train a machine learning model in any way that goes beyond leveraging generic computing functionality. The claims at issue do not require any nonconventional computer, network, or other components, or even a non-conventional and non-generic arrangement of known, conventional pieces, but merely call for performance of the claimed functions on a set of generic computer components. Therefore, the Examiner has addressed the Applicant's arguments and does not find them persuasive. Hence, Examiner maintains the §101 rejections of the claims, which have been updated to address Applicant's amendments.

Applicant's arguments with respect to the 35 U.S.C. § 103 rejection appear on pages 14-17. On pages 15-17 of the remarks, the Applicant argues that the references Bradley, Kamine, Lisi, Brewer, and Avery, alone and/or in combination, fail to teach the amended limitation "identifying classifications within the disease marker training data to determine a classification of the input using feature similarity to analyze how closely out-of-sample-features resemble the disease marker training data; and training the disease marker classifier using the sorted disease marker training data and the classifications." Examiner respectfully finds that the Applicant's arguments are directed to a newly added feature. Examiner has introduced a new reference, "Nankani", which teaches the argued limitation. Therefore, Examiner finds the Applicant's argument moot.
Prior Art Cited but not Applied

The following document was found relevant to the disclosure but not applied: Hieu Tran, "A SURVEY OF MACHINE LEARNING AND DATA MINING TECHNIQUES USED IN MULTIMEDIA SYSTEM", discloses algorithms such as KNN for finding the most similar data points in the training data and making an educated guess based on their classifications; how closely out-of-sample features resemble the training set determines how to classify a given data point. The reference is relevant since it discloses analyzing training data and classification.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAAELDIN ELSHAER, whose telephone number is (571) 272-8284. The examiner can normally be reached M-Th 8:30-5:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MAMON OBEID, can be reached at Mamon.Obeid@USPTO.GOV. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ALAAELDIN M. ELSHAER/Primary Examiner, Art Unit 3687

Prosecution Timeline

Feb 01, 2021
Application Filed
Dec 19, 2024
Non-Final Rejection — §101, §103, §DP
Apr 03, 2025
Interview Requested
Apr 16, 2025
Examiner Interview Summary
Apr 16, 2025
Applicant Interview (Telephonic)
Apr 28, 2025
Response Filed
May 16, 2025
Final Rejection — §101, §103, §DP
Nov 20, 2025
Request for Continued Examination
Dec 05, 2025
Response after Non-Final Action
Dec 17, 2025
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592315
APPARATUS, SYSTEM, METHOD, AND COMPUTER-READABLE RECORDING MEDIUM FOR DISPLAYING TRANSPORT INDICATORS ON A PHYSIOLOGICAL MONITORING DEVICE
2y 5m to grant Granted Mar 31, 2026
Patent 12537083
SYSTEMS AND METHODS FOR REGULATING PROVISION OF MESSAGES WITH CONTENT FROM DISPARATE SOURCES BASED ON RISK AND FEEDBACK DATA
2y 5m to grant Granted Jan 27, 2026
Patent 12525337
METHOD AND APPARATUS FOR SELECTING MEDICAL DATA FOR ANNOTATION
2y 5m to grant Granted Jan 13, 2026
Patent 12499999
SYSTEMS AND METHODS FOR TARGETED MEDICAL DOCUMENT REVIEW
2y 5m to grant Granted Dec 16, 2025
Patent 12424338
TRANSFER LEARNING TECHNIQUES FOR USING PREDICTIVE DIAGNOSIS MACHINE LEARNING MODELS TO GENERATE TELEHEALTH VISIT RECOMMENDATION SCORES
2y 5m to grant Granted Sep 23, 2025
Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
36%
Grant Probability
67%
With Interview (+31.3%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 208 resolved cases by this examiner. Grant probability derived from career allow rate.
