Prosecution Insights
Last updated: April 19, 2026
Application No. 17/623,555

CHRONIC DISEASE PREDICTION SYSTEM BASED ON MULTI-TASK LEARNING MODEL

Non-Final OA: §101, §102, §103, §112
Filed: Dec 28, 2021
Examiner: SKOWRONEK, KARLHEINZ R
Art Unit: 1687
Tech Center: 1600 — Biotechnology & Organic Chemistry
Assignee: ZHEJIANG UNIVERSITY
OA Round: 1 (Non-Final)
Grant Probability: 22% (At Risk)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 4y 9m
Grant Probability With Interview: 57%

Examiner Intelligence

Career Allow Rate: 22% (grants only 22% of cases; 56 granted / 256 resolved; -38.1% vs TC avg)
Interview Lift: strong, +35.3% (resolved cases with interview)
Typical Timeline: 4y 9m avg prosecution; 13 currently pending
Career History: 269 total applications across all art units

Statute-Specific Performance

§101: 25.1% (-14.9% vs TC avg)
§103: 31.8% (-8.2% vs TC avg)
§102: 12.5% (-27.5% vs TC avg)
§112: 23.3% (-16.7% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 256 resolved cases

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 1-9 are pending. Claims 1-9 are rejected.

Priority

This application is a 371 of PCT Application No. PCT/CN2020/12842, filed 12 November 2020, which claims priority to Chinese Patent Application No. CN201911317824.0, filed 19 December 2019. Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 28 December 2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement has been considered by the examiner.

Drawings

The drawings received 28 December 2021 have been accepted by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 3 and further dependent claims 4-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 3 recites the limitation “the training process of the chronic disease prediction model.” There is insufficient antecedent basis for this limitation in the claim.
Additionally, it is unclear at what step the training process is occurring in the independent claim.

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 3 and further dependent claims 4-9 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Claim 3 fails to further limit the subject matter of claim 1, from which it depends, for the following reason: Claim 1 is directed to a chronic disease prediction system comprising a computer memory, a computer processor and a computer program stored in the computer memory and executable on the computer processor, wherein a trained chronic disease prediction model is stored in the computer memory.
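The use/train distinction the examiner draws here can be made concrete with a small sketch. Everything below is a hypothetical illustration (names and logic invented for this note, not the applicant's code): the claim 1 system only applies a model already stored in memory, while producing that model is a separate, prior activity of the kind claim 3 recites.

```python
# Illustrative sketch (hypothetical names) of the 112(d) distinction: the
# claim 1 system *applies* a trained model held in memory; claim 3 recites
# the separate activity of *training* one.

class ChronicDiseasePredictionSystem:
    def __init__(self, trained_model):
        # Claim 1: a trained prediction model is stored in the system's memory.
        self.model = trained_model

    def predict(self, examination_record):
        # Claim 1 scope: preprocess the record, then run the stored model.
        # No training occurs inside the claimed system.
        features = [v if v is not None else 0.0 for v in examination_record]
        return self.model(features)

def train_model(samples, labels):
    # Claim 3 scope: producing the trained model is a distinct process that
    # happens before, and outside, the claim 1 system. A toy threshold rule
    # stands in for real training.
    threshold = sum(sum(s) for s in samples) / len(samples)
    return lambda feats: int(sum(feats) > threshold)
```

The examiner's point, restated in these terms: claim 1 covers only `ChronicDiseasePredictionSystem`, so a dependent claim reciting `train_model` adds activity outside claim 1's scope rather than narrowing it.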
The limitation of “wherein a trained chronic disease prediction model is stored in the computer memory” indicates that a pretrained chronic disease prediction model is stored in the system’s memory and is only implemented in claim 1 by inputting preprocessed data into the model and using the model to perform feature extraction and prediction of chronic disease. However, claim 3 is directed to establishing and training a chronic disease prediction model. The limitations of training a chronic disease prediction model comprising: acquiring and labeling data; designing a data coding method; establishing a chronic disease prediction model; and training the chronic disease prediction model attempt to broaden the scope of claim 1, because claim 1 is only directed to the implementation of a pretrained chronic disease prediction model stored in the memory of a chronic disease prediction system. Thus, claim 3 and its dependent claims fail to further limit claim 1. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-9 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite: (a) mathematical concepts (e.g., mathematical relationships, formulas or equations, mathematical calculations); and (b) mental processes, i.e., concepts performed in the human mind (e.g., observation, evaluation, judgment, opinion).
Subject matter eligibility evaluation in accordance with MPEP 2106:

Eligibility Step 1: Claims 1-9 are directed to a system (machine or manufacture) for implementing a chronic disease prediction model. Therefore, these claims are encompassed by the categories of statutory subject matter, and thus satisfy the subject matter eligibility requirements under Step 1. [Step 1: YES]

Eligibility Step 2A: First it is determined in Prong One whether a claim recites a judicial exception, and if so, then it is determined in Prong Two whether the recited judicial exception is integrated into a practical application of that exception.

Eligibility Step 2A Prong One: In determining whether a claim is directed to a judicial exception, examination is performed that analyzes whether the claim recites a judicial exception, i.e., whether a law of nature, natural phenomenon, or abstract idea is set forth or described in the claim. Independent claim 1 recites the following steps, which fall within the mental processes and/or mathematical concepts groupings of abstract ideas: a multi-task learning model (i.e., mathematical concepts); implementing a trained chronic disease prediction model composed of a shared layer convolutional neural network and a plurality of chronic disease branch networks (i.e., mathematical concepts); preprocessing a to-be-predicted physical examination record (i.e., mathematical concepts); performing feature extraction and prediction (i.e., mathematical concepts). Dependent claims 2-9 further recite the following steps, which fall within the mental processes and/or mathematical concepts groupings of abstract ideas, as noted below.
Dependent claim 2 further recites: feature extraction is performed by using 3 and 6 convolutional cores with a size of 3*3, and a step length of the convolutional core is set as 1 (i.e., mathematical concepts); each chronic disease branch network is provided with 2 convolutional layers (i.e., mathematical concepts); feature extraction is performed on each convolutional layer by 9 and 12 convolutional layers, and step lengths of the convolutional layers are designed as 2 and 1 (i.e., mathematical concepts); each branch sequentially passes through two full-connection layers with a node number of 32 and one softmax layer to obtain a final output (i.e., mathematical concepts).

Dependent claim 3 further recites: labeling the sample data after preprocessing (i.e., mental processes and mathematical concepts); dividing the labeled sample data into a training set and a validation set by a five-fold cross validation method (i.e., mathematical concepts); designing a data coding method for structured data in physical examination data to acquire input data of the chronic disease prediction model (i.e., mental processes and mathematical concepts); using a content coding strategy to unify value types of data (i.e., mental processes and mathematical concepts); using a spatial coding strategy to unify data formats of the input type (i.e., mental processes and mathematical concepts); establishing a multi-task learning-based chronic disease prediction model (i.e., mathematical concepts); performing feature extraction and classification on the coded structured data by a deep learning method (i.e., mathematical concepts); training the chronic disease prediction model by the training set (i.e., mathematical concepts); and adjusting parameters of the model according to the prediction result of the model and the coincidence degree of the label until the model converges (i.e., mathematical concepts).
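Read literally, claim 2's recited dimensions are ambiguous ("feature extraction is performed on each convolutional layer by 9 and 12 convolutional layers" presumably means 9 and 12 filters). As a sanity check, the feature-map arithmetic the claim seems to imply can be sketched, assuming 3x3 kernels throughout, 'same' padding, and a hypothetical 16x16 input grid (the claim states no input size):

```python
# Sketch of the feature-map arithmetic implied by claim 2. The input size and
# padding are assumptions; "convolutional cores" is read as filter counts.

def conv_out(size, kernel=3, stride=1, pad=1):
    """Output size along one axis of a 2-D convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def shared_layer(size):
    # Task-shared convolutional layers: 3 and 6 kernels of size 3x3, stride 1.
    shapes = []
    for n_filters in (3, 6):
        size = conv_out(size, kernel=3, stride=1, pad=1)  # 'same' padding
        shapes.append((n_filters, size))
    return shapes

def branch(size):
    # Each disease branch: 2 conv layers with 9 and 12 filters, strides 2 and 1.
    shapes = []
    for n_filters, stride in ((9, 2), (12, 1)):
        size = conv_out(size, kernel=3, stride=stride, pad=1)
        shapes.append((n_filters, size))
    return shapes
```

Under these assumptions the shared layers keep 16x16 maps, the stride-2 branch layer halves them to 8x8, and each branch then ends in two 32-node fully connected layers plus a softmax over that disease's classes.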
Dependent claim 4 further recites: performing correlation analysis and missing value counting on various indexes in the physical examination data (i.e., mental processes and mathematical concepts); eliminating data with missing values in a single record exceeding a certain ratio from the perspective of physical examination records (i.e., mental processes and mathematical concepts); eliminating data indexes with missing values in all the records exceeding a certain ratio from the perspective of data indexes (i.e., mental processes and mathematical concepts); grouping according to ages (i.e., mental processes and mathematical concepts); performing missing value filling on missing data in the physical examination records (i.e., mental processes and mathematical concepts).

Dependent claim 5 further recites: randomly dividing the sample data into five parts without repeated sampling, the number of each part of data samples being equal or close (i.e., mental processes and mathematical concepts); selecting one part as a test set at each time and the remaining four parts as the training set for model training, and repeating five times to make five different training set and validation set groups (i.e., mental processes and mathematical concepts).

Dependent claim 6 further recites: coding text information in the physical examination record into numerical information by a label coding mode (i.e., mathematical concepts); coding text information in the physical examination record into numerical information by a one-hot coding mode to serve as input (i.e., mathematical concepts).
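The preprocessing recited in claims 4-6 can be sketched in plain Python. The thresholds, the encoding order, and all helper names below are illustrative assumptions, not the applicant's implementation:

```python
# Hedged sketch of the claim 4-6 preprocessing. Records are lists of values,
# with None marking a missing entry; thresholds are arbitrary examples.
import random

def eliminate_missing(records, row_ratio=0.5, col_ratio=0.4):
    # Claim 4: drop records whose share of missing values exceeds row_ratio,
    # then drop indexes (columns) missing in more than col_ratio of records.
    kept = [r for r in records if sum(v is None for v in r) / len(r) <= row_ratio]
    cols = [j for j in range(len(kept[0]))
            if sum(r[j] is None for r in kept) / len(kept) <= col_ratio]
    return [[r[j] for j in cols] for r in kept]

def label_encode(values):
    # Claim 6: map each distinct text value to an integer code.
    codes = {v: i for i, v in enumerate(sorted(set(values)))}
    return [codes[v] for v in values]

def one_hot(values):
    # Claim 6: alternatively, one-hot code the text values to serve as input.
    codes = sorted(set(values))
    return [[int(v == c) for c in codes] for v in values]

def five_fold_splits(n_samples, seed=0):
    # Claim 5: randomly divide indices into five near-equal parts without
    # repeated sampling; each part serves once as the validation set.
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]
    return [([j for k, f in enumerate(folds) if k != i for j in f], folds[i])
            for i in range(5)]
```

Claim 4's age grouping and missing-value filling would slot in after `eliminate_missing`; the fill strategy (e.g., per-age-group means) is not specified at this level of the claim.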
Dependent claim 7 further recites: analyzing a correlation between any two of all variables in a one-dimensional vector, wherein the physical examination record after content coding is the one-dimensional vector (i.e., mathematical concepts); sorting in a descending order according to the sum of correlations between a certain variable and all other variables (i.e., mathematical concepts); sequentially sorting all the variables after the descending sort to form a two-dimensional vector to serve as input data of a network (i.e., mathematical concepts).

Dependent claim 8 further recites: comparing the output prediction result with a label corresponding to data (i.e., mathematical concepts); applying an ACC function as loss of a current model and returning to the model (i.e., mathematical concepts); updating parameters in the model (i.e., mathematical concepts); when reaching a set ACC threshold or a specified number of iterations, stopping updating the model (i.e., mathematical concepts); training until the model converges (i.e., mathematical concepts).

Dependent claim 9 further recites: averaging loss values obtained by all the validation sets to serve as performance assessment of the model for finding an optimal parameter (i.e., mathematical concepts).

The abstract ideas recited in the claims are evaluated under the broadest reasonable interpretation (BRI) of the claim limitations when read in light of and consistent with the specification. As noted in the foregoing section, the claims are determined to contain limitations that can practically be performed in the human mind with the aid of a pencil and paper, and therefore recite judicial exceptions from the mental process grouping of abstract ideas. Additionally, the recited limitations that are identified as judicial exceptions from the mathematical concepts grouping (e.g.
“the preprocessing comprises: performing correlation analysis and missing value counting on various indexes in the physical examination data…” at para. [0021] of the Specification and FIG. 1) of abstract ideas are abstract ideas irrespective of whether or not the limitations are practical to perform in the human mind. Therefore, claims 1-9 recite an abstract idea. [Step 2A Prong One: YES]

Eligibility Step 2A Prong Two: In determining whether a claim is directed to a judicial exception, further examination is performed that analyzes if the claim recites additional elements that, when examined as a whole, integrate the judicial exception(s) into a practical application (MPEP 2106.04(d)). A claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. The claimed additional elements are analyzed to determine if the abstract idea is integrated into a practical application (MPEP 2106.04(d)(I); MPEP 2106.05(a-h)). If the claim contains no additional elements beyond the abstract idea, the claim fails to integrate the abstract idea into a practical application (MPEP 2106.04(d)(III)). The judicial exceptions identified in Eligibility Step 2A Prong One are not integrated into a practical application for the reasons noted below. Dependent claims 2 and 4-7 do not recite any elements in addition to the judicial exception, and thus are part of the judicial exception. The additional elements in independent claim 1 include: a computer memory; a computer processor; a computer program; inputting the record into the shared layer convolutional neural network of the chronic disease prediction model (i.e., inputting data); inputting the obtained feature map into each chronic disease branch network (i.e., inputting data); and obtaining a chronic disease prediction result (i.e., obtaining data).
The additional elements in dependent claims 3, 8, and 9 include: acquiring chronic disease examination related physical examination data as sample data (i.e., acquiring data) (claim 3); outputting prediction results of various chronic diseases at the same time (i.e., outputting data) (claim 3); inputting training sets (i.e., inputting data) (claim 8); outputting a prediction result (i.e., outputting data) (claim 8); inputting validation sets (i.e., inputting data) (claim 9); and obtaining a corresponding classification result (i.e., obtaining data) (claim 9). The additional elements of a computer memory, a computer processor, and a computer program in claim 1 are not an improvement to computer functionality itself, or an improvement to any other technology or technical field (see MPEP 2106.04(d)(1)). The additional elements of acquiring data in claim 3; inputting data in claims 1, 8, and 9; obtaining data in claims 1 and 9; and outputting data in claims 3 and 8 are insignificant extra-solution activities that are part of the data gathering process used in the recited judicial exceptions (see MPEP 2106.05(g)). Thus, the additionally recited elements merely invoke a computer as a tool, and/or amount to insignificant extra-solution data gathering activity. As such, when all limitations in claims 1-9 have been considered as a whole, the claims are deemed to not recite any additional elements that would integrate a judicial exception into a practical application, and therefore claims 1-9 are directed to an abstract idea (MPEP 2106.04(d)). [Step 2A Prong Two: NO]

Eligibility Step 2B: Because the claims recite an abstract idea, and do not integrate that abstract idea into a practical application, the claims are probed for a specific inventive concept. The judicial exception alone cannot provide that inventive concept or practical application (MPEP 2106.05).
Identifying whether the additional elements beyond the abstract idea amount to such an inventive concept requires considering the additional elements individually and in combination to determine if they amount to significantly more than the judicial exception (MPEP 2106.05A i-vi). The claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception(s) for the reasons noted below. Dependent claims 2 and 4-7 do not recite any elements in addition to the judicial exception(s). The additional elements recited in independent claim 1 and dependent claims 3, 8, and 9 are identified above, and carried over from Step 2A Prong Two along with their conclusions for analysis at Step 2B. Any additional element or combination of elements that was considered to be insignificant extra-solution activity at Step 2A Prong Two was re-evaluated at Step 2B, because if such re-evaluation finds that the element is unconventional or otherwise more than what is well-understood, routine, conventional activity in the field, this finding may indicate that the additional element is no longer considered to be insignificant. All additional elements and combinations of elements were also evaluated to determine whether they are other than well-understood, routine, conventional activity in the field, or simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, per MPEP 2106.05(d). The additional elements of a computer memory, a computer processor, and a computer program in claim 1; acquiring data in claim 3; inputting data in claims 1, 8, and 9; obtaining data in claims 1 and 9; and outputting data in claims 3 and 8 are conventional (see MPEP 2106.05(b) and 2106.05(d)(II) regarding conventionality of computer components and computer processes).
Therefore, when taken alone, all additional elements in claims 1, 3, 8, and 9 do not amount to significantly more than the above-identified judicial exception(s). Even when evaluated as a combination, the additional elements fail to transform the exception(s) into a patent-eligible application of that exception. Thus, claims 1-9 are deemed to not contribute an inventive concept, i.e., amount to significantly more than the judicial exception(s) (MPEP 2106.05(II)). [Step 2B: NO]

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3 and 5-6 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Alawad et al. (2018 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Las Vegas, NV, USA, 2018, pp. 218-221).

Regarding independent claim 1, Alawad et al. teaches a chronic disease prediction system based on a multi-task learning model that inherently comprises a computer memory, a computer processor and a computer program, and the chronic disease prediction model is composed of a shared layer convolutional neural network and a plurality of chronic disease branch networks (Alawad et al.
Title: Coarse-to-Fine Multi-Task Training of Convolutional Neural Networks for Automated Information Extraction from Cancer Pathology Reports; Abstract: investigated an automated approach using a coarse-to-fine training of convolutional neural networks (CNNs) for extracting the primary site, histological grade and laterality from unstructured cancer pathology text reports; the multi-task learning (MTL) with hard parameter sharing approach is used to train a multi-task MT-CNN model for all the tasks. Then, the TM-CNN model parameters are used to initialize a CNN model for each task to be fine trained individually using its corresponding dataset); and implements the following steps: preprocessing a to-be-predicted physical examination record and then inputting the record into the shared layer convolutional neural network of the chronic disease prediction model for feature extraction to obtain a feature map (Alawad et al. Section II. 4. Pre-Processing; Section III. A. Convolutional Neural Networks: Convolution layer generates feature maps which are the representation of every context window over the document matrix… Max pooling layer captures the most important features by taking the max value from each feature map as the extracted feature from a particular filter; Section III. B. Coarse-to-Fine Training of CNNs: In the first stage, Alawad et al. use the MTL scheme with hard parameter sharing approach to train a MT-CNN model to learn the shared features. The network architecture consists of a shared convolutional layer, as the one presented in the previous subsection); and inputting the obtained feature map into each chronic disease branch network and performing feature extraction and prediction respectively to obtain a chronic disease prediction result (Alawad et al. Section III. B. Coarse-to-Fine Training of CNNs: The diagram of the multi-task CNN used in this paper is shown in Figure 2. 
In the second stage, the MT-CNN parameters, obtained from the first training stage, are used to initialize a CNN model for each individual task. Each CNN model has the same shared layer structure, the box shown in Figure 2, and keeps only its corresponding fully connected layer. Then, the complete set of cases for each task is used for task-wise fine training its corresponding CNN network. At the end, Alawad et al. will have three different CNN models, one for each task).

Regarding dependent claim 2, Alawad et al. teaches a structure of the shared layer convolutional neural network is as follows: firstly, through a multi-layer task shared convolutional layer, feature extraction is performed by using 3 and 6 convolutional cores with a size of 3*3, and a step length of the convolutional core is set as 1 (Alawad et al. Fig. 2. Architecture diagram of a multi-task CNN); each chronic disease branch network is provided with 2 convolutional layers respectively, feature extraction is performed on each convolutional layer by 9 and 12 convolutional layers respectively, and step lengths of the convolutional layers are designed as 2 and 1 respectively; and finally, each branch sequentially passes through two full-connection layers with a node number of 32 and one softmax layer to obtain a final output (Alawad et al. Fig. 2. Architecture diagram of a multi-task CNN; Section III. A. Convolutional Neural Networks: The output is connected to a soft-max fully connected layer to produce a rank for each label… The window sizes l of the convolutional filters are 3, 4, and 5 with 100 feature maps each).

Regarding dependent claim 3, Alawad et al.
teaches the training process of the chronic disease prediction model is as follows: acquiring chronic disease examination related physical examination data as sample data, labeling the sample data after preprocessing, and dividing the labeled sample data into a training set and a validation set by a five-fold cross validation method (Alawad et al. Section II. Cancer Pathology Reports: used de-identified pathology reports of breast and lung cancers, provided from five different SEER cancer registries… Cancer registry experts manually annotated all pathology reports based on standard guidelines and coding instructions used in cancer surveillance; Section IV. Performance Evaluation and Experimental Results: implemented a balanced tenfold cross validation scheme by randomly partitioning the dataset into ten parts with near balanced label distributions… For each fold Alawad et al. used one partition once for testing and combined the rest for our training set… Alawad et al. evaluated model performance by aggregating the predicted responses from each test fold); designing a data coding method for structured data in physical examination data to acquire input data of the chronic disease prediction data, the data coding method comprising a content coding strategy and a spatial coding strategy, the content coding strategy being used to unify value types of data, and the spatial coding strategy being used to unify data formats the input type (Alawad et al. Section II. 4. Pre-processing: learned word embeddings; Fig. 2. Architecture diagram of a multi-task CNN); establishing a learning-based chronic disease prediction model, performing feature extraction and classification on the coded structured data by a deep learning method, and outputting prediction results of various chronic diseases at the same time (Alawad et al. Section III. A. 
Convolutional Neural Networks: Convolution layer generates feature maps which are the representation of every context window over the document matrix… Max pooling layer captures the most important features by taking the max value from each feature map as the extracted feature from a particular filter; Section III. B. Coarse-to-Fine Training of CNNs: In the first stage, Alawad et al. use the MTL scheme with hard parameter sharing approach to train a MT-CNN model to learn the shared features. The network architecture consists of a shared convolutional layer, as the one presented in the previous subsection); and training the chronic prediction model by the training set, and adjusting parameters of the model according to the prediction result of the model and the coincidence degree of the label until the model converges (Alawad et al. Section III. B. Coarse-to-Fine Training of CNNs: The diagram of the multi-task CNN used in this paper is shown in Figure 2. In the second stage, the MT-CNN parameters, obtained from the first training stage, are used to initialize a CNN model for each individual task. Each CNN model has the same shared layer structure, the box shown in Figure 2, and keeps only its corresponding fully connected layer. Then, the complete set of cases for each task is used for task-wise fine training its corresponding CNN network. At the end, Alawad et al. will have three different CNN models, one for each task). Regarding dependent claim 5, Alawad et al. teaches cross-validation as follows: randomly dividing the sample data into at least five parts without repeated sampling, the number of each part of data samples being equal or close; and selecting one part as a test set at each time and the remaining parts as the training set for model training, and repeating to make different training set and validation set groups (Alawad et al. Section IV. 
Performance Evaluation and Experimental Results: implemented a balanced tenfold cross validation scheme by randomly partitioning the dataset into ten parts with near balanced label distributions… For each fold Alawad et al. used one partition once for testing and combined the rest for our training set… Alawad et al. evaluated model performance by aggregating the predicted responses from each test fold).

Regarding dependent claim 6, Alawad et al. teaches coding text information in the physical examination record into numerical information by a label coding mode; and coding text information in the physical examination record into numerical information by a one-hot coding mode to serve as input (Alawad et al. Section II. 4. Pre-processing: learned word embeddings; Fig. 2. Architecture diagram of a multi-task CNN).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Alawad et al., as applied to claims 1-3 and 5-6 under 35 U.S.C. 102(a)(1) above, and further in view of Wang et al. (IEEE Access, 2019 (Published 2019 November 29), Vol. 7, pp. 178392-178400) and Usama et al. (IEEE Access, 2018, Vol. 6, pp. 67927-67939).

Alawad et al. teaches a chronic disease prediction system based on a multi-task learning model (Alawad et al. Sections II, III, and IV) (see above). Alawad et al. does not explicitly teach the specific pre-processing steps and training steps as recited in claims 4 and 7-9.

Regarding dependent claim 4, Wang et al.
teaches performing correlation analysis and missing value counting on various indexes in the physical examination data, eliminating data with missing values in a single record exceeding a certain ratio from the perspective of physical examination records, eliminating data indexes with missing values in all the records exceeding a certain ratio from the perspective of data indexes, grouping according to ages, and performing missing value filling on missing data in the physical examination records (Wang et al. Section III. A. Dataset and Preprocessing: First measurement extraction, Handling missing value, and Normalization).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the chronic disease prediction system based on the multi-task learning model of Alawad et al. by incorporating the preprocessing methods of first measurement extraction, handling missing value, and normalization of Wang et al. One of ordinary skill in the art would have been motivated to update the preprocessing steps of Alawad et al. with the preprocessing steps of Wang et al. because Wang et al. teaches using these preprocessing steps with a larger dataset, and Alawad et al. teaches "Future direction of this work includes mapping the proposed model to a bigger dataset…", which would require appropriately improved preprocessing steps. This modification would have had a reasonable expectation of success because both Alawad et al. and Wang et al. are directed to applying multi-task learning models to electronic health records for chronic disease prediction.

Regarding dependent claim 7, Usama et al. teaches the specific process of the spatial coding strategy is as follows: analyzing a correlation between any two of all variables in a one-dimensional vector, wherein the physical examination record after content coding is the one-dimensional vector (Usama et al. TABLE 2.
Records features values from structured data with Doctor's discussion); sorting in a descending order according to the sum of correlations between a certain variable and all other variables (Usama et al. TABLE 2. Records features values from structured data with Doctor's discussion); and sequentially sorting all the variables after the descending sort to form a two-dimensional vector to serve as input data of a network (Usama et al. TABLE 2. Records features values from structured data with Doctor's discussion; Section V. B. 1. Input layer of RCNN: The input layer of RCNN will receive text data as X (x1, x2, x3, …, xn), where x1, x2, x3, …, xn are the n words with dimension space R^m. Therefore, the dimension space of the text data will be R^(m*n)).

Regarding dependent claim 8, Usama et al. teaches the specific process of training the chronic disease prediction model by the training set is as follows: inputting one group of training sets, and outputting a prediction result respectively through feature extraction of a shared layer with a potential correlation and feature extraction for a single chronic disease (Usama et al. FIGURE 5: Recurrent convolution neural network algorithm for disease risk assessment); comparing the output prediction result with a label corresponding to data, applying an ACC function as loss of a current model and returning to the model, and updating parameters in the model (Usama et al. Section V. D. Overall Architecture Details: Usama et al. first initiated their training process with some random values of parameters… then used SGD algorithm to train the parameters and update their values; Section VI. A. Window Size Effect: To run the algorithms, Usama et al. need to confirm first the window size for convolution. Window size can affect the performance of RCNN algorithms.
Thus, Usama et al. obtained the window sizes of 1, 3, 5 and 9 in the experiment and evaluated the performance measure); when reaching a set ACC threshold or a specified number of iterations, stopping updating the model and outputting a result (Usama et al. FIGURE 6. Trend of training error rate with iteration numbers on RCNN-DRAM and CNN-MDRP); and sequentially inputting the remaining training sets by the above method for training until the model converges (Usama et al. FIGURE 6. Trend of training error rate with iteration numbers on RCNN-DRAM and CNN-MDRP).

Regarding dependent claim 9, Usama et al. teaches the training process further comprises: after each group of training sets are trained, inputting validation sets in the group into the model to obtain a corresponding classification result; and averaging loss values obtained by all the validation sets to serve as performance assessment of the model for finding an optimal parameter (Usama et al. FIGURE 7. Trend of test accuracy with iteration numbers on RCNN-DRAM and CNN-MDRP; Section VI. A. Window Size Effect: Figure 8 shows that with window size 5, the RCNN algorithms perform best among all window sizes with 96.02% accuracy, 94.45% precision, 98.08% recall, and 96.23% F1-measure; Section VI. B. Training Error and Test Accuracy: with increasing number of iterations, the test accuracy increases gradually and the training error decreases).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the chronic disease prediction system based on the multi-task learning model of Alawad et al. by incorporating the spatial coding strategy and training steps of Usama et al. One of ordinary skill in the art would have been motivated to combine the system of Alawad et al. with Usama et al. because Usama et al. teaches improved prediction accuracy of the proposed model that incorporated the spatial coding strategy and training steps.
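For illustration only, the training procedure recited in claims 8-9 (a shared layer feeding per-disease outputs, loss fed back to update parameters, stopping at a set accuracy threshold or a specified number of iterations, and averaging validation losses as the performance assessment) could be sketched as below. The architecture, synthetic data, and hyperparameters are hypothetical and are not drawn from the cited references.

```python
# Minimal multi-task training sketch: shared tanh layer + one sigmoid head
# per disease task, SGD updates from the fed-back loss gradient, stopping
# at an accuracy threshold or iteration cap, then averaging validation loss.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))               # coded examination records
true_w = rng.normal(size=(8, 2))
Y = (X @ true_w > 0).astype(float)          # labels for two disease tasks

X_train, Y_train = X[:160], Y[:160]
X_val, Y_val = X[160:], Y[160:]

W_shared = rng.normal(scale=0.1, size=(8, 4))   # shared feature layer
W_heads = rng.normal(scale=0.1, size=(2, 4))    # one weight row per task

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(A, Ws, Wh):
    H = np.tanh(A @ Ws)                     # shared feature extraction
    return sigmoid(H @ Wh.T), H             # per-task predictions

lr, acc_threshold, max_iters = 0.5, 0.95, 500
for it in range(max_iters):
    P, H = forward(X_train, W_shared, W_heads)
    err = P - Y_train                       # loss gradient fed back
    grad_heads = err.T @ H / len(X_train)
    dH = (err @ W_heads) * (1.0 - H**2)     # backprop through tanh
    grad_shared = X_train.T @ dH / len(X_train)
    W_heads -= lr * grad_heads
    W_shared -= lr * grad_shared
    acc = float(((P > 0.5) == Y_train).mean())
    if acc >= acc_threshold:                # stop at set accuracy threshold
        break

# Average validation loss across tasks and records as the assessment score.
P_val, _ = forward(X_val, W_shared, W_heads)
val_loss = float(-np.mean(Y_val * np.log(P_val + 1e-9)
                          + (1 - Y_val) * np.log(1 - P_val + 1e-9)))
```

The sketch uses a plain cross-entropy loss; the claim's "ACC function as loss" is reflected only in the accuracy-based stopping criterion, since accuracy itself is not differentiable.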
This modification would have had a reasonable expectation of success because both Alawad et al. and Usama et al. are directed to applying multi-task learning models to electronic health records for chronic disease prediction.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LEAH E SEXTON whose telephone number is (571) 272-3057. The examiner can normally be reached Monday - Friday, 8 am - 5 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Karlheinz Skowronek, can be reached at 571-272-9047. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/L.E.S./
Examiner, Art Unit 1687

/Karlheinz R. Skowronek/
Supervisory Patent Examiner, Art Unit 1687

Prosecution Timeline

Dec 28, 2021
Application Filed
Sep 12, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 9609888
NUTRITIONAL COMPOSITIONS CONTAINING SYNERGISTIC COMBINATION AND USES THEREOF
2y 5m to grant Granted Apr 04, 2017
Patent 9498557
CROSSLINKING METHODS AND APPLICATIONS THEREOF
2y 5m to grant Granted Nov 22, 2016
Patent 9486003
HYPOCALORIC, HIGH PROTEIN NUTRITIONAL COMPOSITIONS AND METHODS OF USING SAME
2y 5m to grant Granted Nov 08, 2016
Patent 9322833
Ultra-Small ApoB-Containing Particles and Methods of Use Thereof
2y 5m to grant Granted Apr 26, 2016
Patent 8778889
ANTIMICROBIAL DECAPEPTIDE ORAL HYGIENE TREATMENT
2y 5m to grant Granted Jul 15, 2014
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
22%
Grant Probability
57%
With Interview (+35.3%)
4y 9m
Median Time to Grant
Low
PTA Risk
Based on 256 resolved cases by this examiner. Grant probability derived from career allow rate.
