DETAILED ACTION
This Action is responsive to the claims filed 11/20/2025.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of the Claims
Claims 1, 7-8, 11-12, and 18 have been amended. Claims 1-20 are pending.
Response to Arguments
The amendments to Claim 7 have overcome the Objections to informalities.
The amendments to Claim 1 have addressed some of the informalities of Claim 1, but grammatical issues remain. See Claim Objections below.
Applicant’s arguments, see Page 9, filed 11/20/2025, with respect to Claims 12-20 have been fully considered and are persuasive. The 35 U.S.C. 112(b) Rejection of Claims 12-20 has been withdrawn.
Applicant’s arguments, see Pages 9-10, filed 11/20/2025, with respect to Claims 8 and 18 have been fully considered and are persuasive. The 35 U.S.C. 112(b) Rejection of Claims 8 and 18 has been withdrawn.
Applicant's arguments, see Pages 10-12, filed 11/20/2025, regarding the 35 U.S.C. 101 Rejection of Claims 1-20 have been fully considered but they are not persuasive.
As presently drafted, the amended independent claims do not recite specific technical structure or implementation precluding the claimed steps from being practically performed within the human mind or with the aid of pen and paper. The newly amended claim limitation regarding the data type (“transformed…from sensing data, and the sensing data is time-sensitive data”) does not recite specific structure or implementation precluding a human mind from transforming time-dependent sensing data into feature sets. The recitation of a machine learning model does not directly indicate computer implementation (a generic model may be, for example, a set of equations). The Examiner therefore contends that the “determining…” and “selecting…” steps remain generally recited and interpretable as abstract idea mental process steps practically performed within the human mind or with the aid of pen and paper, and that the “predicting…” step amounts to instructions to apply the previously selected model.
An improvement to the functioning of a computer in this context, the Examiner contends, would come from an appropriate selection of a model in the “selecting…” step, a decision made based on the “determining…” step. Per MPEP 2106.05(a), the specific improvement to the functioning of a computer or other technological field cannot come from the abstract idea(s), but must instead originate from an additional element. See the updated 35 U.S.C. 101 Rejection below.
Applicant's arguments, see Pages 12-15, filed 11/20/2025, regarding the 35 U.S.C. 103 Rejection(s) of Claims 1-20 have been fully considered but they are not persuasive.
The Examiner reiterates that Sturlaugson paragraphs [0003]-[0005] indicate a need to test multiple machine learning models for varying specific problems, and contends that studying sleep and/or sleep patterns qualifies as such a specific problem. A person of ordinary skill in the art at the time of the Applicant’s filing would reasonably have incorporated the known methods taught in Sturlaugson to identify the best options for a specific use case such as sleep data prediction. See the updated 35 U.S.C. 103 Rejection(s) below.
Claim Objections
Claim 1 is objected to because of the following informalities:
“trained on using different data groups” is not grammatically correct. Omitting either “on” or “using” (similar to Claim 11) would address the issue.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea. See Alice Corporation Pty. Ltd. v. CLS Bank International, 573 U.S. 208 (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019).
Step 1:
Claims 1-10 recite a data predicting method, which falls under the statutory category of a process. Claims 11-20 recite a data predicting apparatus, which falls under the statutory category of a machine.
Step 2A – Prong 1:
Claim 1 recites an abstract idea. The limitations “determining a plurality of distances between predicting data and a plurality of data groups;” and “selecting a first machine learning model corresponding to one of the data groups having a shortest distance with the predicting data from a plurality of machine learning models;”, under the broadest reasonable interpretation, cover a mental process including an observation, evaluation, judgment, or opinion that could be performed in the human mind or with the aid of pencil and paper. These limitations therefore fall within the mental process grouping.
Determining generic distances between data sets is practically performed within the human mind or with the aid of pen and paper. Selecting a machine learning model is practically performed within the human mind or with the aid of pen and paper.
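By way of illustration only (a hypothetical sketch with made-up names; the claims do not specify a distance metric, so Euclidean distance to group centroids is assumed), the determining and selecting steps reduce to simple arithmetic of the kind that can be carried out with pen and paper:

```python
import math

def distance_to_group(point, group):
    """Euclidean distance from a point to a group's centroid."""
    centroid = [sum(dim) / len(dim) for dim in zip(*group)]
    return math.sqrt(sum((p - c) ** 2 for p, c in zip(point, centroid)))

def select_model(point, groups, models):
    """Pick the model corresponding to the data group closest to the point."""
    distances = [distance_to_group(point, g) for g in groups]
    return models[distances.index(min(distances))]

# Two toy data groups; a label stands in for each trained model.
groups = [[(0.0, 0.0), (1.0, 1.0)], [(10.0, 10.0), (11.0, 11.0)]]
models = ["model_A", "model_B"]
print(select_model((0.5, 0.5), groups, models))  # model_A
```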
Step 2A – Prong 2:
The additional elements of claim 1 do not integrate the abstract idea into a practical application. The claim recites the additional element “a plurality of data groups”, which is recognized as a generic computer component recited at a high level of generality (the Specification does not indicate this element is different from a typical processing unit). Although the element is used in executing instructions to perform the abstract idea itself, this also does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to “apply it” (see MPEP 2106.04(d)(2), indicating that mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application).
The additional elements recited in the limitations “A data predicting method”, “a first machine learning model”, and “a prediction result” are recognized as non-generic computer components; however, they are found to generally link the abstract idea to a particular technological field (see MPEP 2106.05(h)).
The additional elements “predicting a prediction result corresponding to the predicting data through the first machine learning model, wherein the machine learning models are respectively trained basing on different data groups” are found to be mere instructions to apply the abstract idea steps of determining and selecting (See MPEP 2106.05(f)).
Step 2B:
The only additional limitation on the performance of the described method is the recitation of “a plurality of data groups.” This element is insufficient to transform the judicial exception into a patent-eligible invention because it is considered insignificant extra-solution activity (a generic computer system and processing resources that link the judicial exception to a particular technological environment). The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (see MPEP 2106.05(f)).
The additional elements recited in the limitations “A data predicting method”, “a first machine learning model”, and “a prediction result” are recognized as non-generic computer components; however, they are found to generally link the abstract idea to a particular technological field (see MPEP 2106.05(h)).
The additional elements “predicting a prediction result corresponding to the predicting data through the first machine learning model, wherein the machine learning models are respectively trained basing on different data groups” are found to be mere instructions to apply the abstract idea steps of determining and selecting (See MPEP 2106.05(f)).
Taken alone or in ordered combination, these additional elements do not amount to significantly more than the above-identified abstract idea. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation.
For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claim 11.
Claim 11 recites similar limitations to claim 1, with the exception of “A data predicting apparatus, comprising: a memory, storing program code; and a processor, loading the program code for executing:” (generic computer components); therefore, both claims are similarly rejected.
Dependent Claims:
Claim 2 (claim 12) recites abstract idea mental process steps “executing a dimensionality reduction analysis on a plurality of feature sets to obtain an analysis result, wherein each of the feature sets comprises a plurality of features; normalizing the feature sets according to the analysis result to generate a plurality of normalized feature sets; generating a distance relationship of the normalized feature sets, wherein the distance relationship comprises a distance between two of the normalized feature sets; clustering the feature sets according to the distance relationship to generate the data groups, wherein each of the data groups comprises the feature set;” and instructions to apply “training the machine learning models through the data groups.”
Claim 3 (claim 13) recites refinements to the abstract idea mental process steps of claim 2 and the abstract idea mental process step “selecting a first principal component from the principal components, and normalizing the feature sets according to the first principal component.”
Claim 4 (claim 14) recites refinements to the data types of claims 2 and 3.
Claim 5 (claim 15) recites refinements to the data types of claims 2 and 3.
Claim 6 (claim 16) recites refinements to the data types of claims 2 and 3.
Claim 7 (claim 17) recites abstract idea mental process step “clustering the feature sets with the smallest distance relationship into one of the data groups according to the distance relationship through a hierarchical clustering.”
Claim 8 (claim 18) recites abstract idea mental process steps “determining a default number of the data groups; determining a cluster distance according to the default number; and clustering the feature sets according to the cluster distance.”
Claim 9 (claim 19) recites pre- or post-solution/WURC activity “transforming a plurality of sensing data into the feature sets, wherein the sensing data is time-dependent data;” (See MPEP 2106.05(g) and MPEP 2106.05(d)(II)) and instructions to apply the abstract idea “training a corresponding machine learning model basing on the feature sets or the sensing data corresponding to each of the data groups.”
Claim 10 (claim 20) recites additional elements generally linking the abstract idea to a specific data type or field of use.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3 and 9-13 are rejected under 35 U.S.C. 103 as being unpatentable over Ma et al. (Combined unsupervised-supervised machine learning for phenotyping complex diseases with its application to obstructive sleep apnea, 2021), hereinafter Ma, in view of Sturlaugson et al. (US 2016/0358099 A1), hereinafter Sturlaugson.
In regards to Claim 1: The present invention claims: “A data predicting method, the data predicting method comprising: determining a plurality of distances between predicting data and a plurality of data groups, wherein the predicting data comprises a plurality of feature sets transformed from a plurality of sensing data, and the sensing data is time-dependent data;” Ma teaches “a multimetric phenotyping framework by combining supervised and unsupervised machine learning” (Abstract) that “calculates the cluster assignment probabilities for new patients based on their 43 PSG features (left) by using the trained DPGMM model…” (Page 9, Figure 6). See Ma Table 1 (Page 4) for a list of the extracted features from sensed sleep data, including numerous time-dependent features.
“selecting a first machine learning model corresponding to one of the data groups having a shortest distance with the predicting data…” Ma teaches a random survival forest (RSF) and “To overcome this limitation of clustering analysis, we additionally performed prediction analysis which utilizes labels in the training process and thus provides the relationship between the PSG data and comorbidity outcomes. Specifically, we performed survival prediction analysis on the full patient cohort by using the RSF: 43 PSG features (Table 1) were used as the input and the cardio-neuro-metabolic comorbidity outcomes were used as the label” (Page 5).
“and predicting a prediction result corresponding to the predicting data through the first machine learning model, wherein the prediction result comprises a sleep event…” Ma teaches “To overcome this limitation of clustering analysis, we additionally performed prediction analysis which utilizes labels in the training process and thus provides the relationship between the PSG data and comorbidity outcomes. Specifically, we performed survival prediction analysis on the full patient cohort by using the RSF: 43 PSG features (Table 1) were used as the input and the cardio-neuro-metabolic comorbidity outcomes were used as the label. The RSF provides the importance of each feature (Fig. 4) in predicting comorbidity risks (fivefold cross-validation concordance index = 0.65, integrated Brier score = 0.13), where features with greater importance can be considered more relevant to the comorbidity outcomes for our patient cohort. Among 43 PSG features, 18 features accounted for 95% of the total importance in predicting comorbidity outcomes (Fig. 4 and Supplementary Table S8). They included features regarding demographic and anthropometric characteristics (age, waist-hip ratio), sleep architecture and quality (the proportion of N3 sleep, REM latency, the Pittsburgh sleep quality index), oxygen desaturation (sleep time spent below 90% oxygen saturation, average oxygen saturation, oxygen desaturation event index, lowest oxygen saturation), respiratory events (supine AHI, hypopnea index, lateral AHI, mixed apnea, REM AHI, AHI, NREM AHI, central apnea), and snoring (number of snoring episodes).” (Page 5).
Ma fails to explicitly teach “…from a plurality of machine learning models;” and “wherein the machine learning models are respectively trained using on different data groups.” However, Sturlaugson, in a similar field of endeavor of machine learning classification, teaches evaluating multiple machine learning models in “The same training dataset and evaluation dataset may be used for one or more, optionally all, of the machine learning models 32. Additionally or alternatively, each machine learning model 32 may be tested (optionally exclusively) with an independent division of the dataset (which may or may not be a unique division for each machine learning model). The experiment module 30 may be configured to train the machine learning model(s) 32 with the respective training dataset(s) (to produce a trained model) and to evaluate the machine learning model(s) 32 with the respective evaluation dataset(s).” ([0036]).
Sturlaugson highlights the difficulty of choosing an optimal machine learning model for a given dataset or output ([0004]). A cursory search also indicates that using multiple machine learning models for different datasets or data types would have been known in the art at the time of Ma’s writing. It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to combine known methods, such as Sturlaugson’s, to implement multiple models for each data cluster present in Ma.
In regards to claim 2: The present invention claims: “executing a dimensionality reduction analysis on a plurality of feature sets to obtain an analysis result, wherein each of the feature sets comprises a plurality of features;” Ma performs dimensionality reduction (Page 12). Sturlaugson also uses dimensionality reduction ([0030]).
“normalizing the feature sets according to the analysis result to generate a plurality of normalized feature sets;” While neither Ma nor Sturlaugson teaches normalization explicitly, Sturlaugson does teach “Machine learning systems 10 may include data preprocessor 24, also referred to as an initial data preprocessor and a global preprocessor. Data preprocessor 24 is configured to prepare the input dataset for processing by the experiment module 30.” The Examiner interprets this broadly given the pervasiveness of normalization in machine learning data processing. It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to apply some form of normalization in data preparation.
“generating a distance relationship of the normalized feature sets, wherein the distance relationship comprises a distance between two of the normalized feature sets; clustering the feature sets according to the distance relationship to generate the data groups, wherein each of the data groups comprises the feature set; and respectively training the machine learning models through the data groups.” Ma teaches “The DPGMM was used to cluster the patients, where each cluster was identified as a distinct phenotype. The DPGMM is a Bayesian nonparametric clustering model that is an extension of the Gaussian mixture model using the Dirichlet process prior on the mixing proportions. While clustering methods previously used for PSG-based phenotyping such as K-Means clustering require the number of clusters to be set in advance, the DPGMM infers the number of clusters that best fits the training dataset within a Bayesian statistical framework.” (Page 12, mapping the clustering of data to necessitate a distance relation between data points or feature sets). See above how Sturlaugson teaches training the models on individual parts (clusters, in the case of a combination of Ma and Sturlaugson) of a dataset.
In regards to claim 3: The present invention claims: “wherein the dimensionality reduction analysis is principal components analysis (PCA) or principal co-ordinates analysis (PCoA), the analysis result comprises proportions of a plurality of principal components, and normalizing the feature sets according to the analysis result comprises: selecting a first principal component from the principal components, and normalizing the feature sets according to the first principal component.” Ma teaches “For this, we used principal component analysis, which is a dimension reduction technique that linearly transforms a number of possibly correlated features into a small number of uncorrelated variables called principal components.” (Page 12). See above how normalizing would have been an obvious step in a combination of Ma and Sturlaugson. A cursory search also indicates that PCA and normalization are commonly used in conjunction.
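By way of illustration only (a hypothetical sketch; the claim does not limit how the proportions are computed, so an eigendecomposition of the feature covariance matrix is assumed), computing the proportion of the first principal component and normalizing along it reduces to standard linear algebra:

```python
import numpy as np

def first_pc_proportion(features):
    """Proportion of total variance explained by the first principal component."""
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending eigenvalues
    return eigvals[0] / eigvals.sum()

def normalize_by_first_pc(features):
    """Project centered features onto the first principal component and scale to unit variance."""
    centered = features - features.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc1 = eigvecs[:, np.argmax(eigvals)]  # direction of largest variance
    scores = centered @ pc1               # projection onto the first PC
    return scores / scores.std()

# Toy features with one dominant direction of variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3)) * np.array([5.0, 1.0, 0.5])
print(round(float(first_pc_proportion(X)), 2))
```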
In regards to claim 9: The present invention claims: “transforming a plurality of sensing data into the feature sets, wherein the sensing data is time-dependent data; and training a corresponding machine learning model basing on the feature sets or the sensing data corresponding to each of the data groups.” Sturlaugson teaches “Data analysis problems may be classification problems or regression problems. Data analysis problems may relate to time-dependent data, which may be called sequence data, time-series data, temporal data, and/or time-stamped data. Time-dependent data relate to the progression of an observable (also called a quantity, an attribute, a property, or a feature) in a sequence and/or through time (e.g., measured in successive periods of time).” ([0018]). See subsequent paragraphs and the above rejection of claim 1 for the models being trained on the input data.
In regards to claim 10: The present invention claims: “wherein each of the sensing data is a sensing result of a radar.” Sturlaugson teaches “For example, time-dependent data may relate to the operational health of equipment such as aircraft and their subsystems (e.g., propulsion system, flight control system, environmental control system, electrical system, etc.). Related observables may be measurements of the state of, the inputs to, and/or the outputs of electrical, optical, mechanical, hydraulic, fluidic, pneumatic, and/or aerodynamic components.” (The Examiner contends this disclosure may reasonably include radar output in an aircraft, for example).
In regards to Claims 11-13: Claims 11-13 recite similar limitations to Claims 1-3, with the exception of “A data predicting apparatus, comprising: a memory, storing program code; and a processor, loading the program code for executing:” of Claim 11 (generic computer components); therefore, both sets of claims are similarly rejected.
Claims 4-5 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Ma and Sturlaugson as applied to claims 1, 2, 11, and 12 above, and further in view of Abdi et al. (Principal Component Analysis, 2010), hereinafter Abdi.
Ma uses PCA, resulting in “Overall, eight principal components explaining up to 70% of the total data variance were used as the input features for the cluster analysis.” (Page 12). However, the combination of Ma and Sturlaugson fails to explicitly teach the limitations of claims 4-5 (and 14-15):
Claim 4: “wherein the first principal component is a principal component with highest proportion among the principal components.”
Claim 5: “wherein the first principal component is the principal component with the highest proportion or a principal component with second highest proportion among the principal components, a difference between the principal component with the highest proportion and the principal component with the second highest proportion is less than a threshold value.”
However, Abdi, in describing PCA, teaches methods of determining a number of components in Sections 5.3 and 5.3.1. The description of the scree or elbow test reasonably reads on a generic recitation of a “highest proportion” among principal components (Claim 4). The description of the Q and W values of Section 5.3.1 reasonably reads on a generic recitation of “a threshold value” when determining whether to add more components (Claim 5).
Ma utilizes PCA in their disclosure. It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing, and at the time of Ma’s writing, to use known methods from Abdi in the use of PCA in Ma’s implementation.
In regards to claims 14 and 15: Claims 14 and 15 recite similar limitations to Claims 4 and 5, with the exception of “A data predicting apparatus, comprising: a memory, storing program code; and a processor, loading the program code for executing:” of Claim 11; therefore, both sets of claims are similarly rejected.
Claims 6-8 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ma and Sturlaugson as applied to claims 1, 2, 11, and 12 above, and further in view of Lu, Xin (Information Mandala: Statistical Distance Matrix With Clustering, 2021), hereinafter Lu.
While the combination of Ma and Sturlaugson reads on a dimensionality reduction of normalized data using PCA, it fails to explicitly teach the limitations of claims 6-7 (and 16-17):
Claim 6: “wherein the distance relationship is a distance matrix, and each element in the distance matrix is a distance between the features in two of the normalized feature sets.”
Claim 7: “clustering the feature sets with the smallest distance relationship into one of the data groups according to the distance relationship through a hierarchical clustering.”
However, Lu, in a similar field of endeavor of data clustering for machine learning, teaches “In machine learning, observation features are measured in a metric space to obtain their distance function for optimization. Given similar features that are statistically sufficient as a population, a statistical distance between two probability distributions can be calculated for more precise learning. Provided the observed features are multi-valued, the statistical distance function is still efficient. However, due to its scalar output, it cannot be applied to represent detailed distances between feature elements. To resolve this problem, this paper extends the traditional statistical distance to a matrix form, called a statistical distance matrix. (Claim 6) The proposed approach performs well in object recognition tasks and clearly and intuitively represents the dissimilarities between cat and dog images in the CIFAR dataset, even when directly calculated using the image pixels. By using the hierarchical clustering of the statistical distance matrix, (Claim 7) the image pixels can be separated into several clusters that are geometrically arranged around a center like a Mandala pattern. The statistical distance matrix with clustering is called the Information Mandala.” (Abstract).
Lu highlights the shortcomings of calculating a scalar distance function on data such as the comparison of two distributions (Abstract). It would have been obvious to one of ordinary skill in the art at the time of the Applicant’s filing to use methods known from Lu in comparing feature-space distances between clusters to improve the precision of learning.
In regards to claim 8: The present invention claims: “determining a default number of the data groups; determining a cluster distance according to the default number; and clustering the feature sets according to the cluster distance.” Ma teaches “The clustering algorithms used in previous OSA phenotyping studies15–18 required the number of clusters to be manually and potentially subjectively determined. On the other hand, we used DPGMM to cluster OSA patients such that the number of clusters can be inferred from the observed data instead of predetermining it. However, the number of clusters learned from data may change depending on the concentration parameter (a larger concentration parameter more likely yields a higher number of clusters). Although the clustering results were robust to changes in the concentration parameter in our study (Supplementary Table S3), there may be situations where the clustering results may not be as robust. In such cases, the concentration parameter may also be inferred from data by placing a hyperprior on the concentration parameter41.” (Page 9, Ma’s algorithm determines cluster count based on concentration parameter, which affects cluster distance (and therefore counts), and clusters data around said concentration parameter).
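By way of illustration only (a hypothetical single-linkage sketch; Ma's DPGMM instead infers the cluster count via a concentration parameter), clustering feature sets down to a default number of data groups by repeatedly merging the closest clusters can be expressed as:

```python
def hierarchical_cluster(points, num_groups):
    """Single-linkage agglomerative clustering down to a default number of groups."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # Smallest pairwise Euclidean distance between members of two clusters.
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > num_groups:
        # Merge the pair of clusters with the smallest inter-cluster distance.
        pairs = [(dist(clusters[i], clusters[j]), i, j)
                 for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
        _, i, j = min(pairs)
        clusters[i] += clusters.pop(j)
    return clusters

# Two tight toy clusters; request a default of two data groups.
pts = [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 5.0)]
cluster_result = hierarchical_cluster(pts, 2)
print(sorted(len(g) for g in cluster_result))  # [2, 2]
```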
In regards to claims 16-20: Claims 16-20 recite similar limitations to Claims 6-10, with the exception of “A data predicting apparatus, comprising: a memory, storing program code; and a processor, loading the program code for executing:” of Claim 11; therefore, both sets of claims are similarly rejected.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN whose telephone number is (703)756-1473. The examiner can normally be reached M - F 7:30 - 4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRIFFIN TANNER BEAN/Examiner, Art Unit 2121
/Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121