Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. This Office Action is sent in response to Applicant’s Communication received on 02/25/2026 for application number 18/153,805.
Response to Amendments
3. The Amendment filed 02/25/2026 has been entered. Claims 1, 7, 10-12, and 16 have been amended. Claim 17 has been added. Claims 1-17 remain pending in the application.
4. Applicant’s amendment to claim 5 has been fully considered and is persuasive. The objection to this claim is respectfully withdrawn.
Response to Arguments
Applicant argues that paragraphs [0085] and [0090] of Harper disclose that "at test" dropout is applied to capture uncertainty in the model predictions, and that at other times (non-test times), such as for regression or classification, different results are generated, as described by Harper at paragraph [0089] and with reference to FIG. 5. Applicant therefore contends that Harper does not disclose an approach of obtaining a confidence value and at least one regression value or classification from the same aggregated evaluation results, for which one or more model parameters are reduced by an amount or set to zero, as is required by amended claim 1, and that even if Sturlaugson is modified based upon the teaching of Harper, the modification does not arrive at the invention of claim 1. Accordingly, Applicant asserts that the Office Action has not made a prima facie case of obviousness with respect to claim 1 and that the rejection thereof should be withdrawn.
Examiner respectfully disagrees and notes that Harper teaches that, for a single input sample, dropout is applied at test time and N stochastic forward passes are performed to generate a distribution of model outputs. Harper further teaches that this distribution is used to characterize model uncertainty/confidence, and also that a regression output can be generated from the model and a classification output can be generated by applying decision boundaries and a confidence to the same output distribution. Thus, Harper teaches both (i) a first model output corresponding to a confidence value, and (ii) a second model output corresponding to at least one regression value or a classification result. Harper also teaches that dropout is a process by which individual nodes within the network are randomly removed, and that dropout is applied at test time during the N stochastic forward passes. It is noted that such removal reasonably teaches the claimed alternative of one or more model parameters being set to 0 for a given evaluation. Further, Sturlaugson independently teaches aggregating evaluation results and discloses a confidence-related result, such as a confidence interval, as part of the performance result or performance comparison statistics. Therefore, the combination remains proper: Sturlaugson teaches evaluating trained models and aggregating evaluation results, while Harper teaches using repeated evaluations with dropout at test time to derive uncertainty/confidence and regression/classification outputs from the same evaluation framework. One of ordinary skill in the art would have found it obvious to incorporate Harper’s uncertainty technique into Sturlaugson’s model evaluation approach to improve interpretability and confidence assessment of model outputs.
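For clarity of the record, the Monte-Carlo dropout procedure discussed above may be sketched as follows. This is a hypothetical illustration only (the toy linear model, dropout probability, and decision boundary are assumed and do not reproduce Harper’s implementation); it shows how a confidence value and a regression or classification output are both derived from the same distribution of N stochastic forward passes in which randomly selected model parameters are set to 0:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_pass_with_dropout(x, weights, drop_prob=0.2):
    """One stochastic forward pass: randomly set a fraction of the
    model parameters (weights) to 0, then evaluate the model."""
    mask = rng.random(weights.shape) >= drop_prob  # keep ~80% of parameters
    return float(x @ (weights * mask))             # dropped parameters are set to 0

# Hypothetical trained parameters and a single input sample.
weights = np.array([0.5, -1.2, 0.8, 2.0])
x = np.array([1.0, 0.3, -0.5, 0.7])

# N stochastic forward passes yield a distribution of model outputs.
N = 1000
outputs = np.array([forward_pass_with_dropout(x, weights) for _ in range(N)])

# Both outputs are derived from the same aggregated distribution:
regression_value = outputs.mean()           # (ii) regression output
confidence_value = outputs.std()            # (i) uncertainty/confidence
classification = int(regression_value > 0)  # classification via an assumed decision boundary
```

As sketched, the confidence value and the regression/classification result come from the same set of repeated evaluations, consistent with the mapping above.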
Claim Interpretation - 35 USC § 112(f)
5. The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
6. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a controller configured to carry out” in claim 12.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections – 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 1-2, 4, and 6-16 are rejected under 35 U.S.C. 103 as being unpatentable over Sturlaugson et al. (U.S. Patent Application Pub. No. US 20160358099 A1) in view of Harper et al. (U.S. Patent Application Pub. No. US 20210015417 A1).
Claim 1: Sturlaugson teaches a method of evaluating a trained data-based evaluation model that determines a model output for controlling, regulating, operating, or monitoring a technical system with periodically determined input data sets, the method comprising:
recording input data sets (i.e. The method of paragraph A1, wherein the input dataset is at least one of a time-dependent dataset, a time-series dataset, a time-stamped dataset, a sequential dataset, and a temporal dataset; para. [0069]) for a predetermined number of time-sequential scanning steps (i.e. Data analysis problems may relate to time-dependent data, which may be called sequence data, time-series data, temporal data, and/or time-stamped data. Time-dependent data relate to the progression of an observable (also called a quantity, an attribute, a property, or a feature) in a sequence and/or through time (e.g., measured in successive periods of time); para. [0018]);
aggregating the input data sets into an input data package (i.e. Machine learning systems 10 may include data preprocessor 24, also referred to as an initial data preprocessor and a global preprocessor. Data preprocessor 24 is configured to prepare the input dataset for processing by the experiment module 30. The input to the data preprocessor 24 includes the input dataset provided by the data input module 20. Data preprocessor 24 may apply one or more preprocessing algorithms to the input dataset. For example, the data preprocessor 24 may be configured to discretize, to apply independent component analysis to, to apply principal component analysis to, to eliminate missing data from (e.g., to remove records and/or to estimate data), to select features from, and/or to extract features from the dataset; para. [0027, 0032]) of validated input data sets (i.e. Training and evaluating 106 includes using the same input dataset, as received by the receiving 102 and/or modified by the preprocessing 112, i.e., the input feature dataset, to produce a performance result for each machine learning model; para. [0050]);
determining an evaluation result for each of the input data sets in the input data package using the trained data-based evaluation model (i.e. Evaluating 124 includes evaluating each trained model with the corresponding evaluation dataset, e.g., as discussed with respect to experiment module 30. The trained model is applied to the evaluation dataset to produce a result (a prediction) for each of the input values of the evaluation dataset and the results are compared to the known output values of the evaluation dataset. The comparison may be referred to as an evaluation result and/or a performance result; para. [0056]), wherein, upon each evaluation, one or more model parameters of the trained data-based evaluation model (i.e. the selection of machine learning models 32 received by the data input module 20 may include specific machine learning algorithms and a range and/or a set of one or more associated parameters to test. The experiment module 30 may apply these range(s) and/or set(s) to identify a group of machine learning models 32. That is, the experiment module 30 may generate a machine learning model 32 for each unique combination of parameters specified by the selection; para. [0034]); and
aggregating the evaluation results to obtain model output (i.e. Training and evaluating 106 may include validation and/or cross validation (multiple rounds of validation), e.g., leave-one-out cross validation, and/or k-fold cross validation, as discussed with respect to experiment module 30. Training and evaluating 106 may include repeatedly dividing 120 the dataset to perform multiple rounds of training 122 and evaluation 124 (i.e., rounds of validation) and combining 126 the (evaluation) results of the multiple rounds of training 122 and evaluation 124 to produce the performance result for each machine learning model. Combining 126 the evaluation results to produce the performance result may be by averaging the evaluation results, accumulating the evaluation results, and/or other statistical combinations of the evaluation results; para. [0057, 0059]) corresponding to a confidence value (i.e. The performance result for each machine learning model 32 and/or the individual evaluation results for each round of validation may include an indicator, value, and/or result related to a correlation coefficient, a mean square error, a confidence interval, an accuracy; para. [0042, 0045]).
Sturlaugson does not explicitly teach one or more model parameters of the model are reduced by an amount or set to 0; and to obtain (ii) a second model output corresponding to at least one regression value or a classification result.
However, Harper teaches determining an evaluation result for each of the input data sets in the input data package using the trained data-based evaluation model (i.e. In order to capture uncertainty in the model predictions, dropout is applied at test time. For a single input sample, stochastic forward propagation is run N times to generate a distribution of model outputs; para. [0090]), wherein, upon each evaluation, one or more model parameters of the trained data-based evaluation model are reduced by an amount or set to 0 (i.e. Dropout is a process by which individual nodes within the network are randomly removed during training according to a specified probability. By implementing dropout at test and performing N stochastic forward passes through the network, a posterior distribution can be approximated over model predictions (approaching the true distribution as N→∞). In the embodiment, the Monte-Carlo dropout technique is implemented as an efficient way to describe uncertainty over emotional state predictions; para. [0085]); and aggregating the evaluation results to obtain (i) a first model output corresponding to a confidence value (i.e. By implementing dropout at test and performing N stochastic forward passes through the network, a posterior distribution can be approximated over model predictions … For a single input sample, stochastic forward propagation is run N times to generate a distribution of model outputs … the predicted emotional state is output with a confidence level by the model; para. [0085, 0090, 0098, 0100]), and (ii) a second model output corresponding to at least one regression value or a classification result (i.e. The output layer 450 then outputs the final result 451 for the input 410, dependent on whether the output layer 450 is designed for regression or classification. If the output layer 450 is designed for regression, the final result 451 is a regression output of continuous emotional valence and/or arousal. 
If the output layer 450 is designed for classification, the final result 451 is a classification output, i.e. a discrete emotional state; para. [0088, 0090, 0091]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sturlaugson to include the feature of Harper. One would have been motivated to make this modification because it reduces computational complexity and improves efficiency of the evaluation process, and improves interpretability and confidence assessment of model outputs.
Claim 2: Sturlaugson and Harper teach the method according to claim 1. Sturlaugson further teaches wherein the input data sets comprise one or more sensor signals (i.e. observable values may be selected, extracted, and/or processed only if within a predetermined range (e.g., outlier data may be excluded) and/or if other observable values are within a predetermined range (e.g., one sensor value may qualify the acceptance of another sensor value); para. [0032]).
Claim 4: Sturlaugson and Harper teach the method according to claim 1. Sturlaugson does not explicitly teach randomly selecting the one or more model parameters of the model that are reduced by the amount or set to 0.
However, Harper further teaches randomly selecting the one or more model parameters of the trained data-based evaluation model that are reduced by the amount or set to 0 (i.e. Dropout is a process by which individual nodes within the network are randomly removed during training according to a specified probability. By implementing dropout at test and performing N stochastic forward passes through the network, a posterior distribution can be approximated over model predictions (approaching the true distribution as N→∞). In the embodiment, the Monte-Carlo dropout technique is implemented as an efficient way to describe uncertainty over emotional state predictions; para. [0085]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sturlaugson to include the feature of Harper. One would have been motivated to make this modification because it reduces computational complexity and improves efficiency of the evaluation process.
Claim 6: Sturlaugson and Harper teach the method according to claim 1. Sturlaugson further teaches the trained data-based evaluation model is trained based on training datasets corresponding to labelled input data sets (i.e. Evaluating 124 includes evaluating each trained model with the corresponding evaluation dataset, e.g., as discussed with respect to experiment module 30. The trained model is applied to the evaluation dataset to produce a result (a prediction) for each of the input values of the evaluation dataset and the results are compared to the known output values of the evaluation dataset. The comparison may be referred to as an evaluation result and/or a performance result; para. [0056]).
Sturlaugson does not explicitly teach randomly selected model parameters are reduced by the amount or set to 0.
However, Harper further teaches with a portion or with each iteration, randomly selected model parameters are reduced by the amount or set to 0 (i.e. Dropout is a process by which individual nodes within the network are randomly removed during training according to a specified probability. By implementing dropout at test and performing N stochastic forward passes through the network, a posterior distribution can be approximated over model predictions (approaching the true distribution as N→∞). In the embodiment, the Monte-Carlo dropout technique is implemented as an efficient way to describe uncertainty over emotional state predictions; para. [0085]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sturlaugson to include the feature of Harper. One would have been motivated to make this modification because it reduces computational complexity and improves efficiency of the evaluation process.
Claim 7: Sturlaugson and Harper teach the method according to claim 1. Sturlaugson further teaches wherein the confidence value is used in a controller, a regulation, an operation, and/or a monitoring of the technical system (i.e. time-dependent data may relate to the operational health of equipment such as aircraft and their subsystems (e.g., propulsion system, flight control system, environmental control system, electrical system, etc.). Related observables may be measurements of the state of, the inputs to, and/or the outputs of electrical, optical, mechanical, hydraulic, fluidic, pneumatic, and/or aerodynamic components; para. [0018]).
However, Harper also further teaches wherein the confidence value is used in a controller, a regulation, an operation, and/or a monitoring of the technical system (i.e. While useful from a theoretical perspective, Equation 1 is infeasible to compute. Instead, the posterior distributions can be approximated using a Monte-Carlo dropout method (alternatively embodiments can use methods including Monte Carlo or Laplace approximation methods, or stochastic gradient Langevin diffusion, or expectation propagation or variational methods). Dropout is a process by which individual nodes within the network are randomly removed during training according to a specified probability. By implementing dropout at test and performing N stochastic forward passes through the network, a posterior distribution can be approximated over model predictions (approaching the true distribution as N→∞). In the embodiment, the Monte-Carlo dropout technique is implemented as an efficient way to describe uncertainty over emotional state predictions; para. [0085, 0090]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sturlaugson to include the feature of Harper. One would have been motivated to make this modification because it reduces computational complexity and improves efficiency of the evaluation process.
Claim 8: Sturlaugson and Harper teach the method according to claim 1. Sturlaugson further teaches wherein: aggregating the evaluation results is performed with averaging or with a median formation (i.e. Combining 126 the evaluation results to produce the performance result may be by averaging the evaluation results, accumulating the evaluation results, and/or other statistical combinations of the evaluation results; para. [0057]).
Claim 9: Sturlaugson and Harper teach the method according to claim 1. Sturlaugson further teaches wherein the input data sets of the input data package are validated when it is determined that two time-adjacent input data sets have a clearance (i.e. wherein the input dataset includes a series of values of an observable measured in successive periods of time; para. [0070]) that is not greater than a predetermined distance threshold and/or when it is determined that two input data sets have a clearance (i.e. the statistic may be a time average of a sensor value and/or a difference between two sensor values (e.g., measured at different times and/or different locations). More generally, statistics may include, and/or may be, a minimum, a maximum, an average, a variance, a deviation, a cumulative value, a rate of change, an average rate of change, a sum, a difference, a ratio, a product, and/or a correlation. Statistics may include, and/or may be, a total number of data points, a maximum number of sequential data points, a minimum number of sequential data points, an average number of sequential data points, an aggregate time, a maximum time, a minimum time, and/or an average time that the input feature data values are above, below, or about equal to a threshold value; para. [0031]) that is not greater than a predetermined distance threshold (i.e. observable values may be selected, extracted, and/or processed only if within a predetermined range (e.g., outlier data may be excluded) and/or if other observable values are within a predetermined range (e.g., one sensor value may qualify the acceptance of another sensor value); para. [0032]).
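The validation condition mapped above for claim 9 — accepting the input data sets of a package only when time-adjacent sets have a clearance not greater than a predetermined distance threshold — may be illustrated by the following sketch (the function name, timestamps, and threshold values are hypothetical and used for illustration only):

```python
def validate_package(timestamps, threshold):
    """Return True only if every pair of time-adjacent input data sets
    has a clearance (gap) no greater than the predetermined threshold."""
    return all(
        (t2 - t1) <= threshold
        for t1, t2 in zip(timestamps, timestamps[1:])
    )

# Sampling times of five sequential input data sets (assumed values);
# the adjacent gaps are 1.0, 1.1, 0.9, and 1.0.
times = [0.0, 1.0, 2.1, 3.0, 4.0]
print(validate_package(times, threshold=1.5))  # True: all gaps <= 1.5
print(validate_package(times, threshold=1.0))  # False: the gap of 1.1 exceeds 1.0
```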
Claim 10: Sturlaugson and Harper teach the method according to claim 1. Sturlaugson further teaches comprising: using the second model output to control and/or monitor the technical system (i.e. time-dependent data may relate to the operational health of equipment such as aircraft and their subsystems (e.g., propulsion system, flight control system, environmental control system, electrical system, etc.). Related observables may be measurements of the state of, the inputs to, and/or the outputs of electrical, optical, mechanical, hydraulic, fluidic, pneumatic, and/or aerodynamic components; para. [0018, 0056]).
Claims 11-13 are similar in scope to Claim 1 and are rejected under a similar rationale.
Claim 14: Sturlaugson and Harper teach the method according to claim 2. Sturlaugson further teaches wherein the one or more sensor signals are configured as one or more state variables (i.e. the dataset includes data for one or more observables (e.g., a voltage measurement and a temperature measurement); para. [0020]), one or more sensor signal time series, and/or image data (i.e. data analysis problems may relate to time-dependent data, which may be called sequence data, time-series data, temporal data, and/or time-stamped data. Time-dependent data relate to the progression of an observable (also called a quantity, an attribute, a property, or a feature) in a sequence and/or through time (e.g., measured in successive periods of time); para. [0018]).
Claim 15: Sturlaugson and Harper teach the method according to claim 7. Sturlaugson further teaches wherein the confidence value is indicated depending on the evaluation results (i.e. Training and evaluating 106 may include repeatedly dividing 120 the dataset to perform multiple rounds of training 122 and evaluation 124 (i.e., rounds of validation) and combining 126 the (evaluation) results of the multiple rounds of training 122 and evaluation 124 to produce the performance result for each machine learning model. Combining 126 the evaluation results to produce the performance result may be by averaging the evaluation results, accumulating the evaluation results, and/or other statistical combinations of the evaluation results; para. [0056, 0057, 0059]).
Sturlaugson does not explicitly teach a scattering, a standard deviation, or a variance of the results.
However, Harper further teaches wherein the confidence value is indicated depending on a scattering, a standard deviation, or a variance of the evaluation results (i.e. While useful from a theoretical perspective, Equation 1 is infeasible to compute. Instead, the posterior distributions can be approximated using a Monte-Carlo dropout method (alternatively embodiments can use methods including Monte Carlo or Laplace approximation methods, or stochastic gradient Langevin diffusion, or expectation propagation or variational methods). Dropout is a process by which individual nodes within the network are randomly removed during training according to a specified probability. By implementing dropout at test and performing N stochastic forward passes through the network, a posterior distribution can be approximated over model predictions (approaching the true distribution as N→∞). In the embodiment, the Monte-Carlo dropout technique is implemented as an efficient way to describe uncertainty over emotional state predictions; para. [0085, 0090]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sturlaugson to include the feature of Harper. One would have been motivated to make this modification because it reduces computational complexity and improves efficiency of the evaluation process.
Claim 16: Sturlaugson and Harper teach the method according to claim 8. Sturlaugson further teaches wherein aggregating the evaluation results is performed with classification vectors as the evaluation results (i.e. Machine learning may be applied to regression problems (where the output data are numeric, e.g., a voltage, a pressure, a number of cycles) and to classification problems (where the output data are labels, classes, and/or categories, e.g., pass-fail, failure type, etc.); para. [0003, 0023, 0024, 0042]) and a class is output as the second model output that results from a majority decision (i.e. Macro-procedures 36 may include a machine learning algorithm and associated parameter values that are independent and/or distinct from the micro-procedures 38. Additionally or alternatively, macro-procedures 36 may combine the outcomes of the ensemble of micro-procedures 38 by cumulative value, maximum value, minimum value, median value, average value, mode value, most common value, and/or majority vote; para. [0026, 0079, 0125]).
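The majority-decision aggregation cited from Sturlaugson may be illustrated as follows (a minimal sketch; the class labels and the list of per-evaluation results are hypothetical):

```python
from collections import Counter

def aggregate_by_majority(evaluation_results):
    """Aggregate per-evaluation classification results and output the
    single class that results from a majority decision."""
    return Counter(evaluation_results).most_common(1)[0][0]

# Hypothetical classification results from repeated evaluations.
results = ["pass", "fail", "pass", "pass", "fail"]
print(aggregate_by_majority(results))  # pass (3 votes out of 5)
```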
9. Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Sturlaugson in view of Harper, and further in view of Henry (U.S. Patent Application Pub. No. US 20190065948 A1).
Claim 3: Sturlaugson and Harper teach the method according to claim 1. Sturlaugson further teaches wherein: the trained data-based evaluation model comprises an artificial neural network having one or more layers of artificial neurons, and the one or more model parameters, for each of the neurons (i.e. an artificial neural network may include parameters specifying the number of nodes, the cost function, the learning rate, the learning rate decay, and the maximum iterations; para. [0022, 0076]).
Sturlaugson does not explicitly teach a weighting vector and a bias value.
However, Henry teaches wherein: the trained data-based evaluation model comprises an artificial neural network having one or more layers of artificial neurons, and the one or more model parameters, for each of the neurons, comprise weights of a weighting vector and a bias value (i.e. selection score components 106A-C may compute a score using a neural network, such as a multi-layer perceptron (MLP) with a single hidden layer. An MLP may be specified with weight matrices W1 and W2, bias vectors b1 and b2, and non-linear function a, such as a rectified linear function or a hyperbolic tangent function. Where the feature vector is denoted by x, a score s may be computed; para. [0033, 0034]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Sturlaugson and Harper to include the feature of Henry. One would have been motivated to make this modification because weights and biases are the standard parameters of artificial neurons in neural networks.
10. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Sturlaugson in view of Harper, and further in view of Hannun et al. (U.S. Patent Application Pub. No. US 20160171974 A1).
Claim 5: Sturlaugson and Harper teach the method according to claim 4. Sturlaugson does not explicitly teach wherein the one or more model parameters that are reduced by the amount or set to 0 corresponds to between 1% and 20% of a total number of the one or more model parameters.
However, Hannun teaches wherein the one or more model parameters that are reduced by the amount or set to 0 corresponds to between 1% and 20% of a total number of the one or more model parameters (i.e. during training, a dropout rate (e.g., 5%) was applied; para. [0050]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Sturlaugson and Harper to include the feature of Hannun. One would have been motivated to make this modification because it balances computational efficiency with model accuracy.
11. Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Sturlaugson in view of Harper, and further in view of Toledano (U.S. Patent Application Pub. No. US 20200233774 A1).
Claim 17: Sturlaugson and Harper teach the method according to claim 9. Sturlaugson does not explicitly teach discarding the input data sets of the input data package that cannot be validated.
However, Toledano teaches discarding the input data sets of the input data package that cannot be validated (i.e. The outlier removal module 310 filters out extreme irregularities from the received time-series data. For example, the outlier removal module 310 may remove data points associated with a malfunctioning sensor. The outlier removal module 310 may remove data points that are greater or less than an immediately preceding data point by a predetermined value. As an example, the outlier removal module 310 may filter out a data point that is at least five times greater than or five times smaller than a preceding value. In other implementations, the predetermined value may be another suitable value that coarsely filters the time-series data to remove extreme irregularities. Interpolation may be used to replace the removed irregularities. The outlier removal module 310 provides the filtered time-series data to a seasonal trend identification module 312 and a modeling module 314; para. [0012, 0017, 0059, 0065]).
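For illustration only (not part of the record or of Toledano's disclosure), the described filtering of data points at least five times greater than or five times smaller than the preceding value can be sketched as follows, assuming a positive-valued series and discarding rather than interpolating the removed points:

```python
def remove_outliers(series, factor=5.0):
    """Drop points at least `factor` times greater or smaller than the last kept point.

    Assumes positive-valued time-series data; removed points are discarded.
    """
    if not series:
        return []
    kept = [series[0]]
    for value in series[1:]:
        prev = kept[-1]
        if value >= factor * prev or value <= prev / factor:
            continue  # extreme irregularity: filter it out
        kept.append(value)
    return kept

# Hypothetical sensor trace: 60.0 is a spike, 1.0 is a dropout glitch.
filtered = remove_outliers([10.0, 11.0, 60.0, 12.0, 1.0, 13.0])
```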
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Sturlaugson and Harper to include the feature of Toledano. One would have been motivated to make this modification because it ensures that only sufficiently consistent sequential input data are used for model evaluation.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Hsiung et al. (Pub. No. US 12590889 B2), receive information identifying results of a spectroscopic measurement performed on an unknown sample; aggregate a plurality of classes of a classification model to generate an aggregated classification model; determine that the spectroscopic measurement is performed accurately using the aggregated classification model; determine a confidence measure for a set of classes of the aggregated classification model; select a subset of the set of classes based on the confidence measure for the set of classes; generate an in situ local classification model using the subset of the set of classes; identify one or more outlier samples in the in situ local classification model; remove the one or more outlier samples from the in situ local classification model; generate a prediction using the in situ local classification model based on removing the one or more outlier samples; and provide an output identifying the prediction.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAN TRAN whose telephone number is (303)297-4266. The examiner can normally be reached Monday through Thursday, 8:00 am - 5:00 pm MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Ell can be reached on 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TAN H TRAN/Primary Examiner, Art Unit 2141