DETAILED ACTION
This action is in response to the communication filed on 12/05/2025, in which claims 1, 10 and 19 are amended and claims 2-3, 8, 11-12 and 17 are canceled; accordingly, claims 1, 4-7, 9-10, 13-16 and 18-19 are pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/05/2025 has been entered.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-7, 9-10, 13-16 and 18-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1, 4-7 and 9 recite a device comprising a memory and a processor, claims 10, 13-16 and 18 recite a method, and claim 19 recites a non-transitory medium. Therefore, claims 1, 4-7 and 9 are directed to a machine, claims 10, 13-16 and 18 are directed to a process, and claim 19 is directed to a manufacture.
With respect to claims 1, 10 and 19:
2A Prong 1: the claim recites a judicial exception.
apply a first weight to first prediction information output from the first learning network model, a second weight to second prediction information output from the second learning network model and a third weight to the third prediction information output from the third learning network model (mental process – evaluation and judgement, apply a first/second/third weight to first/second/third output)
identify whether the home appliance has the potential defect based on the weighted first prediction information of the first prediction information to which the first weight is applied, the weighted second prediction information of the second prediction information to which the second weight is applied, and weighted third prediction information of the third prediction information to which the third weight is applied (mental process – evaluation and judgement, identify if the appliance has the defect based on the first/second/third information)
update the first learning network model and the second learning network model based on an accumulated amount of integrated learning data and a period of a processing step (mental process – evaluation and judgement, updating the first model and the second model based on an amount of data and a period of a step)
change the first weight applied to the first learning network model, the second weight applied to the second learning network model, and the third weight applied to the third learning network model, based on the accumulated amount of the integrated learning data and the period of the processing step (mental process – evaluation and judgement, change the first/second/third weight based on the amount of the data and the period of the step)
2A Prong 2: This judicial exception is not integrated into a practical application.
(claims 1 and 19) a communicator, a memory, one instruction, a processor, instructions (mere instructions to apply an exception - see MPEP 2106.05(f), (2) Whether the claim invokes computers)
provide measurement information of a home appliance as input to a first learning network model, a second learning network model and a third learning network model trained to predict whether the home appliance will exhibit a potential defect (insignificant extra-solution activity - see MPEP 2106.05(g), mere data gathering)
provide a visual notice or an auditory notice comprising information on which component of the home appliance is predicted to have the potential defect (insignificant extra-solution activity - see MPEP 2106.05(g), data output)
wherein the first learning network model is a supervised learning network model, the second learning network model is an unsupervised learning network model and the third learning network model is one of a reinforcement learning network model or a transfer learning network model (generally linking a particular technological environment or field of use – MPEP 2106.05(h), or mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception)
wherein the first learning network model and the second learning network model are trained to predict whether the home appliance has the potential defect based on the integrated learning data (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception)
wherein the integrated learning data is data acquired by integrating process defect data acquired during a manufacturing step of the home appliance and service defect data acquired in a service step of servicing the home appliance (insignificant extra-solution activity - see MPEP 2106.05(g), mere data gathering, and selecting a particular data source or type of data to be manipulated does not limit the scope of the data gathering activity)
2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
(claims 1 and 19) a communicator, a memory, one instruction, a processor, instructions (mere instructions to apply an exception - see MPEP 2106.05(f), (2) Whether the claim invokes computers)
provide measurement information of a home appliance as input to a first learning network model, a second learning network model and a third learning network model trained to predict whether the home appliance will exhibit a potential defect (insignificant extra-solution activity - see MPEP 2106.05(g), mere data gathering, and WURC (well-understood, routine, conventional activity): receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 - MPEP 2106.05(d)(II)(i))
provide a visual notice or an auditory notice comprising information on which component of the home appliance is predicted to have the potential defect (insignificant extra-solution activity - see MPEP 2106.05(g), data output, and WURC: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 - MPEP 2106.05(d)(II)(i))
wherein the first learning network model is a supervised learning network model, the second learning network model is an unsupervised learning network model and the third learning network model is one of a reinforcement learning network model or a transfer learning network model (generally linking a particular technological environment or field of use – MPEP 2106.05(h), or mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception)
wherein the first learning network model and the second learning network model are trained to predict whether the home appliance has the potential defect based on the integrated learning data (mere instructions to apply an exception – MPEP 2106.05(f), (3) The particularity or generality of the application of the judicial exception)
wherein the integrated learning data is data acquired by integrating process defect data acquired during a manufacturing step of the home appliance and service defect data acquired in a service step of servicing the home appliance (insignificant extra-solution activity - see MPEP 2106.05(g), mere data gathering, and selecting a particular data source or type of data to be manipulated does not limit the scope of the data gathering activity, and WURC: receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 - MPEP 2106.05(d)(II)(i))
With respect to claims 4 and 13:
2A Prong 1: the claim recites a judicial exception.
based on the accumulated amount of the integrated learning data being less than a threshold amount, increase the second weight relative to the first weight (mental process – evaluation and judgement, based on the data being less than a threshold, increase the second weight)
based on the accumulated amount of the integrated learning data being greater than or equal to the threshold amount, increase the first weight relative to the second weight (mental process – evaluation and judgement, based on the data being greater than a threshold, increase the first weight)
With respect to claims 5 and 14:
2A Prong 2: This judicial exception is not integrated into a practical application.
wherein the service defect data comprises at least one of information on a component identified as defective among a plurality of components constituting the home appliance, a service period of the home appliance, a production date of the home appliance, or a production area of the home appliance (generally linking the use of a judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h); insignificant extra-solution activity - see MPEP 2106.05(g), selecting a particular data source or type of data to be manipulated does not limit the scope of the data gathering activity)
wherein the process defect data comprises at least one of measurement information of the home appliance or information on the component identified as defective among the plurality of components constituting the home appliance (generally linking the use of a judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h); insignificant extra-solution activity - see MPEP 2106.05(g), selecting a particular data source or type of data to be manipulated does not limit the scope of the data gathering activity)
2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein the service defect data comprises at least one of information on a component identified as defective among a plurality of components constituting the home appliance, a service period of the home appliance, a production date of the home appliance, or a production area of the home appliance (generally linking the use of a judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h); insignificant extra-solution activity - see MPEP 2106.05(g), selecting a particular data source or type of data to be manipulated does not limit the scope of the data gathering activity. Also, WURC: receiving or transmitting data over a network– see MPEP 2106.05(d)(II)(i))
wherein the process defect data comprises at least one of measurement information of the home appliance or information on the component identified as defective among the plurality of components constituting the home appliance (generally linking the use of a judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h); insignificant extra-solution activity - see MPEP 2106.05(g), selecting a particular data source or type of data to be manipulated does not limit the scope of the data gathering activity. Also, WURC: receiving or transmitting data over a network– see MPEP 2106.05(d)(II)(i))
With respect to claims 6 and 15:
2A Prong 2: This judicial exception is not integrated into a practical application.
wherein the measurement information of the home appliance includes a plurality of measurement information of different categories (generally linking the use of a judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h); insignificant extra-solution activity - see MPEP 2106.05(g), selecting a particular data source or type of data to be manipulated does not limit the scope of the data gathering activity)
2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
wherein the measurement information of the home appliance includes a plurality of measurement information of different categories (generally linking the use of a judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h); insignificant extra-solution activity - see MPEP 2106.05(g), selecting a particular data source or type of data to be manipulated does not limit the scope of the data gathering activity. Also, WURC: receiving or transmitting data over a network– see MPEP 2106.05(d)(II)(i))
With respect to claims 7 and 16:
2A Prong 1: the claim recites a judicial exception.
cluster a plurality of learning data used for training the first learning network model and the second learning network model (mental process – evaluation and judgement, cluster data)
divide the plurality of learning data as groups of learning data for the respective different categories (mental process – evaluation and judgement, divide the data as groups)
2A Prong 2: This judicial exception is not integrated into a practical application.
train the first learning network model and the second learning network model based on the plurality of learning data for the groups of learning data (mere instructions to apply an exception - see MPEP 2106.05(f), (3) The generality of the application of the judicial exception, a general training process)
2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
train the first learning network model and the second learning network model based on the plurality of learning data for the groups of learning data (mere instructions to apply an exception - see MPEP 2106.05(f), (3) The generality of the application of the judicial exception, a general training process)
With respect to claims 9 and 18:
2A Prong 2: This judicial exception is not integrated into a practical application.
acquire the first weight and the second weight based on a Bayesian Optimization algorithm (generally linking the use of a judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h))
2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
acquire the first weight and the second weight based on a Bayesian Optimization algorithm (generally linking the use of a judicial exception to a particular technological environment or field of use - see MPEP 2106.05(h))
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1, 4-7, 9-10, 13-16 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kang ("Product failure prediction with missing data") in view of Feurer ("Scalable Meta-Learning for Bayesian Optimization"), further in view of Springenberg ("Bayesian Optimization with Robust Bayesian Neural Networks"), further in view of Wistuba ("Scalable Gaussian process-based transfer surrogates for hyperparameter optimization" 20171222), and further in view of Park (KR 20190044198 A).
In regard to claims 1, 10 and 19, Kang teaches: provide measurement information of a home appliance as input to a first learning network model, a second learning network model and a third learning network model trained to predict whether the home appliance will exhibit a potential defect, (Kang, p. 2 "We demonstrate the effectiveness of the proposed method through a case study on product failure prediction using actual data-sets from a home appliance manufacturer."; p. 4 "Suppose that an incomplete training data-set [measurement information] D = {(xi , yi)} i=1..n consisting of n instances… A prediction model Mt is trained [a first/second/third learning network model trained] on Dt using a pre-determined learning algorithm.")
apply a first weight to first prediction information output from the first learning network model, a second weight to second prediction information output from the second learning network model and a third weight to the third prediction information output from the third learning network model,
(Kang, p. 5 "Algorithm 2. Prediction stage... pt (y0 = 1|x0) ← prediction of Mt for x0 [first/second/third prediction information output]... The posterior probabilities pt(y0 = 1|x0) obtained from the selected models are aggregated through a weighted average using the weights wt… The weights wt [a first/second/third weight] are obtained at the training stage to determine the degree of contribution of Mt..."; applying wt to pt from Mt, i.e. w1 applied to p1 of M1, w2 applied to p2 of M2, and w3 applied to p3 of M3)
identify whether the home appliance has the potential defect based on the weighted first prediction information of the first prediction information to which the first weight is applied, the weighted second prediction information of the second prediction information to which the second weight is applied, and weighted third prediction information of the third prediction information to which the third weight is applied, (Kang, p. 5 "… where a larger f(x0) indicates a higher probability of product failure [whether the home appliance has the potential defect] at the market"; f(x0) is a final prediction value, which is the sum of w1p1, w2p2 and w3p3 [the weighted first/second/third prediction information])
… wherein the first learning network model and the second learning network model are trained to predict whether the home appliance has the potential defect based on the integrated learning data, and (Kang, p. 4 "Suppose that an incomplete training data-set D = {(xi , yi)} i=1..n consisting of n instances… A prediction model Mt is trained on Dt using a pre-determined learning algorithm."; M1 and M2 are trained to predict product failure based on dataset D [integrated learning data])
wherein the integrated learning data is data acquired by integrating process defect data acquired during a manufacturing step of the home appliance and service defect data acquired in a service step of servicing the home appliance. (Kang, p. 1 "To address failures occurring at the market, prediction models can be built from a mashup of production and customer service data, [the integrated learning data acquired during manufacturing and servicing] in which various relevant factors from production data and failure records in customer service data comprise the input and output variables for these models, respectively"; p. 7 "The sensor measurements in the manufacturing process and inspection results obtained during the production stage were used to define the input variables."; production and manufacturing often used interchangeably)
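For illustration only, the weighted aggregation quoted from Kang's Algorithm 2 above can be sketched in Python. All names and values below are hypothetical and are not part of Kang's disclosure or the claimed invention; this is a sketch of the cited technique, not Kang's implementation.

```python
# Illustrative sketch of the weighted aggregation in Kang's Algorithm 2:
# posterior probabilities p_t from each selected model M_t are combined
# through a weighted average using the weights w_t obtained at the
# training stage. All values below are hypothetical.

def aggregate_predictions(probs, weights):
    """f(x0) = sum_t w_t * p_t(y0 = 1 | x0), normalized by the weight sum."""
    return sum(w * p for w, p in zip(weights, probs)) / sum(weights)

# Three models' posterior failure probabilities and their weights
p = [0.8, 0.6, 0.4]   # p_1, p_2, p_3 for input x0
w = [0.5, 0.3, 0.2]   # w_1, w_2, w_3

f_x0 = aggregate_predictions(p, w)   # larger f(x0) => higher failure probability
```

The final value f(x0) is compared across products: a larger value indicates a higher predicted probability of failure at the market.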
Kang does not teach, but Feurer teaches: wherein the first learning network model is a supervised learning network model, the second learning network model is an unsupervised learning network model and… (Feurer, p. 3 "We fit a GP model to the observations of each past run i and refer to these models as base models. They have posterior fi(x|Di), with mean and variance μi(x) and σ2i(x), respectively."; p. 3, 4 Ranking-Weighted Gaussian Process Ensemble "Our strategy here is to estimate the target function as a weighted combination of the predictions of each base model and the target model itself: f(x|D) = Σ wifi(x|Di). A model of this form is preferred for several practical reasons. First, this ensemble model remains a GP, and in particular... f(x|D) ~ N(Σ wiμi(x), Σ wi2σ2i(x))"; p. 2 "Observations from all runs are put on the same scale using an SVM_RANK model [a supervised learning network model] and then used in a single GP. Yogatama and Mann [2014] select similar past optimization runs based on the nearest neighbors [an unsupervised learning network model] in meta feature space. Observations from all similar runs are then combined in a single GP."; GP is a surrogate for any model. Each of the base models is a GP and the ensemble is also a GP. The GP ensemble is a weighted combination of the predictions of each base model, which is equivalent to Kang's weighted predictions of Mt [weighted prediction information])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Kang to incorporate the teachings of Feurer by including an ensemble model as a single Gaussian process model. Doing so would substantially reduce the time it takes to obtain near-optimal configurations and would be useful for warm-starting expensive searches or running quick re-optimizations. (Feurer, p. 1 "We develop an ensemble model that can incorporate the results of past optimization runs, while avoiding the poor scaling that comes with putting all results into a single Gaussian process model... Results... show that the ensemble can substantially reduce the time it takes to obtain near-optimal configurations, and is useful for warm-starting expensive searches or running quick re-optimizations.")
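For illustration only, the ensemble posterior quoted from Feurer above, with mean Σ wiμi(x) and variance Σ wi2σ2i(x), can be sketched as follows. The values are hypothetical; this is a sketch of the quoted formula, not Feurer's implementation.

```python
# Illustrative sketch of Feurer's ranking-weighted GP ensemble posterior:
# the ensemble mean is the weighted sum of the base-model means, and the
# ensemble variance is the squared-weight sum of the base-model variances.
# All values below are hypothetical.

def ensemble_posterior(means, variances, weights):
    mean = sum(w * m for w, m in zip(weights, means))          # sum_i w_i * mu_i(x)
    var = sum(w * w * v for w, v in zip(weights, variances))   # sum_i w_i^2 * sigma_i^2(x)
    return mean, var

# Two base models with weights 0.75 and 0.25
mu, var = ensemble_posterior([1.0, 2.0], [0.1, 0.2], [0.75, 0.25])
```

Because both the mean and variance are closed-form combinations of the base models, the ensemble remains a Gaussian process, which is the practical advantage Feurer cites.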
Kang and Feurer do not teach, but Springenberg teaches: An electronic device comprising: a communicator; a memory storing at least one instruction; and a processor configured to execute the at least one instruction stored in the memory, wherein the processor when executing the at least one instruction is configured to: (Springenberg, p. 1, 1 Introduction "This includes multi-task BO, parallel optimization of deep residual networks, and deep reinforcement learning. An implementation of our method can be found at https://github.com/automl/RoBO."; the implementation and the published source code of Bayesian Optimization inherently teach all the computer components)
… the third learning network model is one of a reinforcement learning network model or a transfer learning network model, (Springenberg, p. 1 "This includes multi-task BO, parallel optimization of deep residual networks, and deep reinforcement learning. [a reinforcement learning network model]")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Kang and Feurer to incorporate the teachings of Springenberg by including the implementation of BO and multi-task BO with 21 tasks including reinforcement learning. Doing so would allow parallel optimization of deep neural networks and deep reinforcement learning, showing the power and flexibility of this approach. (Springenberg, p. 1 "Experiments including multi-task Bayesian optimization with 21 tasks, parallel optimization of deep neural networks and deep reinforcement learning show the power and flexibility of this approach.")
Kang, Feurer and Springenberg do not teach, but Wistuba teaches: update the first learning network model and the second learning network model based on an accumulated amount of integrated learning data and a period of a processing step, and (Wistuba, p. 57, 6 Scalable Gaussian process transfer surrogate framework "We derive our scalable Gaussian process transfer surrogate framework (SGPT) by combining all M + 1 Gaussian processes into a weighted, normalized sum as sketched in Fig. 3."; p. 59, 6.1 Product of experts "In this section we want to formally derive values for the parameters w and v for the scalable Gaussian process transfer surrogate framework (Algorithm 2)... μ(x*) = ...(37) which is a sum of means, weighted by the product of βi and the individual precisions... βi = 1/M+1, ∀i = 1,..., M+1, (38)... To sum up, generalized products of experts are an instance of scalable transfer surrogates when setting wi = βiσi-2(x*) (39)... as weight parameters [update the 1st/2nd learning network model] in Algorithm 2."; wi is calculated based on βi and σi, where i = 1,..., M+1 [based on the accumulated amount of integrated learning data (i=1:M) and the period of a processing step (i=M+1)])
change the first weight applied to the first learning network model, the second weight applied to the second learning network model, and the third weight applied to the third learning network model, based on the accumulated amount of the integrated learning data and the period of the processing step, (Wistuba, p. 57, 6 Scalable Gaussian process transfer surrogate framework "We derive our scalable Gaussian process transfer surrogate framework (SGPT) by combining all M + 1 Gaussian processes into a weighted, normalized sum as sketched in Fig. 3."; p. 59, 6.1 Product of experts "In this section we want to formally derive values for the parameters w and v for the scalable Gaussian process transfer surrogate framework (Algorithm 2)... μ(x*) = ...(37) which is a sum of means, weighted by the product of βi and the individual precisions... βi = 1/M+1, ∀i = 1,..., M+1, (38)... To sum up, generalized products of experts are an instance of scalable transfer surrogates when setting wi = βiσi-2(x*) (39)... as weight parameters [change the 1st/2nd/3rd weight applied to the 1st/2nd/3rd learning network model] in Algorithm 2."; wi is calculated based on βi and σi, where i = 1,..., M+1 [based on the accumulated amount of integrated learning data (i=1:M) and the period of a processing step (i=M+1)])
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Kang, Feurer and Springenberg to incorporate the teachings of Wistuba by including the Scalable Gaussian Process Transfer Surrogate (SGPT) framework. Doing so would make it possible to use Gaussian processes in a scalable way and achieve a feasible and acceptable run time. (Wistuba, p. 65, 8.4 Scalability experiment "Our proposed surrogate model SGPT makes use of Gaussian processes in a scalable way... In an empirical evaluation we show that our method is nevertheless feasible while the state-of-the-art exceeds an acceptable run time... The results are visualized in Fig. 4. At a point where the Full GP needs almost 7 hours of training, SGPT needs only about 2 minutes.")
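For illustration only, the product-of-experts weighting quoted from Wistuba above, with wi = βi·σi^-2(x*) and βi = 1/(M+1), can be sketched as follows. The values are hypothetical; this is a sketch of the quoted equations (37)-(39), not Wistuba's implementation.

```python
# Illustrative sketch of the product-of-experts weighting in Wistuba's
# SGPT framework: each expert i contributes with weight
# w_i = beta_i * sigma_i^{-2}(x*), beta_i = 1/(M+1), and the combined
# mean is the weight-normalized sum of expert means. Hypothetical values.

def sgpt_mean(means, variances):
    M_plus_1 = len(means)
    beta = 1.0 / M_plus_1
    weights = [beta / v for v in variances]   # w_i = beta_i * sigma_i^{-2}(x*)
    return sum(w * m for w, m in zip(weights, means)) / sum(weights)

# M = 2 source-task experts (i = 1..M) plus the target-task expert (i = M+1)
mu_star = sgpt_mean([0.2, 0.4, 0.3], [0.5, 0.5, 0.1])
```

Note that the expert with the smallest predictive variance (here the target-task expert) receives the largest weight, which is how the framework shifts emphasis as more target data accumulates.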
Kang, Feurer, Springenberg and Wistuba do not teach, but Park teaches: provide a visual notice or an auditory notice comprising information on which component of the home appliance is predicted to have a potential defect, (Park, p. 24 "And a replacement notification module for informing the user terminal [provide a notice] to replace the lifetime calculated by the lifetime calculation module for each module..."; p. 3 "the lifetime of each of the plurality of illumination modules 31 is calculated and stored and the information related thereto is transmitted to the user terminal 7"; p. 14 "The lifetime information receiving module 714b is configured to receive lifetime information of the lighting module 31... And is transmitted to the life time information display module 714c to be displayed on the user terminal 7. [provide a visual notice]")
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Kang, Feurer, Springenberg and Wistuba to incorporate the teachings of Park by including the notice of potential defects. Doing so would allow the user to easily and quickly recognize the replacement timing, and the management of the smart lighting device can be performed smoothly. (Park, p. 9 "The replacement notification module 146 is configured to notify the user of the necessity of replacement when the calculated remaining life of each of the lighting modules 31 falls below a predetermined reference level. Can be informed. Therefore, the user can easily and quickly recognize the replacement timing of each lighting module 31, and the management of the smart lighting device 3 can be performed smoothly.")
Claims 10 and 19 recite substantially the same limitations as claim 1; therefore, the rejection applied to claim 1 also applies to claims 10 and 19. In addition, Springenberg teaches: A non-transitory computer-readable medium storing computer-readable instructions, which when executed by the processor of an electronic device control the electronic device to perform a method comprising: (Springenberg, p. 1, 1 Introduction "This includes multi-task BO, parallel optimization of deep residual networks, and deep reinforcement learning. An implementation of our method can be found at https://github.com/automl/RoBO."; the implementation and the published source code of Bayesian Optimization inherently teach all the computer components)
The rationale for combining the teachings of Kang, Feurer and Springenberg is the same as set forth in the rejection of claim 1.
In regard to claims 4 and 13, Kang does not teach, but Feurer teaches: wherein the processor when executing the at least one instruction is configured to: based on the accumulated amount of the integrated learning data being less than a threshold amount, increase the second weight relative to the first weight, and (Feurer, p. 4 "We draw S such samples: li,s ~ L(fi,Dt) for s =1,... ,S and i = 1,... ,t. Weight for model i is then computed as wi = 1/S Σ 1(i=argmin li',s)(3)... We prevent weight dilution by discarding models that are substantially worse than the target model. Model i is discarded from the ensemble if the median of its loss samples li,s is greater than the 95th percentile of the target loss samples lt,s."; if the median of a model's loss samples is less than the 95th percentile of the target loss samples, the model is not discarded. Thus, the model is included in Eq. (3) for computing weights, i.e., the weight of the model is increased)
based on the accumulated amount of the integrated learning data being greater than or equal to the threshold amount, increase the first weight relative to the second weight. (Feurer, p. 4 "We draw S such samples: li,s ~ L(fi,Dt) for s =1,... ,S and i = 1,... ,t. Weight for model i is then computed as wi = 1/S Σ 1(i=argmin li',s)(3)... We prevent weight dilution by discarding models that are substantially worse than the target model. Model i is discarded from the ensemble if the median of its loss samples li,s is greater than the 95th percentile of the target loss samples lt,s."; if the median of a model's loss samples is greater than the 95th percentile of the target loss samples, the model is discarded, which increases the weights of the other models because the discarded model is not included in Eq. (3) for computing weights)
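For illustration only, the weight computation of Feurer's Eq. (3) together with the weight-dilution discard rule quoted above can be sketched as follows. The data and names are hypothetical; this is a sketch of the quoted procedure, not Feurer's implementation.

```python
# Illustrative sketch of Feurer's ranking-based weight computation
# (Eq. 3) with the weight-dilution discard rule. loss_samples[i][s] is
# the s-th loss sample for model i; the last list is the target model.
# Hypothetical data; not Feurer's implementation.
import statistics

def rgpe_weights(loss_samples):
    S = len(loss_samples[0])
    target = loss_samples[-1]
    # 95th percentile of the target's loss samples (simple nearest-rank cut)
    cutoff = sorted(target)[min(len(target) - 1, int(0.95 * len(target)))]
    # Discard base models whose median loss exceeds the cutoff; keep the target
    kept = [i for i, losses in enumerate(loss_samples)
            if i == len(loss_samples) - 1 or statistics.median(losses) <= cutoff]
    counts = {i: 0 for i in kept}
    for s in range(S):
        best = min(kept, key=lambda i: loss_samples[i][s])
        counts[best] += 1                      # 1(i = argmin_i' l_{i',s})
    return {i: c / S for i, c in counts.items()}   # w_i = count / S

weights = rgpe_weights([[0.1, 0.2, 0.3, 0.4],   # base model 0: usually best
                        [0.5, 0.6, 0.7, 0.8],   # base model 1: discarded
                        [0.3, 0.3, 0.3, 0.3]])  # target model
```

A discarded model receives no weight, so the weight mass shifts to the remaining models, which is the mechanism the explanation above maps to increasing one weight relative to another.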
The rationale for combining the teachings of Kang and Feurer is the same as set forth in the rejection of claim 1.
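For illustration only, the sample-based weight computation and weight-dilution discard rule that Feurer describes in the passage quoted above can be sketched as follows. This is a minimal NumPy sketch, not part of any cited reference: the function name, array shapes, and tie handling are illustrative assumptions.

```python
import numpy as np

def rgpe_weights(loss_samples, target_idx):
    """Sketch of Feurer's Eq. (3) weighting with the dilution guard.

    loss_samples: array of shape (t, S) holding S sampled losses
    l_{i,s} for each of t models; row target_idx is the target model.
    """
    loss_samples = np.asarray(loss_samples, dtype=float)
    t, S = loss_samples.shape

    # Discard model i if the median of its loss samples exceeds the
    # 95th percentile of the target model's loss samples.
    cutoff = np.percentile(loss_samples[target_idx], 95)
    keep = np.median(loss_samples, axis=1) <= cutoff
    keep[target_idx] = True  # the target model is never discarded

    # w_i = (1/S) * sum_s 1(i == argmin_{i'} l_{i',s}), kept models only
    weights = np.zeros(t)
    kept_idx = np.flatnonzero(keep)
    winners = kept_idx[np.argmin(loss_samples[kept_idx], axis=0)]
    for s in range(S):
        weights[winners[s]] += 1.0 / S
    return weights
```

In this sketch, dropping a model from `kept_idx` removes it from the per-sample argmin, so its share of the weight mass is redistributed to the surviving models, which is the mechanism the rejection of claims 4 and 13 relies on.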
In regard to claims 5 and 14, Kang teaches: wherein the service defect data comprises at least one of information on a component identified as defective among a plurality of components constituting the home appliance, a service period of the home appliance, a production date of the home appliance, or a production area of the home appliance, and (Kang, p. 7 "Meta Variables… Product ID… Production date... [a production date of the home appliance] Figure 3. Example of data-set used in case study.")
[media_image5.png: Kang, Figure 3, reproduced in greyscale]
wherein the process defect data comprises at least one of measurement information of the home appliance or information on the component identified as defective among the plurality of components constituting the home appliance. (Kang, p. 7 "The sensor measurements [measurement information] in the manufacturing process and inspection results obtained during the production stage were used to define the input variables."; p.2 "For a failure prediction problem, each product is regarded as an instance for predictive modelling, where the factors affecting product quality, such as process parameters, sensor measurements, inspection results and production environment, are modelled to predict the quality indicators for products.")
In regard to claims 6 and 15, Kang teaches: wherein the measurement information of the home appliance includes a plurality of measurement information of different categories. (Kang, p. 7 "Each data-set contains a number of instances for one of the selected product groups (P1, P2, P3). [different categories]")
[media_image6.png, reproduced in greyscale]
In regard to claims 7 and 16, Kang teaches: wherein the processor when executing the at least one instruction is configured to: cluster a plurality of learning data used for training the first learning network model and the second learning network model, (Kang, p. 7 "Table 1. Data-sets used in case study… The details and basic statistics of the data-sets are summarised in Table 1."; see Table 1, clustering data for training)
divide the plurality of learning data as groups of learning data for the respective different categories, and (Kang, p. 7 "Each data-set contains a number of instances for one of the selected product groups (P1, P2, P3). [groups of learning data for the respective different categories]")
train the first learning network model and the second learning network model based on the plurality of learning data for the groups of learning data. (Kang, p. 4 "Suppose that an incomplete training data-set D = {(xi , yi)} i=1..n consisting of n instances… A prediction model Mt is trained on Dt [train the first learning network model and the second learning network model] using a pre-determined learning algorithm.")
In regard to claims 9 and 18, Kang does not teach, but Feurer teaches: wherein the processor when executing the at least one instruction is configured to: acquire the first weight and the second weight based on a Bayesian Optimization algorithm. (Feurer, p. 3 "We use a ranking loss to compute weights, and so call this method the ranking-weighted Gaussian process ensemble (RGPE)."; p. 5 "The RGPE retains the distributional properties of a GP, and so can be used with standard acquisition functions for Bayesian optimization. More specifically, instead of μ we use μ(x) = Σ w_i μ_i(x), and instead of σ² we use σ²(x) = Σ w_i² σ_i²(x), to compute EI. In many applications of Bayesian optimization we have the ability to run multiple function evaluations in parallel, and parallelization is critical for the scalability of Bayesian optimization.")
The rationale for combining the teachings of Kang and Feurer is the same as set forth in the rejection of claim 1.
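For illustration only, the RGPE combination quoted above (ensemble mean μ(x) = Σ w_i μ_i(x) and variance σ²(x) = Σ w_i² σ_i²(x)) can be sketched as follows. This is a minimal NumPy sketch under assumed array shapes; the function name is illustrative and not from any cited reference.

```python
import numpy as np

def rgpe_predict(weights, means, variances):
    """Combine per-model GP predictions into the RGPE ensemble.

    weights:   shape (t,)          -- one weight w_i per model
    means:     shape (t, n_points) -- mu_i(x) for each model and point
    variances: shape (t, n_points) -- sigma_i^2(x) likewise
    """
    w = np.asarray(weights, dtype=float)
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    ens_mu = (w[:, None] * mu).sum(axis=0)        # mu(x) = sum w_i mu_i(x)
    ens_var = (w[:, None] ** 2 * var).sum(axis=0)  # sigma^2 = sum w_i^2 s_i^2
    return ens_mu, ens_var
```

Because the combined prediction is again a Gaussian mean and variance, it can be fed directly into a standard acquisition function such as expected improvement, which is the point the rejection of claims 9 and 18 draws from Feurer.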
Response to Arguments
Applicant's arguments with respect to the rejection of the claims under 35 U.S.C. 101 have been fully considered but they are not persuasive:
Applicant argues: (see p. 11) Features such as "provide a visual notice or an auditory notice comprising information on which component of the home appliance is predicted to have a potential defect," "update the first learning network model and the second learning network model based on an accumulated amount of integrated learning data and a period of a processing step," "change the first weight applied to the first learning network model, the second weight applied to the second learning network model, and the third weight applied to the third learning network model, based on the accumulated amount of the integrated learning data and the period of the processing step," "wherein the first learning network model and the second learning network model are trained to predict whether the home appliance has the potential defect based on integrated learning data," and "wherein the integrated learning data is data acquired by integrating process defect data acquired during a manufacturing step of the home appliance and service defect data acquired in a service step of servicing the home appliance" could not practically be performed by the human mind.
Examiner answers: The “update” and “change” steps are mental processes. A person can update a model by changing a hyperparameter or parameter, and a person can also change a weight applied to a model. If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea – MPEP 2106.04(a)(2)(III)(B).
Applicant argues: (see p. 12-13) Even assuming, arguendo, independent claim 1 recites a judicial exception, the claimed subject-matter is integrated into a practical application… In other words, the claimed subject matter provides improvements over other manufacturing engineering, artificial intelligence, and machine learning technologies, because: a potential defect can be predicted in consideration of both process defect data and defect data according to a use by a user. Also, a prediction model is not fixed, but a prediction model can be modified in consideration of the characteristics of defect data for the respective production steps. In addition, accuracy and reliability of prediction can be improved by using different types of learning network models…
Examiner answers: As explained above, the limitations "update the first learning network model and the second learning network model based on an accumulated amount of integrated learning data and a period of a processing step, and" and "change the first weight applied to the first learning network model, the second weight applied to the second learning network model, and the third weight applied to the third learning network model, based on the accumulated amount of the integrated learning data and the period of the processing step" are mental processes. A limitation that is itself the judicial exception cannot supply the asserted improvement. See MPEP 2106.05(a): "It is important to note, the judicial exception alone cannot provide the improvement."
Applicant's arguments with respect to the rejection of the claims under 35 U.S.C. 103 have been fully considered but they are moot:
Applicant argues: (see p. 16) Applicant respectfully submits that Feurer is silent as to "update the first learning network model and the second learning network model based on an accumulated amount of integrated learning data and a period of a processing step" as claimed… Feurer is silent as to "change the first weight applied to the first learning network model, the second weight applied to the second learning network model, and the third weight applied to the third learning network model, based on the accumulated amount of the integrated learning data and the period of the processing step" as claimed.
Examiner answers: The arguments do not apply to the reference (Wistuba) being used in the current rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SU-TING CHUANG whose telephone number is (408)918-7519. The examiner can normally be reached Monday - Thursday 8-5 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Usmaan Saeed can be reached at (571) 272-4046. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SU-TING CHUANG/Examiner, Art Unit 2146