DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submissions filed on 07/21/2025 (amendment) and 08/25/2025 (RCE) have been entered.
Response to Arguments
Applicant’s arguments, see REMARKS pages 9-12, filed 07/21/2025, regarding the 35 U.S.C. § 103 rejection of claims 1-20 have been considered but are moot in view of the new grounds of rejection set forth below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9, 11, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over BIGAJ (US20190130303A1) in view of Lin (US 20120284213 A1), further in view of Maughan (US20170330109A1), further in view of Schner (US10839301B1), further in view of Calmon (US20190287026A1), and further in view of HEWAGE (US20210365114A1).
Regarding claim 1, BIGAJ teaches determining, by the one or more processors, an original quality evaluation value for the trained original machine learning model using a first set of feedback data ([0020-0021] The term 'model quality metric' may denote -in the mathematical sense- a distance function between real, measured values and values generated out of a model comprising a plurality of parameters. The metric may, e.g., be related to an accuracy of the method if compared to really measured values or to an error rate. Other model quality metrics may be possible. The term 'model quality value' may denote an individually measured or experienced value. The value may relate to the model quality metric).
in response to determining that the quality evaluation value is below a quality threshold value, triggering, by the one or more processors, a retraining process for the original machine learning model, the retraining process comprising a first retraining phase for a first machine learning model ([0029-0030] Basically, the decision about a retraining for machine learning model may be made completely autonomous. No human intervention may be required for determining when a retraining of a machine learning model should be performed. The required threshold level(s) may constantly be evaluated based on real data from a machine learning production environment. Advantageously, different model quality metrics may be used as part of the proposed method and system. Thus, a quality of the machine learning model may be evaluated under different aspects, i.e., on the different metrics. Depending on which metric the model quality assessment may be performed, the threshold value for a decision about a required retraining may be adjusted automatically and in line with an enlarged training data set, reflecting the original training data as well as the feedback data from the production system).
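For illustration only, the threshold-based retraining trigger described in BIGAJ [0029]-[0030] and [0055] can be sketched as follows. All names are hypothetical; BIGAJ discloses no code, and the sketch merely assumes the disclosed distinction between accuracy-style and error-style metrics:

```python
# Illustrative sketch of BIGAJ's autonomous retraining decision:
# a retraining is triggered when the measured model quality value
# crosses the threshold, with the direction of the comparison
# depending on whether the metric measures accuracy or error.

def needs_retraining(quality_value: float, threshold: float,
                     metric_is_error: bool = False) -> bool:
    """Return True when the model quality crosses the threshold.

    For an accuracy-style metric, a value below the threshold triggers
    retraining; for an error-style metric, a value above it does.
    """
    if metric_is_error:
        return quality_value > threshold
    return quality_value < threshold
```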
performed by the first service provider ([0057] Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 500 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like. The examiner interprets a distributed cloud computing environment to be the claimed service provider).
However, BIGAJ is not relied upon to explicitly teach receiving, by one or more processors, a trained original machine learning model from a first service provider, wherein the first service provider initially performed the training and setup on the trained original machine learning model, including related parameters and a set of training data with which the trained original machine learning model has been trained. BIGAJ is also not relied upon to explicitly teach a second retraining phase for a second machine learning model. BIGAJ is also not relied upon to explicitly teach wherein the first retraining phase does not include retraining cross-validation values for the original machine learning model. BIGAJ is also not relied upon to explicitly teach the second retraining phase uses different training folds, from the set of training data, than the first retraining phase, wherein all records are used from a second validation fold. BIGAJ is also not relied upon to explicitly teach wherein all the records are, at least, a part of the original set of training data and the first set of feedback data. BIGAJ is also not relied upon to explicitly teach the retraining process is performed by the first service provider to ensure consistency. BIGAJ is also not relied upon to explicitly teach storing a first set of parameters associated with the first training machine learning model for a future retraining process, wherein the first set of parameters are generated as a result of the retraining process and include, at least, hyper-parameters describing weights and activation functions of a neural network.
Furthermore, Calmon teaches receiving, by one or more processors, a trained original machine learning model from a first service provider, wherein the first service provider initially performed the training and setup on the trained original machine learning model, including related parameters and a set of training data with which the trained original machine learning model has been trained ([0005] receiving a learning model that is generated by the learning service provider based on the initial training data set, executing the received learning model using the initial test data set as input to verify whether the learning model satisfies a predefined performance threshold, and in response to verifying the learning model satisfies the predefined performance threshold, outputting information about the verification to computing node that is associated with the data provider of the data set. The examiner notes that BIGAJ and Calmon are both considered to be reasonably analogous because they are in the field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ’s model training to incorporate receiving, by one or more processors, a trained original machine learning model from a first service provider, wherein the first service provider initially performed the training and setup on the trained original machine learning model, including related parameters and a set of training data with which the trained original machine learning model has been trained as taught by Calmon [0005] to ensure fairness and accountability during a mutually exchanged learning process through interaction with an immutable ledger (such as a blockchain) [0001]).
Furthermore, Lin teaches a second retraining phase for a second machine learning model ([0106] For example, an updateable predictive model that has undergone a third iteration of updating with a third new training data set (i.e., was retrained with the third new training data set) is associated with an accuracy score that was determined using the updateable predictive model after having been retrained with the second new training data set. The examiner notes that Lin teaches retraining a model multiple times with new training datasets. The examiner also noted that BIGAJ and Lin are both considered to be analogous because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ’s model retraining to incorporate a second retraining phase for a second machine learning model as taught by Lin [0106] to correct any model drift away from the initial training [0147]).
Furthermore, Maughan teaches wherein the first retraining phase does not include retraining cross-validation values for the original machine learning model ([0076] In one embodiment, the retrain module 302 retrains a model using new training data obtained from a user. The examiner notes that Maughan teaches retraining the model using new data and replacing the model [0075] or creating a new model [0055]. The examiner further notes that BIGAJ and Maughan are both considered to be analogous because they are in the same field of neural networks. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ’s model retraining to incorporate wherein the first retraining phase does not include retraining cross-validation values for the original machine learning model as taught by Maughan [0076] to correct or attempt to correct any data drift detected [0076]).
Furthermore, Lin teaches the second retraining phase uses different training folds, from the set of training data, than the first retraining phase, wherein all records are used from a second validation fold ([0106] For example, an updateable predictive model that has undergone a third iteration of updating with a third new training data set (i.e., was retrained with the third new training data set) is associated with an accuracy score that was determined using the updateable predictive model after having been retrained with the second new training data set. The examiner notes that Lin teaches retraining a model multiple times with new training datasets. Furthermore, the examiner interprets the new training datasets as taught by Lin to be the claimed second validation fold. The examiner also noted that BIGAJ and Lin are both considered to be analogous because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ’s model retraining to incorporate the second retraining phase uses different training folds, from the set of training data, than the first retraining phase, wherein all records are used from a second validation fold as taught by Lin [0106] to correct any model drift away from the initial training [0147]).
Furthermore, Schner teaches wherein all the records are, at least, a part of the original set of training data and the first set of feedback data ([Col. 6, Line 11-16] In some examples, once the initial model is constructed, feedback from users may be utilized to refine the model by including new examples labelled by customer feedback into the training data, and retraining the model. The model is then trained based upon these examples. The examiner notes that Schner teaches retraining a model with retraining datasets that are made up of training data and feedback data. The examiner also notes that BIGAJ and Schner are both considered to be analogous because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ’s model retraining to incorporate wherein all the records are, at least, a part of the original set of training data and the first set of feedback data as taught by Schner [Col. 6, Line 11-16] to refine the model using feedback data [Col. 6, Line 12-14]).
Furthermore, Schner teaches the retraining process is performed by the first service provider to ensure consistency ([Col. 10, Line 40-46] Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), other computer cluster configurations. The examiner interprets “any methodologies discussed herein” to include retraining the model and “cloud computing or software as a service (SaaS)” to be provided by a service provider. The examiner further interprets “collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein” to ensure consistency and uninterrupted service. The examiner also notes that BIGAJ and Schner are both considered to be analogous because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ’s model retraining to incorporate the retraining process is performed by the first service provider to ensure consistency as taught by Schner [Col. 10, Line 40-46] to allow a collection of machines to jointly execute any task(s) [Col. 10, Line 42-44]).
Furthermore, HEWAGE teaches storing a first set of parameters associated with the first training machine learning model for a future retraining process, wherein the first set of parameters are generated as a result of the retraining process and include, at least, hyper-parameters describing weights and activation functions of a neural network ([0581] a neural calibration data component 616b for storing and/or retrieving any necessary calibration/retraining data and/or network parameters that may be required by machine learning component 618 and/or device(s)/apparatus 108a-108p in relation to calibrating/retraining or performing continuous learning/tracking of one or more of said device(s) 108a-108p with a neural interface. The examiner notes that HEWAGE teaches using a neural calibration data component that is used to store parameters necessary for training/re-training of a machine learning model. The examiner also notes that BIGAJ and HEWAGE are both considered to be reasonably analogous because they are in the same field of machine learning. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ’s model retraining to incorporate storing a first set of parameters associated with the first training machine learning model for a future retraining process, wherein the first set of parameters are generated as a result of the retraining process and include, at least, hyper-parameters describing weights and activation functions of a neural network as taught by HEWAGE [0581] to lower the learning rate for new training neural stimulus datasets [0575]).
Regarding claim 2, BIGAJ teaches wherein the first retraining phase further comprises: performing, by the one or more processors, a first k-fold cross-validation of the trained original machine learning model using the original set of training data and the first set of feedback data, wherein, from a first validation fold of the first k-fold cross-validation, skipping records that originate from said set of training data ([0048-0050] In a next step, 206, the machine learning model is deployed into production and feedback data are gathered, 206. The gathered feedback data are then combined with the original training data building a larger group of training data, i.e., the enlarged training data set. It may also be possible to only use the gathered feedback data as enlarged training data set in leaving out the original training data. A determination which new training data set to be used may be performed automatically, e.g., based on the number of gathered feedback data, or by a manual process. If a predefined number of gathered feedback data may become available within a predefined period of time, an automatic determination may be performed to only use the gathered feedback data. Based on this enlarged training data set a model evaluation is performed, 208, as scheduled, i.e., in regular time periods, after a predetermined amount of time or, after a predefined number of gathered feedback data has been collected. If the evaluation result is above the originally defined threshold value, case "N" of determination 210, no retraining is triggered. However, if the evaluation result is below the originally defined threshold value, case "Y", a retraining is triggered 212. The retraining is done using, 214, the k-fold cross-validation method. The examiner notes that BIGAJ teaches combining training data and feedback data to create new training data to be used to retrain a machine learning model using k-fold cross-validation.
The examiner also notes that BIGAJ teaches selecting to use or not to use the original training data as part of the new retraining data).
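For illustration only, the claim-2 behavior discussed above (a k-fold split over the combined records in which records originating from the original training set are skipped in the validation fold) can be sketched as follows. The function name, the record representation as `(value, is_feedback)` pairs, and the fold-building scheme are all hypothetical:

```python
# Hypothetical sketch: k-fold splits over the enlarged data set in
# which the validation fold keeps only feedback records, i.e. records
# originating from the original training data are skipped.

def kfold_skip_original(records, k=3):
    """Yield (train, validation) splits over the combined records.

    Each record is a (value, is_feedback) pair; the validation part of
    each split retains only feedback records.
    """
    folds = [records[i::k] for i in range(k)]  # simple round-robin split
    for i in range(k):
        validation = [r for r in folds[i] if r[1]]  # feedback only
        train = [r for j, fold in enumerate(folds) if j != i for r in fold]
        yield train, validation
```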
Regarding claim 3, BIGAJ teaches wherein the second retraining phase further comprises: performing, by the one or more processors, a second k-fold cross-validation of said trained original machine learning model using the original set of training data, the first set of feedback data, and a second set of feedback data, wherein the second k-fold cross-validation utilizes all records from a second validation fold ([0048-0050] In a next step, 206, the machine learning model is deployed into production and feedback data are gathered, 206. The gathered feedback data are then combined with the original training data building a larger group of training data, i.e., the enlarged training data set. It may also be possible to only use the gathered feedback data as enlarged training data set in leaving out the original training data. A determination which new training data set to be used may be performed automatically, e.g., based on the number of gathered feedback data, or by a manual process. If a predefined number of gathered feedback data may become available within a predefined period of time, an automatic determination may be performed to only use the gathered feedback data. Based on this enlarged training data set a model evaluation is performed, 208, as scheduled, i.e., in regular time periods, after a predetermined amount of time or, after a predefined number of gathered feedback data has been collected. If the evaluation result is above the originally defined threshold value, case "N" of determination 210, no retraining is triggered. However, if the evaluation result is below the originally defined threshold value, case "Y", a retraining is triggered 212. The retraining is done using, 214, the k-fold cross-validation method. The examiner notes that BIGAJ teaches combining training data and feedback data to create new training data to be used to retrain a machine learning model using k-fold cross-validation.
The examiner also notes that BIGAJ teaches [0032] that the dataset is constantly growing as new folds of feedback data are added to the existing training data that were used in prior k-fold cross-validation retraining rounds which would include the prior round’s training data and feedback data. The examiner also notes that BIGAJ [0024] teaches that the model goes through multiple rounds of k-fold cross-validation based continuous retraining).
Regarding claim 4, BIGAJ teaches wherein a third retraining phase and subsequent retraining phases are treated equally to the second retraining phase ([0048-0050] In a next step, 206, the machine learning model is deployed into production and feedback data are gathered, 206. The gathered feedback data are then combined with the original training data building a larger group of training data, i.e., the enlarged training data set. It may also be possible to only use the gathered feedback data as enlarged training data set in leaving out the original training data. A determination which new training data set to be used may be performed automatically, e.g., based on the number of gathered feedback data, or by a manual process. If a predefined number of gathered feedback data may become available within a predefined period of time, an automatic determination may be performed to only use the gathered feedback data. Based on this enlarged training data set a model evaluation is performed, 208, as scheduled, i.e., in regular time periods, after a predetermined amount of time or, after a predefined number of gathered feedback data has been collected. If the evaluation result is above the originally defined threshold value, case "N" of determination 210, no retraining is triggered. However, if the evaluation result is below the originally defined threshold value, case "Y", a retraining is triggered 212. The retraining is done using, 214, the k-fold cross-validation method. The examiner notes that BIGAJ teaches combining training data and feedback data to create new training data to be used to retrain a machine learning model using k-fold cross-validation. The examiner also notes that BIGAJ teaches [0032] that the dataset is constantly growing as new folds of feedback data are added to the existing training data that were used in prior k-fold cross-validation retraining rounds which would include the prior round’s training data and feedback data.
The examiner also notes that BIGAJ [0024] and [0055] teaches that the model goes through multiple rounds of k-fold cross-validation based continuous retraining).
Regarding claim 5, BIGAJ teaches wherein the first retraining phase further comprises: building, by the one or more processors, k folds of a mixture of the original set of training data and the first set of feedback data such that, in each of the k folds, at least one feedback record from the first set of feedback data is present ([0036] According to a permissive embodiment of the method, the number of folds may be 3. This may be a default value and other fold numbers may be possible. However, using 3 folds allow for a good average building and the number of feedback data sets per fold may be high enough to split between learning data (about 70% to 80% per fold) and confirming data (about 20% to 30% per fold)).
retraining, by the one or more processors, the original machine learning model using built k folds thereby generating a corresponding set of first machine learning models, wherein the corresponding set of first machine learning models corresponds to another one of the k folds used as retraining data ([0055] Depending on the metric type, the threshold value is then set either to UCL or LCL: If the metric is directed to an error, the threshold value is set to LCL; if, on the other side, the metric is directed to a model correctness (i.e., accuracy) the threshold value is set to UCL. As explained above, the threshold value is then used as a trigger level for a model retraining. If a current model evaluation metric value is below the threshold value, a model retraining is triggered. Also, the newly trained model is evaluated and the loop starts all over again. The examiner notes that BIGAJ teaches looping through iterations of retraining a model based on k-fold cross-validation based on a metric quality measurement and each retraining iteration creating a newly retrained model based on a new retraining dataset. The examiner considers the newly retrained models and the retrained data used to retrain each one of them to be the claimed set of first machine learning models corresponding to another one of the k folds used as retraining data).
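For illustration only, the claim-5 fold-building step (k folds of the mixed training and feedback data, with at least one feedback record per fold) can be sketched as follows. The round-robin allocation and all names are hypothetical assumptions, not disclosed by BIGAJ:

```python
# Minimal sketch: build k folds from the mixture of original training
# data and feedback data so that every fold contains at least one
# feedback record, by dealing feedback records round-robin first.

def build_folds(training, feedback, k=3):
    if len(feedback) < k:
        raise ValueError("need at least one feedback record per fold")
    folds = [[] for _ in range(k)]
    for i, rec in enumerate(feedback):   # guarantees one per fold
        folds[i % k].append(rec)
    for i, rec in enumerate(training):   # then spread training data
        folds[i % k].append(rec)
    return folds
```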
Regarding claim 6, BIGAJ teaches wherein the first retraining phase further comprises: determining, by the one or more processors, a set of first partial quality evaluation values, wherein each instance within the set of first partial quality evaluation values corresponds to a respective instance within the set of first machine learning models ([0030] Advantageously, different model quality metrics may be used as part of the proposed method and system. Thus, a quality of the machine learning model may be evaluated under different aspects, i.e., on the different metrics. Depending on which metric the model quality assessment may be performed, the threshold value for a decision about a required retraining may be adjusted automatically and in line with an enlarged training data set, reflecting the original training data as well as the feedback data from the production system. The examiner notes that BIGAJ teaches calculating and evaluating a model quality metric after each retraining iteration to determine if the model needs further retraining or not as shown in Fig. 1 and Fig. 2).
Regarding claim 7, BIGAJ teaches wherein the first retraining phase further comprises: determining, by the one or more processors, a first quality evaluation value as an average value of the first partial quality evaluation values ([0026] In summary, cross-validation combines (averages) measures of fit (prediction error) to derive a more accurate estimate of model prediction performance).
Regarding claim 8, BIGAJ teaches wherein the second retraining phase further comprises: expanding, by the one or more processors, the k folds by at least one record of a second set of feedback data ([0036] According to a permissive embodiment of the method, the number of folds may be 3. This may be a default value and other fold numbers may be possible. However, using 3 folds allow for a good average building and the number of feedback data sets per fold may be high enough to split between learning data (about 70% to 80% per fold) and confirming data (about 20% to 30% per fold). The examiner notes that BIGAJ teaches [0032] that the dataset is constantly growing as new folds of feedback data are added to the existing training data that were used in prior k-fold cross-validation retraining rounds which would include the prior round’s training data and feedback data. The examiner also notes that BIGAJ [0024] teaches that the model goes through multiple rounds of k-fold cross-validation based continuous retraining).
retraining, by the one or more processors, each of the first set of machine learning models using the expanded set of k folds, thereby generating a corresponding set of second machine learning models each of which corresponds to another one of the k folds used as retraining data ([0055] Depending on the metric type, the threshold value is then set either to UCL or LCL: If the metric is directed to an error, the threshold value is set to LCL; if, on the other side, the metric is directed to a model correctness (i.e., accuracy) the threshold value is set to UCL. As explained above, the threshold value is then used as a trigger level for a model retraining. If a current model evaluation metric value is below the threshold value, a model retraining is triggered. Also, the newly trained model is evaluated and the loop starts all over again. The examiner notes that BIGAJ teaches looping through iterations of retraining a model based on k-fold cross-validation based on a metric quality measurement and each retraining iteration creating a newly retrained model based on a new retraining dataset. The examiner considers the newly retrained models and the retraining data used to retrain each of the models to be the claimed set of second machine learning models corresponding to another one of the k folds used as retraining data, or the third, or the fourth, or the fifth, and so on set depending on which training iteration is involved).
Regarding claim 9, BIGAJ teaches wherein the second retraining phase further comprises: determining, by the one or more processors, a set of second partial quality evaluation values, each instance within the set of second partial quality evaluation values corresponds to a respective instance within the set of second machine learning models ([0030] Advantageously, different model quality metrics may be used as part of the proposed method and system. Thus, a quality of the machine learning model may be evaluated under different aspects, i.e., on the different metrics. Depending on which metric the model quality assessment may be performed, the threshold value for a decision about a required retraining may be adjusted automatically and in line with an enlarged training data set, reflecting the original training data as well as the feedback data from the production system. The examiner notes that BIGAJ teaches calculating and evaluating a model quality metric after each retraining iteration to determine if the model needs further retraining or not as shown in Fig. 1 and Fig. 2).
determining, by the one or more processors, a second quality evaluation value as an average value of the second partial quality evaluation values ([0026] In summary, cross-validation combines (averages) measures of fit (prediction error) to derive a more accurate estimate of model prediction performance. The examiner notes that BIGAJ teaches calculating an average quality metric after each round of retraining for all folds of the k-fold cross-validation continuous retraining phase).
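For illustration only, the averaging step recited in claims 7 and 9 (a phase's quality evaluation value taken as the average of its per-fold partial quality values, consistent with BIGAJ [0026]) can be sketched as follows. The function name is hypothetical:

```python
# Sketch of the claimed averaging: the phase's quality evaluation
# value is the mean of the per-fold partial quality evaluation values
# produced by the k-fold cross-validation.
from statistics import mean

def quality_evaluation_value(partial_values):
    """Average the partial quality values of the k retrained models."""
    return mean(partial_values)
```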
Regarding claim 11, BIGAJ teaches wherein said machine learning models are selected from the group consisting of: a multiclass classifier, a binary classifier, and a regression algorithm unit ([0042] According to an additionally advantageous embodiment of the method, the machine learning model may be selected out of the group comprising a classification model or algorithm-in particular support vector machine or a regression method or algorithm-in particular linear regression).
Claims 14-16 are rejected based upon the same rationale as the rejection of claims 1-3 since they are the non-transitory computer-readable storage medium claims corresponding to the method claims.
Claims 17-19 are rejected based upon the same rationale as the rejection of claims 1-3 since they are the system claims corresponding to the method claims.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over BIGAJ (US20190130303A1) in view of Lin (US 20120284213 A1), further in view of Maughan (US20170330109A1), further in view of Refaeilzadeh (Cross-Validation; Encyclopedia of Database Systems, pp. 532-538, 2009).
Regarding claim 10, BIGAJ teaches The method according to claim 9. However, BIGAJ is not relied upon to teach in response to determining that the second quality evaluation value is better than said first quality evaluation value, deploying, by the one or more processors, the second machine learning model in place of the original machine learning model.
On the other hand, Refaeilzadeh teaches in response to determining that the second quality evaluation value is better than said first quality evaluation value, deploying, by the one or more processors, the second machine learning model in place of the original machine learning model ([Page 536, Right Col., Section: Model Selection] Alternatively cross-validation may be used to compare a pair of learning algorithms. This may be done in the case of newly developed learning algorithms, in which case the designer may wish to compare the performance of the classifier with some existing baseline classifier on some benchmark dataset, or it may be done in a generalized model-selection setting. In generalized model selection one has a large library of learning algorithms or classifiers to choose from and wish to select the model that will perform best for a particular dataset. In either case the basic unit of work is pair-wise comparison of learning algorithms. The examiner notes that Refaeilzadeh teaches the selection of a model in response to a pair-wise comparison of the performance of two or more models. The examiner interprets the model selection by a model designer as taught by Refaeilzadeh to be the claimed model deployment. The examiner also notes that BIGAJ and Refaeilzadeh are considered to be analogous because they are in the same field of machine learning cross-validation based training.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ's model training and selection method to incorporate, in response to determining that the second quality evaluation value is better than said first quality evaluation value, deploying, by the one or more processors, the second machine learning model in place of the original machine learning model, as taught by Refaeilzadeh [Page 536, Right Col., Section: Model Selection], in order to compare multiple learning algorithms and select the better-performing model.
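For illustration only (not part of the record), the mechanism Refaeilzadeh describes can be sketched in a few lines of Python: two candidate learning algorithms are each scored by k-fold cross-validation, and the second model is "deployed" in place of the first only when its evaluation value is better. The models (a mean predictor and a median predictor), the data, and all names below are hypothetical stand-ins, not the claimed or cited systems.

```python
# Hypothetical sketch: pair-wise comparison of two learning algorithms via
# k-fold cross-validation, deploying whichever scores better. All names and
# data are illustrative only; stdlib Python, no external libraries.
from statistics import mean, median

def k_fold_score(fit, ys, k=5):
    """Average squared error of the algorithm `fit` across k held-out folds."""
    n = len(ys)
    fold = n // k
    errors = []
    for i in range(k):
        lo, hi = i * fold, ((i + 1) * fold if i < k - 1 else n)
        model = fit(ys[:lo] + ys[hi:])          # train on the other k-1 folds
        errors += [(model - y) ** 2 for y in ys[lo:hi]]  # test on held-out fold
    return mean(errors)

# Two toy "learning algorithms": predict the training mean vs. the median.
ys = [1.0, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 40.0, 2.0]

first_score = k_fold_score(mean, ys)     # "first quality evaluation value"
second_score = k_fold_score(median, ys)  # "second quality evaluation value"

# Deploy the second model only if its evaluation value is better (lower error).
deployed = median if second_score < first_score else mean
```

With the outlier at 40.0, the median predictor yields the lower cross-validated error here, so it is the model selected (i.e., "deployed") by the pair-wise comparison.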
Claims 12-13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over BIGAJ (US20190130303A1) in view of Lin (US 20120284213 A1), further in view of Maughan (US20170330109A1), further in view of Feng (US20190242733A1).
Regarding claim 12, BIGAJ teaches The method according to claim 1. However, BIGAJ is not relied upon to teach wherein said machine learning models are neural networks.
However, Feng teaches wherein said machine learning models are neural networks ([0067] A machine-learning algorithm, may be used to classify the 360-degree data from 100 experiments using one or more machine learning methods including a Monte Carlo cross-validation method, k-Nearest Neighbor method, Support Vector Machine method, Random Forests method, and any Deep Learning methods such as Artificial Neural Network and Convolutional Neural Network.). The examiner notes that BIGAJ and Feng are considered to be analogous art because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ's machine learning model to incorporate wherein said machine learning models are neural networks, as taught by Feng [0067], to help the system make intelligent decisions with limited human assistance.
Regarding claim 13, BIGAJ teaches The method according to claim 1. However, BIGAJ is not relied upon to teach wherein said machine learning models are convolutional neural networks.
However, Feng teaches wherein said machine learning models are convolutional neural networks ([0067] A machine-learning algorithm, may be used to classify the 360-degree data from 100 experiments using one or more machine learning methods including a Monte Carlo cross-validation method, k-Nearest Neighbor method, Support Vector Machine method, Random Forests method, and any Deep Learning methods such as Artificial Neural Network and Convolutional Neural Network.). The examiner notes that BIGAJ and Feng are considered to be analogous art because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified BIGAJ's machine learning model to incorporate wherein said machine learning models are convolutional neural networks, as taught by Feng [0067], to build a network architecture that is efficient at deep learning, data recognition, and classification.
Claim 20 is rejected based upon the same rationale as the rejection of claim 12 since it is the system claim corresponding to the method claim.
Conclusion
The following references have been determined to be related to the application but were not applied in any specific rejection. They are nonetheless listed below for reference.
FAN (US 2008/0195577 A1)
“FAN teaches a method for automatically and adaptively determining query execution plans for parametric queries”
ZENG (US 2021/0178432 A1)
“ZENG teaches a trash sorting and recycling method”
Fitz (US20160302014A1)
“Fitz teaches neural network-driven frequency translation for hearing assistance devices”
Kiraly (US20180129900A1)
“Kiraly teaches a machine-learnt classifier for more anonymous data transfer”
Cheng (US 10,929,392 B1)
“Cheng teaches machine learning techniques for generating realistic question-answer (QA) pairs for populating an initial community ask feature of electronic store item detail pages”
Zhao (US 2019/0354261 A1)
“Zhao teaches a method for creating a visual representation of data”
Xu (US 2013/0050503 A1)
“Xu teaches a mean observer score prediction using a trained semi-supervised learning regressor”
Shriver (US 2016/0223506 A1)
“Shriver teaches a method for monitoring crop health of a geographic region”
Tan (US20200251100A1)
“Tan teaches training a model on data that corresponds to a first domain and retraining the model on data that corresponds to a different domain”
Dempsey (US20020147754A1)
“Dempsey teaches training a classifier on a data vector and retraining the model on a second data vector”
Szanto (US 2020/0286002 A1)
“Szanto teaches dynamically retraining a machine learning model”
Cricri (US 2019/0311259 A1)
“Cricri teaches re-training a neural network on portions of content data”
Vanwinckelen - On Estimating Model Accuracy with Repeated Cross-Validation – 2013
“Vanwinckelen argues against repeated cross-validation in certain cases.”
Yang - An Ensemble Extreme Learning Machine for Data Stream Classification – 2018
“Yang teaches an ensemble extreme learning machine for fast speed classification”
Zarei - Retraining Mechanism for On-Line Peer-to-Peer Traffic Classification – 2013
“Zarei teaches enhancing ML classification through training quality and recency”
Brownlee - A Gentle Introduction to k-fold Cross-Validation – 2018
“Brownlee teaches the uses and benefits of k-fold cross-validation”
Thomas (US20200184380A1)
“Thomas teaches a method for iteratively repeating the training of a neural network model until a customer-defined constraint is met”
StackExchange “Training on the Full Dataset after Cross-Validation?” Cross Validated, 5 June 2011, https://stats.stackexchange.com/questions/11602/training-on-the-full-dataset-after-cross-validation.
“Teaches cross-validation being used for model selection, specifically using cross-validation to estimate the performance of a model-training method”
Gupta, Prashant. “Cross-Validation in Machine Learning.” Medium, Towards Data Science, 5 June 2017, https://towardsdatascience.com/cross-validation-in-machine-learning-72924a69872f.
“Gupta teaches different cross-validation methods including k-fold cross validation”
Zhu, Ruijin, Weilin Guo, and Xuejiao Gong. "Short-term photovoltaic power output prediction based on k-fold cross-validation and an ensemble model." Energies 12.7 (2019): 1220.
“Zhu teaches the use of an ensemble model with k-fold cross validation used to train and determine the performance of the submodels. The prediction results of the submodels are merged”
Deshpande (US20150178638A1)
“Deshpande teaches dynamically retraining a predictive model using live training data that is different from the training data that was used to train the original model”
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAMCY ALGHAZZY whose telephone number is (571)272-8824. The examiner can normally be reached Monday-Friday between 9AM and 6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, OMAR FERNANDEZ RIVAS can be reached on (571)272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHAMCY ALGHAZZY/Examiner, Art Unit 2128
/OMAR F FERNANDEZ RIVAS/Supervisory Patent Examiner, Art Unit 2128