Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/26/2025 has been entered.
Claim Objections
Claims 12-15 are objected to because of the following informalities: each of claims 12-15 recites the limitation "non-volatile computer program product" in line 1. There is insufficient antecedent basis for this limitation in the claim. It is suggested to amend this to "the computer program product". Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Independent claims 1 and 6:
Claim 1 is drawn to "a method" and claim 6 is drawn to "a system"; therefore, each of these claims falls under one of the four categories of statutory subject matter (processes/methods, machines/apparatus, manufactures, and compositions of matter).
Step 2A, Prong 1:
Claims 1 and 6 are directed to a judicially recognized exception, an abstract idea, without significantly more. Each of claims 1 and 6 recites the limitations "receiving an algorithm and a plurality of data sets within a secure computing node", "aggregating the plurality of data sets into a superset", "subdividing the superset into a first plurality of subsets", "training a first plurality of weak algorithms on each of a corresponding of the first plurality of subsets", "aggregating the first plurality of weak algorithms into a first strong algorithm", "generating a gold standard inversion model", "dividing the superset into a sensitive subset of data and a non-sensitive subset of data", "generating an actual risk in terms of inputs needed to generate an inference", and "improving the exfiltration performance of the first strong algorithm by adding noise to the superset responsive to when the actual risk is above a risk threshold", which, under their broadest reasonable interpretation, enumerate a mental evaluation and a mathematical concept. Other than reciting generic computer components (claims 1 and 6), nothing in the claims precludes the steps from practically being performed in the human mind. For example, other than the "at least one processor" language, the claims encompass a user visually and manually aggregating the plurality of data sets into a superset, subdividing the superset into a first plurality of subsets, training a first plurality of weak algorithms on corresponding subsets, and aggregating the first plurality of weak algorithms into a strong algorithm; these steps are nothing more than abstract mental and mathematical concepts (see MPEP 2106.04(a)(2)(I) and (III)).
Step 2A, Prong 2:
Claim 1 does not recite any additional elements or steps that would integrate the abstract idea into a practical application. However, claim 6 recites the additional element "one or more computer programs" to execute the computer program instructions (claim 6). The computer readable storage media and the computer processor are recited at a high level of generality (i.e., as generic computer components performing generic computer functions to store and to process data, respectively). These generic computer functions are no more than mere instructions to apply the exception using generic computer components. The combination of these additional elements does not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea (MPEP 2106.05(f)).
Step 2B:
The additional elements "one or more computer readable storage media" to store computer program instructions and "one or more computer processors" to execute the computer program instructions are no more than generic, off-the-shelf computer components, and the Symantec, TLI, OIP Techs., and Versata court decisions cited in MPEP 2106.05(d)(II) indicate that mere collection/receipt of data over a network and/or storing and retrieving information in memory are well-understood, routine, and conventional functions when claimed in a merely generic manner (see MPEP 2106.05(d)(II)). As such, claims 1 and 6 are not patent eligible.
Dependent claims 2-5 and 7-10:
Step 1:
Claims 2-5 are drawn to "a method" and claims 7-10 are drawn to "a system"; therefore, each of these claims falls under one of the four categories of statutory subject matter (processes/methods, machines/apparatus, manufactures, and compositions of matter).
Steps 2A-2B:
Dependent claims 2-5 and 7-10 are also ineligible for the same reasons given with respect to claims 1 and 6. Claims 2-5 and 7-10 recite further abstract ideas and mathematical concepts of subdividing the superset into a second plurality of subsets, training a second plurality of weak algorithms on corresponding subsets of the second plurality of subsets, and aggregating the first and second pluralities of weak algorithms into a strong algorithm (MPEP 2106.04(a)(2)(I)). Claims 2-5 and 7-10 fail to recite any additional elements or steps that might integrate the abstract idea into a practical application. As such, claims 2-5 and 7-10 are not patent eligible.
In addition, claim 6 does not fall within at least one of the four categories of patent eligible subject matter because the claim does not include at least one hardware element in its body, as required by MPEP 2106(I). Claim 6 is directed to a system and recites a datastore and a runtime server; however, since the specification (see paragraph [0012] of the publication of the instant application) does not limit the datastore and runtime server to hardware only, these terms, broadly interpreted, encompass software (e.g., a file system and services).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-2, 6-7, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Sakamoto et al. (2021/0303937) in view of Dai et al. (CN 108093406), Koo et al. (2024/0211599), and Eloul et al. (2023/0229786).
For claim 1, Sakamoto teaches a computerized method of model overfitting reduction within a computing environment (abstract, lines 1-2), the method comprising: receiving an algorithm and a plurality of data sets within a secure computing node (Sakamoto teaches one or more computer processors generating a plurality of hyperparameter sets, wherein each hyperparameter set in the plurality of hyperparameter sets contains one or more hyperparameters varied to increase over-training in one or more models, as Sakamoto teaches in par. 05, lines 2-5); aggregating the plurality of data sets into a superset (Sakamoto teaches aggregating multiple predictions from the ensemble of weak models, which generate a plurality of hyperparameter sets, wherein each hyperparameter set in the plurality of hyperparameter sets contains one or more hyperparameters varied to increase over-training in one or more models, as Sakamoto teaches in par. 20, lines 1-5); subdividing the superset into a first plurality of subsets (Sakamoto teaches creating a plurality of weak models utilizing a created bootstrap dataset in a plurality of created bootstrap datasets, a corresponding extracted explanatory variable set, and a corresponding hyperparameter set in the generated plurality of hyperparameter sets, wherein each weak model in a created plurality of weak models shares at least the created bootstrap dataset, as Sakamoto teaches in par. 20, lines 5-12); training a first plurality of weak algorithms on each of a corresponding of the first plurality of subsets (Sakamoto teaches that program 150 created and trained 375 weak SVMs with 3 (i.e., M) bootstrap datasets (e.g., shuffled every trial), where each bootstrap dataset contains 5 (i.e., N) extracted explanatory variable sets and each explanatory variable set is associated with 25 (i.e., K) hyperparameter sets, as Sakamoto teaches in par. 29, lines 5-9).
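Purely as an illustrative sketch of the bagging flow mapped above (subdividing a superset into subsets, training one weak learner per subset, and aggregating the weak learners by majority vote), with all names hypothetical and no correspondence to any cited reference's actual implementation:

```python
def train_stump(data):
    # Weak learner: a one-feature decision stump that picks the threshold
    # best separating the 0/1 labels in its training subset.
    best_t, best_acc = None, -1.0
    for t in sorted({x for x, _ in data}):
        acc = sum((x >= t) == bool(y) for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x, t=best_t: int(x >= t)

def bagged_ensemble(superset, k=5):
    # Subdivide the superset into k subsets (strided partition), train one
    # weak learner per subset, then aggregate them by majority vote.
    subsets = [superset[i::k] for i in range(k)]
    stumps = [train_stump(s) for s in subsets]
    return lambda x: int(sum(st(x) for st in stumps) > k / 2)

# Toy superset: label is 1 exactly when x >= 10.
superset = [(x, int(x >= 10)) for x in range(20)]
model = bagged_ensemble(superset)
```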
Sakamoto fails to teach zero-trust computing; aggregating the first plurality of weak algorithms into a first strong algorithm; generating a gold standard inversion model using the superset; dividing the superset into a sensitive subset of data and a non-sensitive subset of data; generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm; and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold.
Koo teaches, in a similar system, zero-trust computing (par. 75) and dividing the superset into a sensitive subset of data and a non-sensitive subset of data (Koo teaches dividing data into different groups, one containing sensitive data and another containing non-sensitive data, as Koo teaches in par. 100). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto to include analysis in a zero-trust computing environment and dividing data, as taught and suggested by Koo, for the purpose of obtaining a sensitivity score for the asset based on the input feature and the type of the asset; obtaining, based on the plurality of activities, a malicious score and a data loss score for the asset; determining a user level of a user; and initiating implementation of a first DLP policy for the user based on the user level, the malicious score, the data loss score, and the sensitivity score (Koo, abstract). Sakamoto, as modified by Koo, does not explicitly teach aggregating the first plurality of weak algorithms into a strong algorithm; generating a gold standard inversion model using the superset; generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm; and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold.
Furthermore, Dai teaches, in a similar system, aggregating the first plurality of weak algorithms into a strong algorithm (Dai teaches using the Adaboost integrated learning algorithm to combine the trained weak classifiers to form a strong classifier, then using the trained strong classifier to test the actual data, as Dai teaches in the abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto, as modified by Koo, to include aggregating weak algorithms into a strong algorithm as taught and suggested by Dai for the purpose of improving the accuracy of intrusion attack detection in the wireless sensor network, reducing the cost of marked samples, reducing intrusion detection and training time, and enhancing the reliability of the intrusion detection system (Dai, abstract). Sakamoto, as modified by Koo and Dai, does not explicitly teach generating a gold standard inversion model using the superset; generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm; and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold.
Eloul teaches, in a similar system, generating a gold standard inversion model using the superset (Eloul teaches that the model parameters updated by each client actually contain information on the data, and that the model can be inverted back to the data itself via a relatively simple numerical procedure, as Eloul teaches in par. 33 and 40); generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm (Eloul teaches that a policy service may use statistical correlation, direct inversion, etc., to compare the input data to the output weights or their gradients; in one embodiment, the comparison may use a simple correlation threshold, and with inversion tests of the parameters this can be used to recover information about tested data using any metric related to measuring information recovery of the inversion results from the parameters, as Eloul teaches in par. 9 and 48); and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold (Eloul teaches that policy service 124 may further inject noise or other data into the output parameters to further obscure the original local data, and that if the federated machine learning model does not pass testing (e.g., if the recovery rate of the inversion of parameters is found to be above a certain threshold), in step 220 the local computer program may reject the federated machine learning model, as Eloul teaches in par. 41 and 46).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto, as modified by Koo and Dai, to include injecting noise into the data set as taught and suggested by Eloul for the purpose of determining the rate of successful attacks on a given model, which will help to develop policy for federated learning in order to validate or tune models against inversion of client data (Eloul, par. 7).
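Purely as an illustrative sketch of the mitigation step attributed to Eloul above (injecting noise into the training data only when the measured inversion risk exceeds a threshold), with all names and the risk metric hypothetical:

```python
import random

def add_noise_if_risky(superset, actual_risk, risk_threshold=0.5, scale=1.0, seed=0):
    # superset: list of (feature, label) pairs; actual_risk: measured inversion
    # risk in [0, 1]. At or below the threshold the data passes through
    # unchanged; above it, Gaussian noise is added to the features to obscure them.
    if actual_risk <= risk_threshold:
        return superset
    rng = random.Random(seed)
    return [(x + rng.gauss(0.0, scale), y) for x, y in superset]

data = [(1.0, 0), (2.0, 1)]
unchanged = add_noise_if_risky(data, actual_risk=0.2)
noisy = add_noise_if_risky(data, actual_risk=0.9)
```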
For claims 2, 7, and 12, Sakamoto, combined with Koo, Dai, and Eloul, further teaches subdividing the superset into a second plurality of subsets (Sakamoto teaches that program 150 creates a plurality of bootstrap datasets each containing 120 training statements from a training dataset that contains 1200 training statements, as Sakamoto discloses in par. 24, lines 5-10); training a second plurality of weak algorithms on each of a corresponding of the second plurality of subsets (Sakamoto teaches that program 150 partitions training data, associated information, and vectors, contained within corpus 124, into multiple training, testing, and validation sets; in this embodiment, program 150, dependent on the utilized training method (supervised vs. unsupervised), classifies, pairs, associates, and/or links said sets with one or more labels or output vectors, as Sakamoto discloses in par. 24); and aggregating the first and second pluralities of weak algorithms (Sakamoto teaches that program 150 inputs the testing data into each model in a set of models and utilizes the aggregated (i.e., integrated) prediction for each training statement in the testing data to formulate an aggregated prediction for the entire model set; in another embodiment, program 150 utilizes a calculated average of the predictions outputted by the set of models to determine an aggregate decision or majority of the tested set of models, as Sakamoto discloses in par. 31).
Sakamoto fails to teach aggregating the weak algorithms into a strong algorithm.
Dai further teaches aggregating the plurality of weak algorithms into a strong algorithm (Dai teaches using the Adaboost integrated learning algorithm to combine the trained weak classifiers to form a strong classifier, then using the trained strong classifier to test the actual data, as Dai teaches in the abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto to include this aggregation as taught and suggested by Dai for the purpose of improving the accuracy of intrusion attack detection in the wireless sensor network, reducing the cost of marked samples, reducing intrusion detection and training time, and enhancing the reliability of the intrusion detection system (Dai, abstract).
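Purely as an illustrative sketch of the Adaboost-style weak-to-strong aggregation attributed to Dai above (reweighting samples each round and combining weak classifiers with learned weights into a strong classifier), with all names hypothetical:

```python
import math

def adaboost(data, stumps, rounds=5):
    # data: list of (x, y) with labels y in {-1, +1}; stumps: candidate weak
    # classifiers mapping x -> -1/+1. Returns a weighted-vote strong classifier.
    n = len(data)
    w = [1.0 / n] * n
    chosen = []  # (alpha, stump) pairs
    for _ in range(rounds):
        # Select the weak classifier with the lowest weighted error.
        errs = [sum(wi for wi, (x, y) in zip(w, data) if s(x) != y) for s in stumps]
        err, best = min(zip(errs, stumps), key=lambda p: p[0])
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        chosen.append((alpha, best))
        # Boost the weights of misclassified samples, then renormalize.
        w = [wi * math.exp(-alpha * y * best(x)) for wi, (x, y) in zip(w, data)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * s(x) for a, s in chosen) >= 0 else -1

# Toy data: label is +1 exactly when x >= 5; stumps are threshold classifiers.
data = [(x, 1 if x >= 5 else -1) for x in range(10)]
stumps = [(lambda x, t=t: 1 if x >= t else -1) for t in range(10)]
strong = adaboost(data, stumps)
```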
For claim 6, Sakamoto teaches a computerized system of model overfitting reduction (abstract, lines 1-2) comprising: a datastore for receiving an algorithm and a plurality of data sets within a secure computing node (Sakamoto teaches one or more computer processors generating a plurality of hyperparameter sets, wherein each hyperparameter set in the plurality of hyperparameter sets contains one or more hyperparameters varied to increase over-training in one or more models, as Sakamoto teaches in par. 05, lines 2-5), aggregating the plurality of data sets into a superset (Sakamoto teaches aggregating multiple predictions from the ensemble of weak models, which generate a plurality of hyperparameter sets, wherein each hyperparameter set in the plurality of hyperparameter sets contains one or more hyperparameters varied to increase over-training in one or more models, as Sakamoto teaches in par. 20, lines 1-5), and subdividing the superset into a first plurality of subsets (Sakamoto teaches creating a plurality of weak models utilizing a created bootstrap dataset in a plurality of created bootstrap datasets, a corresponding extracted explanatory variable set, and a corresponding hyperparameter set in the generated plurality of hyperparameter sets, wherein each weak model in a created plurality of weak models shares at least the created bootstrap dataset, as Sakamoto teaches in par. 20, lines 5-12); and a runtime server for training a first plurality of weak algorithms on each of a corresponding of the first plurality of subsets (Sakamoto teaches that program 150 created and trained 375 weak SVMs with 3 (i.e., M) bootstrap datasets (e.g., shuffled every trial), where each bootstrap dataset contains 5 (i.e., N) extracted explanatory variable sets and each explanatory variable set is associated with 25 (i.e., K) hyperparameter sets, as Sakamoto teaches in par. 29, lines 5-9).
Sakamoto fails to teach zero-trust computing; aggregating the first plurality of weak algorithms into a first strong algorithm; generating a gold standard inversion model using the superset; dividing the superset into a sensitive subset of data and a non-sensitive subset of data; generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm; and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold.
Koo teaches, in a similar system, zero-trust computing (par. 75) and dividing the superset into a sensitive subset of data and a non-sensitive subset of data (Koo teaches dividing data into different groups, one containing sensitive data and another containing non-sensitive data, as Koo teaches in par. 100). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto to include analysis in a zero-trust computing environment and dividing data, as taught and suggested by Koo, for the purpose of obtaining a sensitivity score for the asset based on the input feature and the type of the asset; obtaining, based on the plurality of activities, a malicious score and a data loss score for the asset; determining a user level of a user; and initiating implementation of a first DLP policy for the user based on the user level, the malicious score, the data loss score, and the sensitivity score (Koo, abstract). Sakamoto, as modified by Koo, does not explicitly teach aggregating the first plurality of weak algorithms into a strong algorithm; generating a gold standard inversion model using the superset; generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm; and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold.
Furthermore, Dai teaches, in a similar system, aggregating the first plurality of weak algorithms into a strong algorithm (Dai teaches using the Adaboost integrated learning algorithm to combine the trained weak classifiers to form a strong classifier, then using the trained strong classifier to test the actual data, as Dai teaches in the abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto, as modified by Koo, to include aggregating weak algorithms into a strong algorithm as taught and suggested by Dai for the purpose of improving the accuracy of intrusion attack detection in the wireless sensor network, reducing the cost of marked samples, reducing intrusion detection and training time, and enhancing the reliability of the intrusion detection system (Dai, abstract). Sakamoto, as modified by Koo and Dai, does not explicitly teach generating a gold standard inversion model using the superset; generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm; and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold.
Eloul teaches, in a similar system, generating a gold standard inversion model using the superset (Eloul teaches that the model parameters updated by each client actually contain information on the data, and that the model can be inverted back to the data itself via a relatively simple numerical procedure, as Eloul teaches in par. 33 and 40); generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm (Eloul teaches that a policy service may use statistical correlation, direct inversion, etc., to compare the input data to the output weights or their gradients; in one embodiment, the comparison may use a simple correlation threshold, and with inversion tests of the parameters this can be used to recover information about tested data using any metric related to measuring information recovery of the inversion results from the parameters, as Eloul teaches in par. 9 and 48); and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold (Eloul teaches that policy service 124 may further inject noise or other data into the output parameters to further obscure the original local data, and that if the federated machine learning model does not pass testing (e.g., if the recovery rate of the inversion of parameters is found to be above a certain threshold), in step 220 the local computer program may reject the federated machine learning model, as Eloul teaches in par. 41 and 46).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto, as modified by Koo and Dai, to include injecting noise into the data set as taught and suggested by Eloul for the purpose of determining the rate of successful attacks on a given model, which will help to develop policy for federated learning in order to validate or tune models against inversion of client data (Eloul, par. 7).
For claim 11, Sakamoto teaches a computer program product stored on a non-transitory computer readable medium which, when executed, causes a computer system to perform (abstract, lines 1-2) the steps of: receiving an algorithm and a plurality of data sets within a secure computing node (Sakamoto teaches one or more computer processors generating a plurality of hyperparameter sets, wherein each hyperparameter set in the plurality of hyperparameter sets contains one or more hyperparameters varied to increase over-training in one or more models, as Sakamoto teaches in par. 05, lines 2-5); aggregating the plurality of data sets into a superset (Sakamoto teaches aggregating multiple predictions from the ensemble of weak models, which generate a plurality of hyperparameter sets, wherein each hyperparameter set in the plurality of hyperparameter sets contains one or more hyperparameters varied to increase over-training in one or more models, as Sakamoto teaches in par. 20, lines 1-5); subdividing the superset into a first plurality of subsets (Sakamoto teaches creating a plurality of weak models utilizing a created bootstrap dataset in a plurality of created bootstrap datasets, a corresponding extracted explanatory variable set, and a corresponding hyperparameter set in the generated plurality of hyperparameter sets, wherein each weak model in a created plurality of weak models shares at least the created bootstrap dataset, as Sakamoto teaches in par. 20, lines 5-12); training a first plurality of weak algorithms on each of a corresponding of the first plurality of subsets (Sakamoto teaches that program 150 created and trained 375 weak SVMs with 3 (i.e., M) bootstrap datasets (e.g., shuffled every trial), where each bootstrap dataset contains 5 (i.e., N) extracted explanatory variable sets and each explanatory variable set is associated with 25 (i.e., K) hyperparameter sets, as Sakamoto teaches in par. 29, lines 5-9).
Sakamoto fails to teach zero-trust computing; aggregating the first plurality of weak algorithms into a first strong algorithm; generating a gold standard inversion model using the superset; dividing the superset into a sensitive subset of data and a non-sensitive subset of data; generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm; and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold.
Koo teaches, in a similar system, zero-trust computing (par. 75) and dividing the superset into a sensitive subset of data and a non-sensitive subset of data (Koo teaches dividing data into different groups, one containing sensitive data and another containing non-sensitive data, as Koo teaches in par. 100). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto to include analysis in a zero-trust computing environment and dividing data, as taught and suggested by Koo, for the purpose of obtaining a sensitivity score for the asset based on the input feature and the type of the asset; obtaining, based on the plurality of activities, a malicious score and a data loss score for the asset; determining a user level of a user; and initiating implementation of a first DLP policy for the user based on the user level, the malicious score, the data loss score, and the sensitivity score (Koo, abstract). Sakamoto, as modified by Koo, does not explicitly teach aggregating the first plurality of weak algorithms into a strong algorithm; generating a gold standard inversion model using the superset; generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm; and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold.
Furthermore, Dai teaches, in a similar system, aggregating the first plurality of weak algorithms into a strong algorithm (Dai teaches using the Adaboost integrated learning algorithm to combine the trained weak classifiers to form a strong classifier, then using the trained strong classifier to test the actual data, as Dai teaches in the abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto, as modified by Koo, to include aggregating weak algorithms into a strong algorithm as taught and suggested by Dai for the purpose of improving the accuracy of intrusion attack detection in the wireless sensor network, reducing the cost of marked samples, reducing intrusion detection and training time, and enhancing the reliability of the intrusion detection system (Dai, abstract). Sakamoto, as modified by Koo and Dai, does not explicitly teach generating a gold standard inversion model using the superset; generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm; and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold.
Eloul teaches, in a similar system, generating a gold standard inversion model using the superset (Eloul teaches that the model parameters updated by each client actually contain information on the data, and that the model can be inverted back to the data itself via a relatively simple numerical procedure, as Eloul teaches in par. 33 and 40); generating an actual risk in terms of inputs needed to generate an inference from the sensitive subset of data within a given time threshold using the gold standard inversion model on the first strong algorithm (Eloul teaches that a policy service may use statistical correlation, direct inversion, etc., to compare the input data to the output weights or their gradients; in one embodiment, the comparison may use a simple correlation threshold, and with inversion tests of the parameters this can be used to recover information about tested data using any metric related to measuring information recovery of the inversion results from the parameters, as Eloul teaches in par. 9 and 48); and improving the exfiltration performance of the first strong algorithm by adding noise to the superset when the actual risk is above a risk threshold (Eloul teaches that policy service 124 may further inject noise or other data into the output parameters to further obscure the original local data, and that if the federated machine learning model does not pass testing (e.g., if the recovery rate of the inversion of parameters is found to be above a certain threshold), in step 220 the local computer program may reject the federated machine learning model, as Eloul teaches in par. 41 and 46).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto, as modified by Koo and Dai, to include injecting noise into the data set as taught and suggested by Eloul for the purpose of determining the rate of successful attacks on a given model, which will help develop policy for federated learning in order to validate or tune models against inversion of client data (Eloul, par. 7).
Claims 3-5, 8-10, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Sakamoto et al (2021/0303937) in view of Dai et al (CN 108093406), Koo et al (2024/0211599), and Eloul et al (2023/0229786) as applied to the claims above, and further in view of Langworthy et al (2024/0184885).
For claims 3, 8, and 13, Sakamoto, as modified by Koo, Dai, and Eloul, teaches all the limitations as previously set forth except for training an inversion model on the strong algorithm using the superset to tune the inversion model; characterizing the performance of the inversion model; and comparing the inversion model performance against a threshold.
Langworthy teaches a similar system, training an inversion model on the strong algorithm using the superset to tune the inversion model (Langworthy teaches that, depending on the set of training data used to train the model or the features used as inputs to the model, the candidate classification models may each vary in their respective level of susceptibility to model inversion attacks, and an evaluation may be performed in respect of each candidate classification model in order to determine each candidate model's level of susceptibility; par. 7, lines 4-7 and par. 36, lines 11-15); characterizing the performance of the inversion model (Langworthy teaches that multiple evaluations of the model are performed using a number of different sample sizes, and the measure provided by operation 250 may be a data series indicating the susceptibility of the model to attack for each of a plurality of possible sample sizes; par. 32, lines 6-9); and comparing the inversion model performance against a threshold (Langworthy teaches predetermined parameters defining an acceptable level of susceptibility of the model to inversion attacks at each of one or more sample sizes, and may assess whether those predetermined thresholds are met by the model; that is to say, the model may compare the property that was used to evaluate the replica model at operation 240 for each of the indicated sample sizes and determine whether the value of the property exceeds the predetermined acceptable level of susceptibility for that sample size; par. 33, lines 2-7).
It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto, as modified by Koo, Dai, and Eloul, to include training an inversion model as taught and suggested by Langworthy for the purpose of determining a measure of susceptibility of a machine-learned classification model to model inversion attacks, and a measure of the level of susceptibility of the classification model to model inversion attacks based on the number of samples in the sample data used to train the replica of the classification model and the property of the replica of the classification model (Langworthy, abstract).
For claims 4, 9, and 14, Sakamoto, as modified by Koo, Dai, and Eloul, teaches all the limitations as previously set forth and further teaches repeatedly subdividing the superset into a plurality of subsets (Sakamoto, par. 24); generating a weak algorithm for each subset to form a plurality of weak algorithms (Sakamoto, par. 20); and repeatedly aggregating a subset of the plurality of weak algorithms (Sakamoto, par. 20).
Sakamoto, as modified by Koo, Dai, and Eloul, fails to teach aggregating the plurality of weak algorithms to generate a plurality of strong algorithms; generating an inversion model for each of the plurality of strong algorithms; calculating the performance for each of the plurality of inversion models; and selecting the strong algorithm corresponding to the inversion model with the lowest performance.
Dai further teaches aggregating the plurality of weak algorithms into a strong algorithm (Dai teaches using the Adaboost integrated learning algorithm to combine the trained weak classifiers to form a strong classifier, and then using the trained strong classifier to test the actual data; abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto, as modified by Koo and Eloul, to include analysis in a zero-trust computing environment as taught and suggested by Dai for the purpose of improving the accuracy of intrusion attack detection in the wireless sensor network, reducing the cost of labeled samples, reducing intrusion detection and training time, and enhancing the reliability of the intrusion detection system (Dai, abstract).
Langworthy further teaches a similar system, generating an inversion model for each of the plurality of strong algorithms; calculating the performance for each of the plurality of inversion models; and selecting the strong algorithm corresponding to the inversion model with the lowest performance (Langworthy teaches predetermined parameters defining an acceptable level of susceptibility of the model to inversion attacks at each of one or more sample sizes, and may assess whether those predetermined thresholds are met by the model; that is to say, the model may compare the property that was used to evaluate the replica model at operation 240 for each of the indicated sample sizes and determine whether the value of the property exceeds the predetermined acceptable level of susceptibility for that sample size; par. 33, lines 2-7). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto, as modified by Koo, Dai, and Eloul, to include training an inversion model as taught and suggested by Langworthy for the purpose of determining a measure of susceptibility of a machine-learned classification model to model inversion attacks, and a measure of the level of susceptibility of the classification model to model inversion attacks based on the number of samples in the sample data used to train the replica of the classification model and the property of the replica of the classification model (Langworthy, abstract).
For claims 5, 10, and 15, Sakamoto, as modified by Koo, Dai, and Eloul, teaches all the limitations as previously set forth and further teaches outputting weights for the selected algorithm (Sakamoto, par. 30, lines 5-10).
Sakamoto, as modified by Koo, Dai, and Eloul, fails to teach a strong algorithm.
Dai further teaches a strong algorithm (Dai teaches using the Adaboost integrated learning algorithm to combine the trained weak classifiers to form a strong classifier, and then using the trained strong classifier to test the actual data; abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Sakamoto to include analysis in a zero-trust computing environment as taught and suggested by Dai for the purpose of improving the accuracy of intrusion attack detection in the wireless sensor network, reducing the cost of labeled samples, reducing intrusion detection and training time, and enhancing the reliability of the intrusion detection system (Dai, abstract).
Response to Amendments/Arguments
Applicant’s arguments with respect to claim(s) 1-15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's argument that the current amendments overcome the 101 rejections is acknowledged; however, the examiner respectfully disagrees because the claims still do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The newly amended language is still directed to steps of generating a standard, classifying, and improving, i.e., steps also categorized as mental steps and/or human activity. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a computerized system, a data store, and a server are recited at a high level of generality and are generic computer components, such that they amount to no more than mere instructions to apply the exception using a generic computer. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, the claimed elements, either individually or as an ordered combination, do not add significantly more to the abstract idea.
Additionally, the 35 U.S.C. 101 rejection of claim 6 for being directed to software per se has been maintained, as there is no amendment and/or argument to overcome the rejection. The examiner suggests amending independent claim 6 to recite at least one hardware structural element. However, the 35 U.S.C. 101 rejection of claim 11 is withdrawn due to the claim amendment.
Applicant's arguments regarding the newly amended limitations in claims 1, 6, and 11 have been considered but are moot, because the examiner has applied new art, Koo et al (2024/0211599) and Eloul et al (2023/0229786), that covers the newly claimed limitations.
Regarding the arguments directed to the dependent claims, said arguments are moot because the applied references are not considered to have the alleged differences, and therefore are considered to properly show that for which they were cited.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AYUB A MAYE whose telephone number is (571)270-5037. The examiner can normally be reached Monday-Friday 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SHEWAYE GELAGAY can be reached at 571-272-4219. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AYUB A MAYE/Examiner, Art Unit 2436 /SHEWAYE GELAGAY/Supervisory Patent Examiner, Art Unit 2436