DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over Fradkin et al. (US 2023/0325678) in view of Tamir et al. (US 2018/0211039).

Regarding claim 1, Fradkin teaches:

1.
A deep neural network system, comprising: a neural network operation unit configured to perform a convolution operation on input features and generate a classification result;

[0007] In an aspect, the system further includes a data protector module comprising interpretable neural network models trained to learn prototypes for explaining class prediction, form class predictions of initial training data relying on geometry of latent space, wherein the class predictions determine how a test input is similar to prototypical parts of inputs from each class, and detect potential data poisoning or backdoor triggers in the initial training data on a condition that prototypical parts from unrelated classes are activated.

[0028] In FIG. 5, the embodiments shown in FIGS. 3 and 4 are combined. In an embodiment, during the training stage of operation, dynamic ensemble 125 receives clean training data 155 as provided from data protector 121. Once all of the individual ML models are trained, the deployed makeup for the dynamic ensemble 125 is determined by the alertness score 343 and/or user provided system constraints 305. The configured dynamic ensemble 125 operates in an inference stage to evaluate input data according to the learning objectives established during training (e.g., an ML model trained to classify input images during the training stage will then classify input images fed to the ML model during the inference stage). In order to defend against the aforementioned various attack threats, such as cyber physical attack 331 at sensor suite 311 inputs, digital attack 332, data poison attack 333, and backdoor attack 334, a multi-faceted unified defense system of data protector 121 and attack detector 123 is arranged to monitor all data during both the training stage and the inference stage of dynamic ensemble 125 to detect any such attacks.
Dynamic ensemble 125 is capable of dynamically adapting its size and composition based on a control function that reacts to the alertness score 343 received from the attack detector 123. This enables good performance even under resource constraints while addressing robustness versus costs trade-offs. The higher the alertness score, the higher the need for a robust result. In normal operation, however, the alertness is expected to be low, thus ensuring good on-average performance even under limited computational resources. Dynamic ensemble 125 also enables leverage of contextual information (multiple sensors and modalities, domain knowledge, spatio-temporal constraints) and user needs 305 (e.g., learning objectives, domain constraints, class-specific misclassification costs, or limits on computation resources) to make explicit robustness-resources trade-offs. Behaviors of interpretable models can be verified by an expert user via user interface 130, allowing detection of problems with training data and/or features, troubleshooting of the model at training time or enabling verification at inference time for low-velocity high-stakes applications. In general, data augmentation 342 expands the training data set with examples obtained under different transformations. Perturbations and robust optimization can be used to defend against adversarial attacks. An approach using randomized smoothing can be used to increase robustness of ML models with respect to L2 attacks. Many, though not all, existing attacks are not stable with respect to scale and orientation or rely on quirks in the models that are affected by irrelevant parts of the input. Thus, another potential defense is to combine predictions of a ML model made across multiple transformations of the input such as rescaling, rotation, resampling, noise, background removal and by nonlinear embeddings of inputs. Fradkin, 0007, 0028 and 0048, emphasis added.
a memory unit configured to store a trained neural network model in a first storage and configured to provide a first parameter to the neural network operation unit based on the trained neural network model;

[0054] Dynamic ensemble of robust models—Control of the dynamic ensemble 125 involves dynamically adjusting the size and type of ensemble (e.g., the number of individual ML models, and the combination of various types of ML models to be deployed during inference stage of operation) based on access to correlated signals such as the alertness score from attack detector 123 as well as other available contextual and user specified parameters. For example, user specified parameters 305 may include learning objectives, and domain constraints (e.g., limits on computational resources). The inherent trade-off is between maintaining the accuracy of prediction (absent adversary) and robustness (stability in the presence of adversarial perturbation). Additional trade-offs exist with respect to computational limitations such as available computing resources or limits on time to make a prediction. A system objective is to adjust the ensemble, both in terms of its size and type, to select a desirable point along the operating curve. The loss of accuracy due to the ensemble relative to the benign setting can be directly evaluated empirically by forming the ensemble. Robustness guarantees associated with a specific ensemble can also be calculated. As a result, a dynamic control of the ensemble maintains a desirable operating point. Specifically, dynamic control either maximizes accuracy for a given choice of robustness or maximizes robustness subject to an accuracy (loss) constraint.

[0065] The computer system 610 may perform a portion or all of the processing steps of embodiments of the invention in response to the processors 620 executing one or more sequences of one or more instructions contained in a memory, such as the system memory 630.
Such instructions may be read into the system memory 630 from another computer readable medium of storage 640, such as the magnetic hard disk 641 or the removable media drive 642. The magnetic hard disk 641 and/or removable media drive 642 may contain one or more data stores and data files used by embodiments of the present disclosure. The data store 640 may include, but is not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed data stores in which data is stored on more than one node of a computer network, peer-to-peer network data stores, or the like. Data store contents and data files may be encrypted to improve security. The processors 620 may also be employed in a multi-processing arrangement to execute the one or more sequences of instructions contained in system memory 630. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Thus, embodiments are not limited to any specific combination of hardware circuitry and software. Fradkin, 0054, 0065, emphasis added.

an attack detection circuit configured to generate a trigger signal periodically and/or when a hostile attack on the memory unit is detected;

[0037] Data Protector Interpretable Models—Data protector 121 includes interpretable neural network models used for processing the initial training data 151 to detect data poisoning or backdoor triggers. Case-based reasoning techniques for interpretable neural network models rely on the geometry of the latent space to make predictions, which naturally encourages neighboring instances to be conceptually similar. These reasoning techniques also consider only the most important parts of inputs and provide information about how each of those parts is similar to other concepts from the class.
In particular, the neural network determines how a test input is similar to prototypical parts of inputs from each class and uses this information to form a class prediction. The interpretable neural networks tend to lose little to no classification accuracy compared against black box counterparts but are much harder to train. Fradkin, 0037-0039, emphasis added.

and a protection logic unit configured to detect whether or not the first parameter provided to the neural network operation unit from the memory unit has been tampered with in response to the trigger signal,

[0037] Data Protector Interpretable Models—Data protector 121 includes interpretable neural network models used for processing the initial training data 151 to detect data poisoning or backdoor triggers. Case-based reasoning techniques for interpretable neural network models rely on the geometry of the latent space to make predictions, which naturally encourages neighboring instances to be conceptually similar. These reasoning techniques also consider only the most important parts of inputs and provide information about how each of those parts is similar to other concepts from the class. In particular, the neural network determines how a test input is similar to prototypical parts of inputs from each class and uses this information to form a class prediction. The interpretable neural networks tend to lose little to no classification accuracy compared against black box counterparts but are much harder to train. Fradkin, 0037-0039, 0061-0065, emphasis added.

However, Fradkin fails to explicitly teach, but Tamir teaches:

and configured to provide a second parameter to the neural network operation unit according to the detection result, the second parameter backed up in a second storage.

[0010] A system for protecting a database against a ransomware attack includes a processor and memory.
A ransomware detection and remediation application, stored in the memory and executed by the processor, is configured to: retrieve and store database backup data associated with a database to a storage device; monitor the database to detect data changes to the database resulting from a ransomware attack; and in response to the ransomware attack, restore data in the database to a point prior to the ransomware attack based upon the backup data in the storage device.

[0011] In other features, the ransomware detection and remediation application is further configured to receive database events from the database, and apply a plurality of rules to the database events to detect the ransomware attack. The ransomware detection and remediation application is further configured to detect the ransomware attack based on the database events using at least one of deep learning analysis detection and machine learning. Tamir, 0010-0011, 0039, emphasis added.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Tamir with the system of Fradkin in order to provide a second parameter to the neural network operation unit according to the detection result, the second parameter backed up in a second storage; as such, a ransomware remediator communicates with the ransomware detector and the database backup handler and is configured to restore data in the database to a point prior to the ransomware attack based upon the backup data in the storage device (Tamir, 0005).

Note: The motivation that was applied to claim 1 above applies equally as well to claims 2-8 as presented below.

Regarding claim 2, Fradkin and Tamir teach:

2. The system of claim 1, furthermore, Fradkin teaches: wherein the memory unit comprises a cache memory configured to transmit the trained neural network model stored in the first storage to the neural network operation unit.
Fradkin, 0022.

Regarding claim 3, Fradkin and Tamir teach:

3. The system of claim 2, furthermore, Fradkin teaches: wherein the attack detection circuit is configured to monitor at least one of a row-hammering attack on the cache memory, an error occurrence in the first storage, and/or an irregularity in an access pattern to the cache memory or the first storage. Fradkin, 0022.

Regarding claim 4, Fradkin and Tamir teach:

4. The system of claim 2, furthermore, Tamir teaches: wherein the protection logic unit comprises a comparator/updater unit that is configured to perform a comparison of the first parameter and the second parameter loaded in the cache memory. Tamir, 0039-0040.

Regarding claim 5, Fradkin and Tamir teach:

5. The system of claim 4, furthermore, Tamir teaches: wherein the comparator/updater unit is configured to transmit the second parameter instead of the first parameter to the neural network operation unit when the comparator/updater unit determines from the comparison that the first parameter is inconsistent with the second parameter. Tamir, 0039-0040.

Regarding claim 6, Fradkin and Tamir teach:

6. The system of claim 1, furthermore, Tamir teaches: comprising: a backup management unit configured to back up the trained neural network model to the second storage, Tamir, 0039-0040; wherein the backup management unit comprises: an analyzer configured to analyze parameters of the trained neural network model that are sensitive or vulnerable to bit-flips or errors; Tamir, 0039-0040; and a segregation unit configured to separate vulnerable parameters or sensitive parameters from the trained neural network model according to a result of the analyzer and configured to back up the vulnerable parameters or the sensitive parameters to the second storage. Tamir, 0039-0040.

Regarding claim 7, Fradkin and Tamir teach:

7.
The system of claim 6, furthermore, Fradkin teaches: wherein the backup management unit comprises an encryption unit configured to encrypt the vulnerable parameters or the sensitive parameters. Fradkin, 0022.

Regarding claim 8, Fradkin and Tamir teach:

8. The system of claim 1, furthermore, Fradkin teaches: wherein the deep neural network system is included in an object recognition system of an autonomous vehicle. Fradkin, 0002, 0047.

Claim Rejections - 35 USC § 103

Claims 9-14 are rejected under 35 U.S.C. 103 as being unpatentable over Tamir et al. (US 2018/0211039) in view of Fradkin et al. (US 2023/0325678).

Regarding claim 9, Tamir teaches:

9. An operation method of a deep neural network system, comprising: backing up a trained neural network model stored in a first storage to a second storage;

[0010] A system for protecting a database against a ransomware attack includes a processor and memory. A ransomware detection and remediation application, stored in the memory and executed by the processor, is configured to: retrieve and store database backup data associated with a database to a storage device; monitor the database to detect data changes to the database resulting from a ransomware attack; and in response to the ransomware attack, restore data in the database to a point prior to the ransomware attack based upon the backup data in the storage device.

[0017] In other features, the method includes detecting the ransomware attack and generating the ransomware alert based on the database events using at least one of deep learning analysis detection and machine learning. The database backup data includes row changes made to the database and the method further comprises, in response to the ransomware alert, changing data in the database based upon the database backup data. Tamir, 0010, 0015-0017, emphasis added.
transferring data from the first storage to a cache memory to transfer a first parameter to a neural network operator;

[0045] In other examples shown in FIG. 4C, the ransomware detector 210 includes a rules filter 550. The rules filter 550 includes one or more rules that are used to filter database data or database event data and to identify ransomware based upon changes in frequency, timing, file types, user profiles, packet data such as source, destination, etc., file extensions, and/or information, etc. For example, the rules filter 550 may look for changes to honeypot database data and/or changes to file extensions that are indicators of a ransomware attack, although other rules may be used. Tamir, 0010, 0015-0017, 0056-0057, emphasis added.

comparing the first parameter with a second parameter that is a backed-up value of the first parameter from the second storage;

[0017] In other features, the method includes detecting the ransomware attack and generating the ransomware alert based on the database events using at least one of deep learning analysis detection and machine learning. The database backup data includes row changes made to the database and the method further comprises, in response to the ransomware alert, changing data in the database based upon the database backup data. Tamir, 0010, 0015-0017, 0056-0057, emphasis added.

and updating the neural network operator with the second parameter when the first parameter and the second parameter do not match.

[0017] In other features, the method includes detecting the ransomware attack and generating the ransomware alert based on the database events using at least one of deep learning analysis detection and machine learning. The database backup data includes row changes made to the database and the method further comprises, in response to the ransomware alert, changing data in the database based upon the database backup data. Tamir, 0010, 0015-0017, 0056-0057, emphasis added.
However, Tamir fails to explicitly teach, but Fradkin teaches, a cache memory. Fradkin, 0059.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Fradkin with the system of Tamir in order to include a cache memory; as such, data can be processed faster.

Note: The motivation that was applied to claim 9 above applies equally as well to claims 10-14 as presented below.

Regarding claim 10, Tamir and Fradkin teach:

10. The method of claim 9, furthermore, Tamir teaches: comprising: comparing the first parameter with the second parameter in response to detecting an adversarial attack against the first storage or cache memory. Tamir, 0010, 0015-0017, 0056-0057.

Regarding claim 11, Tamir and Fradkin teach:

11. The method of claim 9, furthermore, Tamir teaches: wherein the backing up of the trained neural network model to the second storage comprises: classifying the trained neural network model into a plurality of parameter groups according to vulnerability or sensitivity to bit-flip; and selecting at least one group from among the plurality of parameter groups and backing the selected at least one group up to the second storage. Tamir, 0010, 0015-0017, 0056-0057.

Regarding claim 12, Tamir and Fradkin teach:

12. The method of claim 11, furthermore, Tamir teaches: wherein the plurality of parameter groups are divided into parameters corresponding to a structure, weight, bias, and layer of the trained neural network model. Tamir, 0010, 0015-0017, 0056-0057.

Regarding claim 13, Tamir and Fradkin teach:

13. The method of claim 11, furthermore, Fradkin teaches: comprising: encrypting the selected at least one group. Fradkin, 0065.

Regarding claim 14, Tamir and Fradkin teach:

14. The method of claim 13, furthermore, Tamir teaches: wherein the comparing the first parameter with the second parameter includes decoding the second parameter.
Tamir, 0010, 0015-0017, 0056-0057.

Regarding claim 15, Fradkin teaches:

15. A deep neural network system configured to perform object recognition operations, comprising: a neural network operation unit configured to classify an input image through neural network operation; Fradkin, 0007, 0028 and 0048; a first storage configured to store a trained neural network model; Fradkin, 0007, 0028 and 0048; a cache memory configured to transfer the trained neural network model stored in the first storage to the neural network operator; Fradkin, 0007, 0028, 0048 and 0059.

However, Fradkin fails to explicitly teach, but Tamir teaches: a second storage configured to store a backed up trained neural network model; Tamir, 0010, 0015-0017, 0056-0057.

Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Tamir with the system of Fradkin in order to provide a second storage configured to store a backed up trained neural network model; as such, a ransomware remediator communicates with the ransomware detector and the database backup handler and is configured to restore data in the database to a point prior to the ransomware attack based upon the backup data in the storage device (Tamir, 0005).

Furthermore, Fradkin teaches: an attack detection circuit configured to generate a trigger signal upon detection of an adversarial attack against the first storage or the cache memory; Fradkin, 0007, 0028 and 0048.

Furthermore, Tamir teaches: and a comparator/updater configured to perform a comparison of data in the cache memory [see cache memory rejection under Fradkin] with data in the second storage in response to the trigger signal and configured to update the neural network operator with the backed up trained neural network model according to a result of the comparison.
Tamir, 0010, 0015-0017, 0056-0057.

Note: The motivation that was applied to claim 15 above applies equally as well to claims 16-20 as presented below.

Regarding claim 16, Fradkin and Tamir teach:

16. The system of claim 15, furthermore, Fradkin teaches: wherein the attack detection circuit is configured to generate the trigger signal at a predetermined period in addition to upon detection of the adversarial attack against the first storage or the cache memory. Fradkin, 0007, 0028 and 0048.

Regarding claim 17, Fradkin and Tamir teach:

17. The system of claim 15, furthermore, Tamir teaches: wherein the deep neural network system comprises a backup management unit configured to back up the trained neural network model to the second storage. Tamir, 0010, 0015-0017, 0056-0057.

Regarding claim 18, Fradkin and Tamir teach:

18. The system of claim 17, furthermore, Tamir teaches: wherein the backup management unit comprises: a sensitivity analyzer configured to analyze sensitive parameters of the trained neural network model according to bit-flip or error; and a segregation unit configured to select for backing up sensitive parameters to the second storage according to a result of the sensitivity analyzer. Tamir, 0010, 0015-0017, 0056-0057.

Regarding claim 19, Fradkin and Tamir teach:

19. The system of claim 18, furthermore, Tamir teaches: wherein the backup management unit includes an encryptor configured to encrypt the sensitive parameter selected in the segregation unit and provide the encrypted sensitive parameter to the second storage. Tamir, 0010, 0015-0017, 0056-0057.

Regarding claim 20, Fradkin and Tamir teach:

20. The system of claim 19, furthermore, Fradkin teaches: wherein the neural network operation unit includes at least one convolution operation core.
Fradkin, 0048.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE whose telephone number is (571) 270-1117. The examiner can normally be reached Monday-Friday 8:00-4:30 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL T TEKLE/
Primary Examiner, Art Unit 2481