Prosecution Insights
Last updated: April 19, 2026
Application No. 18/354,784

METHOD AND SYSTEM FOR ADVERSARIAL MALWARE THREAT PREVENTION AND ADVERSARIAL SAMPLE GENERATION

Current status: Non-Final OA (§103)
Filed: Jul 19, 2023
Examiner: KIM, EUI H
Art Unit: 2453
Tech Center: 2400 — Computer Networks
Assignee: UNIVERSITY OF GUELPH
OA Round: 3 (Non-Final)

Predictions:
Grant Probability: 49% (Moderate)
OA Rounds: 3-4
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 49% — grants 49% of resolved cases (76 granted / 156 resolved; -9.3% vs TC avg)
Interview Lift: +52.9% (strong) — allowance with vs. without an interview, among resolved cases with an interview
Typical timeline: 3y 4m avg prosecution; 28 currently pending
Career history: 184 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 65.9% (+25.9% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)

Tech Center average is an estimate • Based on career data from 156 resolved cases

Office Action

§103
DETAILED ACTION

This office action is in response to the RCE filed on 10/15/2025. Claim 19 remains cancelled. Claims 1, 5, 8, 11, and 14 are amended. Claims 1-18, 20 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/15/2025 has been entered.

Response to Arguments

Applicant's arguments with respect to the claims regarding 35 USC 103 rejections filed on 10/15/2025 on pg. 6-12 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Applicant further argues in essence:

[a] “Accordingly, Applicant respectfully submits that independent claims 1 and 8 are patentably distinct over any combination of Abusnaina and Yuste.” Pg. 9 of Remarks.

In response to [a], claims 1 and 8 are no longer reliant on Yuste and therefore the arguments on pg. 6-9 directed towards claims 1 and 8 no longer apply.

[b] “Applicant further submits that the combination of Abusnaina and Yuste also does not teach or suggest the subject matter of independent claim 14. In particular, Abusnaina and Yuste do not teach at least the claimed subject matter of training the model using generated payloads based on benign samples, and inserting the generated payloads into at least one of the code caves of the executable malicious sample and changing the labelling of the executable sample to benign. Abusnaina and Yuste do not teach this subject matter for at least the reasons described above.” Pg. 9 of Remarks.

In response to [b], while new references are used for the concept of “generated payloads based on benign samples” and “changing the labelling of the executable sample to benign”, Yuste is still relied upon for the concept of inserting generated payloads back into the executable malicious sample. Firstly, Abusnaina is not relied upon for the rejection of claim 14. Secondly, Claim 14 does not perform the step of training, and only generates the executable samples that would be used for training. Regarding the steps of “using generated payloads based on benign samples, and inserting the generated payloads into at least one of the code caves of the executable malicious sample and changing the labelling of the executable sample to benign”, while Yuste is still relied upon for the inserting of generated payloads, the idea of using generated payloads based on benign samples and changing the labeling of the samples is rejected using different references, Smith and Chat respectively, as reflected in the rejection below. Yuste discloses inserting the generated payloads into at least one of the code caves of the executable malicious sample (Yuste: pg. 6 left column “When a solution needs to be evaluated, it is necessary to write the bytes which are currently in the solution, back into their corresponding unused blocks of the PE file.
Then, the new PE file is provided as an input to the DL model to evaluate the solution (step 4). The quality of the solution is measured by the output of the DL model. Note that this is a value ranging from 0 to 1, where values close to 1 indicate malicious, while values close to 0 indicate benign.” The solution, i.e. the payload, is inserted back into the malicious PE file, i.e. the sample.). As seen above, Yuste pg. 6 left column shows the insertion of the generated solution back into the PE file to evaluate if the malicious sample is still detected as malicious or benign. Therefore, while the idea of using generated payloads based on benign samples and changing the labeling of the samples is rejected using different references, Smith and Chat respectively, Yuste is still relied upon for the concept of inserting the payload back into the malicious PE file.

[c] “Applicant submits that Radnisky does not teach or suggest at least the subject matter of claims 1, 8, and 14 that Abusnaina and Yuste lack. Radnisky discloses a computer-implemented system for real-time classification of unknown entities, particularly files for malware detection, using one or more engines (see abstract of Radnisky). At paragraph [0096], Radnisky describes inspecting an unknown file's properties and selecting the most suitable anti-malware classification engine or subset of engines. The approach described in Radnisky of selecting a most appropriate engine and using such engine for classification does not describe an adversarial training approach for such engines, and certainly does not teach the currently claimed subject matter of inserting generated payloads into code caves and changing the labelling of such samples to benign for iterative training of the model.” Pg. 9-10 of Remarks.

In response to [c], examiner respectfully disagrees. While Radinsky is not relied upon for “the currently claimed subject matter of inserting generated payloads into code caves and changing the labelling of such samples to benign for iterative training of the model”, it is still relied upon to teach the concept of “determining a binary classifier” as claimed in claims 1 and 8, i.e. the selection/determination of which binary classifier should be used. Radinsky discloses determining a binary classifier (Radinsky: para.0096 “FIG. 7 illustrates a computer-implemented engine selection method in accordance with the disclosed architecture. At 700, file properties of an unknown file are inspected. At 702, one or more candidate anti-malware classification engines are selected from a set of classification engines based on the file properties. At 704, the unknown file is processed using the one or more candidate anti-malware classification engines to output classification information for each of the candidate anti-malware classification engines. At 706, the unknown file is classified based on a single output or multiple outputs of the one or more of the candidate anti-malware classification engines.” Based on properties of the file to be inspected for malware, one or more classifiers are selected for classifying the file.). Therefore the examiner maintains the rejection in view of Radinsky for the step of determining a binary classifier in the rejection of claims 1 and 8.

[d] The Cazan, Gray, Kong, Chen, and Barker references fail to teach or suggest the subject matter of claims 1, 8, and 14 that Abusnaina and Yuste lack. Pg. 10-11 of Remarks.
In response to [d], the Cazan, Gray, Kong, Chen, and Barker references are not relied upon for the rejection of claims 1, 8, or 14; therefore this argument does not apply.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function. Such claim limitation(s) is/are:

“8. A system for adversarial malware threat prevention, the system comprising one or more processors and a data storage, the data storage comprising instructions for the one or more processors to execute: an input module to receive an input executable sample; an adversarial prevention module to extract features of the input executable sample and apply feature mapping to determine one or more components of the features, and to determine a binary classifier representing whether the executable sample is adversarial using one or more machine learning models, …and an output module to output the input executable sample unless the binary classifier indicates adversarial.” In claim 8.

“wherein the input module receives a further input executable sample and, where the input executable sample was determined to be adversarial, the adversarial prevention module compares one or more of the adversarial characteristics of the input executable sample to those of the further input executable sample to determine if the further input executable sample is adversarial.” In claim 11.

While the above limitations contain language that aligns with 112(f) invocation, claim 8 further recites sufficient structure for each of the modules, “the system comprising one or more processors and a data storage, the data storage comprising instructions for the one or more processors to execute:… an input module to receive… an adversarial prevention module to extract… to determine… an output module to output”, which states that each of the modules is executed by the processor of the data storage unit. Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof. If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-4, 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Abusnaina et al. (hereinafter Abusnaina, US 2023/0289441 A1) in view of Smith et al. (hereinafter Smith, US 12,019,746 B1) in view of Chatterjee et al. (hereinafter Chat, US 2022/0179840 A1) in view of Radinsky et al. (hereinafter Radinsky, US 2012/0084859 A1).

Regarding Claim 1, Abusnaina discloses A computer-executed method for adversarial malware threat prevention, the method comprising (Abusnaina: para.0008): receiving an input executable sample (Abusnaina: para.0026 “In some embodiments, feature extraction unit 220 receives the byte reset software 33 from byte resetting unit 214” the software 33 is obtained); extracting features of the input executable sample and applying feature mapping to determine one or more components of the features (Abusnaina: para.0032 “In some embodiments, after component-based feature extractor 223 generates the component-based features 35, graph construction unit 232 of graphical encoding unit 230 receives the component-based features 35 and commences the process of graphically encoding the component-based features 35.” para.0027 “In some embodiments, in order to extract the component-based features from the byte reset software 33, section extractor 224 of component-based feature extractor 223 receives the byte reset software 33 and commences the process of partitioning the byte reset software 33 into distinct sections or components. In some embodiments, section extractor 224 is configured to partition byte reset software 33 into the distinct plurality of sections such that each section is independent from another section and may be represented using a plurality of component-based feature representations (described further below with reference to feature extractor 225). … An example of distinct sections (e.g., 81-82) partitioned by section extractor 224 is illustrated in FIG. 2B. In some embodiments, each section is provided by section extractor 224 to feature extractor 225 for component-based feature extraction.” Para.0032 “In some embodiments, the graph 36 generated by the graph construction unit 232 is a graph that includes a plurality of isolated subgraphs (e.g., subgraph 61, subgraph 62, subgraph 63, subgraph 64, subgraph 65 of FIG. 2B), with each subgraph 61-65 representing a component in the software 30 and each feature representation of the component (e.g., component-based feature) is a node in the subgraph.” Fig. 2B 61-65.
In Fig. 2A the feature extraction unit 220 and the graphical encoding unit together extract features from the software and map the features to generate a set of components, such as in Fig. 2B. Examiner notes that the features and components of Abusnaina are components and features respectively in view of the claim language.); a binary classifier representing whether the input executable sample is adversarial using one or more machine learning models (Abusnaina: para.0038 “In some embodiments, malware detection unit 260 is software configured to generate a malware indicator 42 that indicates whether the received vector 40 (and thus the corresponding portion of software 30) is malicious or not malicious. In some embodiments, malware detection unit 260 may utilize a tree learning structure, such as, for example, a Light Gradient Boosting Machine (LightGBM) tree learning structure, to detect whether the received vector 40 (that includes the vector and graphical representation of the component-based features 35 and monotonic features 39) is malicious or not malicious.” Using a machine learning model, the malware detection unit determines whether or not the sample is malware, i.e. adversarial.), the one or more machine learning models taking the one or more components as input (Abusnaina: para.0033 “In some embodiments, graph encoder 234 uses the graph attention network to encode the nodes representing different representations of the components and translate the graph into a vector representation.” Para.0037 “In some embodiments, vectorization unit 290 is configured to combine the monotonic features 39 with the graphical encoding output 38 into a vector 40. In some embodiments, vectorization unit 290 provides the vector 40 to malware detection unit 260.” Para.0038 “In some embodiments, malware detection unit 260 may utilize a tree learning structure, such as, for example, a Light Gradient Boosting Machine (LightGBM) tree learning structure, to detect whether the received vector 40 (that includes the vector and graphical representation of the component-based features 35 and monotonic features 39) is malicious or not malicious.” The features of each component are converted to vector form and fed to the malware detection unit, which uses the vectorized components to detect if that portion of the software is malicious.), where the binary classifier indicates adversarial, dropping the input executable sample, otherwise outputting the input executable sample (Abusnaina: para.0038-0039 “In some embodiments, malware detection unit 260 is software configured to generate a malware indicator 42 that indicates whether the received vector 40 (and thus the corresponding portion of software 30) is malicious or not malicious….In some embodiments, malware removal unit 270 receives the malware indicator 42 and, when the malware indicator 42 indicates that the software is malicious, removes or nullifies the malicious software. In some embodiments, when the malware indicator 42 indicates that the software is not malicious, the non-malicious software is not removed or nullified.” The malware detection unit 260 and the malware removal unit 270 together make up the binary classifier, and if malware is detected, i.e. adversarial, the software is removed; otherwise the software is not removed, i.e. outputted.).
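To make the claimed flow mapped to Abusnaina above easier to follow (extract features, map them to components, run a binary classifier, drop the sample if it is adversarial), the sketch below shows one minimal way such a pipeline could look. It is illustrative only and is not Abusnaina's implementation: the byte-histogram features, the fixed section count, and the use of scikit-learn's GradientBoostingClassifier as a stand-in for a LightGBM-style tree learner are all assumptions, and the classifier is assumed to be already fitted.

```python
# Minimal sketch of the claimed flow, under the assumptions stated above.
from collections import Counter
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for a LightGBM-style tree learner

def extract_components(sample: bytes, n_sections: int = 4) -> np.ndarray:
    """Split the sample into sections ("components") and build a per-section byte histogram."""
    size = max(1, len(sample) // n_sections)
    sections = [sample[i:i + size] for i in range(0, len(sample), size)][:n_sections]
    feats = []
    for sec in sections:
        hist = Counter(sec)
        feats.extend(hist.get(b, 0) / max(1, len(sec)) for b in range(256))
    feats.extend([0.0] * (n_sections * 256 - len(feats)))  # pad when the sample yields fewer sections
    return np.array(feats)

def screen(sample: bytes, clf: GradientBoostingClassifier):
    """Output the sample unless the (already-fitted) binary classifier flags it as adversarial."""
    verdict = clf.predict(extract_components(sample).reshape(1, -1))[0]
    return None if verdict == 1 else sample  # label 1 = adversarial/malicious, 0 = benign
```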
However Abusaina does not explicitly disclose determining a binary classifier; the one or more machine learning models are iteratively trained using, at least, generated adversarial training samples, wherein, during each iteration: the adversarial training sample for such iteration is generated by: determining a code cave in a malicious sample, generating a payload based on a benign sample, inserting the generated payload at the determined code cave, and changing a labeling of the adversarial training sample to benign, and training the model using the adversarial training sample. Smith discloses the one or more machine learning models are iteratively trained using, at least, generated adversarial training samples (Smith: Col. 3 lines 12-37 “the systems apply a random sample contraction that iteratively and automatically reduces the search set for the alterations that yield the sought after operational malware and/or their variants that are prone to a misclassification. The optimization calculations continue recursively until operational malware prone to misclassifications and any associated synthesized operational malware, variants, and/or its family members are identified or substantially identified in the sample. The calculations are thereafter repeated automatically, continuously and/or periodically so that the entire synthesized malware sample set is screened and an update of the malware and/or its attributes is written to a memory 804 to ensure that the machine learning algorithms 820 can train on these forms and/or precisely track and/or identify the operational malware instances, variants, its family members, and/or etc. that are likely to be misclassified when traditional malware detection systems are exposed to them” the machine learning models are periodically trained on newly generated synthesized adversarial training samples that are generated in an iterative manner.), wherein, during each iteration: the adversarial training sample for such iteration is generated by: determining a code cave in a malicious sample (Smith: col. 10 lines 18-30 “FIG. 6 is a block diagram restructuring code by code caving. This alteration engine's technique of injecting code avoids malware detection by dynamically introducing unused blocks of code 604 into the original malware executables 602. The placement of code within blocks preserves the malware's functionality while exploiting traditional malware detector's vulnerability to misclassify the modified code as benign. While an exemplary code placement shown in italics and underlined is inserted in an intermediate portion or mid-section of original malware code, in other applications optimization techniques place code and/or data in other positions of the original malware code.” Code cave is identified in Fig. 6, within malicious sections of code to inject benign samples, such that the malware is still functional, but would be misclassified as benign.), generating a payload based on a benign sample (Smith: Fig. 6 col. 9 lines 4-7 “In FIG. 3, the malicious code is obfuscated by appending a large benign section of code and/or random and/or predetermined data to the malicious code.” col. 10 lines 18-30 “FIG. 
6 is a block diagram restructuring code by code caving….This alteration engine's technique of injecting code avoids malware detection by dynamically introducing unused blocks of code 604 into the original malware executables 602.” A payload is generated that is known to be benign, or predetermined benign data, that is to be appended or injected into the code cave.), inserting the generated payload at the determined code cave (Smith: Fig. 6 col. 9 lines 4-7 “In FIG. 3, the malicious code is obfuscated by appending a large benign section of code and/or random and/or predetermined data to the malicious code.” col. 10 lines 18-30 “This alteration engine's technique of injecting code avoids malware detection by dynamically introducing unused blocks of code 604 into the original malware executables 602.” The benign code or predetermined benign code is injected into the code cave.), and training the model using the adversarial training sample (Smith: col. 6 lines 39-45 “When the synthesized malware candidates 828 and/or variant candidates are classified as benign and confirmed as operational, a target classifier engine 110 generates a vulnerability report 112, the malware profiles 818 are made available and the operational synthesized malware 822 are made available for training or used to generate malware training data.” Col. 10 lines 47-54 “While traditional machine learning models do not provide a strong defense against an attack intentionally designed to cause misclassifications, training machine learning algorithms 820 on synthesized malware samples and/or variants and/or comparing input sampled attributed to information stored in malware profiles 818 minimize and/or prevent malware misclassifications.” Col. 3 lines 12-37 “the systems apply a random sample contraction that iteratively and automatically reduces the search set for the alterations that yield the sought after operational malware and/or their variants that are prone to a misclassification. The optimization calculations continue recursively until operational malware prone to misclassifications and any associated synthesized operational malware, variants, and/or its family members are identified or substantially identified in the sample. The calculations are thereafter repeated automatically, continuously and/or periodically so that the entire synthesized malware sample set is screened and an update of the malware and/or its attributes is written to a memory 804 to ensure that the machine learning algorithms 820 can train on these forms and/or precisely track and/or identify the operational malware instances, variants, its family members, and/or etc. that are likely to be misclassified when traditional malware detection systems are exposed to them” The machine learning models are then trained on the synthesized malware samples. The process repeats periodically and the synthesized malware sample is updated such that the model is able to be trained on the synthesized malware sample.). 
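The generation step that the rejection pieces together from Smith (code caving) and Chat (label reassignment, discussed below) can be summarized in a short sketch. This is a simplified model under stated assumptions: samples are treated as raw byte strings, a "code cave" is just a run of zero bytes, and nothing here preserves executability the way the cited references require.

```python
# Simplified sketch only: byte-level code caving plus label reassignment, not Smith's or Chat's system.
def find_code_cave(malicious: bytes, min_size: int = 64) -> int:
    """Return the offset of the first run of at least min_size zero bytes, or -1 if none exists."""
    run_start, run_len = -1, 0
    for i, b in enumerate(malicious):
        if b == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len >= min_size:
                return run_start
        else:
            run_len = 0
    return -1

def make_adversarial_sample(malicious: bytes, benign: bytes, min_size: int = 64):
    """Insert a benign-derived payload into a code cave and relabel the sample as benign (label 0)."""
    cave = find_code_cave(malicious, min_size)
    if cave < 0:
        return None
    payload = benign[:min_size]                                   # payload generated from a benign sample
    patched = malicious[:cave] + payload + malicious[cave + len(payload):]
    return patched, 0                                             # 0 = "benign" label assigned to the new sample
```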
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Abusnaina with Smith in order to incorporate the one or more machine learning models are iteratively trained using, at least, generated adversarial training samples, wherein, during each iteration: the adversarial training sample for such iteration is generated by: determining a code cave in a malicious sample, generating a payload based on a benign sample, inserting the generated payload at the determined code cave, and training the model using the adversarial training sample. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving the training and accuracy of malware detection (Smith: col. 4 lines 13-50). However Abusnaina-Smith does not explicitly disclose determining a binary classifier; and changing a labeling of the adversarial training sample to benign. Chat discloses changing a labeling of the adversarial training sample to benign (Chat: para.0005 “The embodiments may let the adversarial process determine whether each data sample (e.g., observed label) in the untrusted dataset may include a true label or may be reassigned with an estimated true label determined based on the trusted labels in the trusted dataset, the features of the data, and the observed label. … In some embodiments, the updated dataset with one or more reassigned labels for untrusted dataset and/or trusted dataset in the trusted dataset may be used to train the machine learning model” para.0075 “At step 506, the bias correction computing device 102 may update observed labels in the untrusted dataset based at least in part on trusted labels in the trusted dataset and injecting noise in the untrusted dataset. For example, adversarial algorithm 310 may inject noise in the untrusted dataset 308 based on trusted labels in the trusted dataset 306 to generate updated training data 316. Adversarial algorithm 310 may inject noise into the untrusted dataset 308 to reassign labels to the data samples in the untrusted dataset.” Noise may be introduced to untrusted dataset, i.e. adversarial training sample, to reassign the label to be trusted, i.e. benign.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Abusaina-Smith with that of Chat in order to incorporate changing a labeling of the adversarial training sample to benign. Chat shows the process of adding data to untrusted samples in order to flip a label of the sample, similar to that of Smith that injects bits into code caves until a sample flips from adversarial to benign, and it would be obvious to apply this process of Chat such that the new samples that used to be adversarial are then labeled as benign. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved accuracy of the machine learning model (Chat: para.0005). However Abusaina-Smith-Chat does not explicitly disclose determining a binary classifier. Radinsky discloses determining a binary classifier (Radinsky: para.0096 “FIG. 7 illustrates a computer-implemented engine selection method in accordance with the disclosed architecture. At 700, file properties of an unknown file are inspected. At 702, one or more candidate anti-malware classification engines are selected from a set of classification engines based on the file properties. 
At 704, the unknown file is processed using the one or more candidate anti-malware classification engines to output classification information for each of the candidate anti-malware classification engines. At 706, the unknown file is classified based on a single output or multiple outputs of the one or more of the candidate anti-malware classification engines.” Based on properties of the file to be inspected for malware, one or more classifiers are selected for classifying the file.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Abusnaina-Smith-Chat with Radinsky in order to incorporate determining a binary classifier. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved accuracy of malware detection by using the most accurate models based on file type (Radinsky: para.0001-0003). Regarding Claim 2, Abusnaina-Smith-Chat-Radinsky discloses claim 1 as set forth above. However Abusnaina does not explicitly disclose wherein the binary classifier classifies the executable sample where a probability of being adversarial is above a defined threshold. Smith discloses wherein the binary classifier classifies the executable sample where a probability of being adversarial is above a defined threshold (Smith: col.7 lines 23-25 “When the malware classification score is above a predetermined detection threshold, the surrogate model classifies the file or file-less malware as malicious” the classification score being above a threshold classifies the malware as malicious). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Abusnaina with Smith in order to incorporate wherein the binary classifier classifies the executable sample where a probability of being adversarial is above a defined threshold. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving the training and accuracy of malware detection (Smith: col. 4 lines 13-50). Regarding Claim 3, Abusnaina-Smith-Chat-Radinsky discloses claim 2 as set forth above. However Abusnaina-Smith-Chat-Radinsky does not explicitly disclose wherein the one or more machine learning models comprises a stack of machine learning models and wherein the probability of being adversarial is an aggregated probability of the outputs of the machine learning models in the stack of machine learning models. Radinsky discloses wherein the one or more machine learning models comprises a stack of machine learning models (Radkinsky: para.0025 “Here, the selection component 114 selects two candidate expensive classification engines (from the multiple different engines 104): a first expensive engine 124 and a third expensive engine 126 each of which provides some level of expertise in processing and identifying the unknown entity 110.” A plurality of models are selected.) 
and wherein the probability of being adversarial is an aggregated probability of the outputs of the machine learning models in the stack of machine learning models (Radinsky: para.0026 “Where multiple candidate expensive engines 118 are ultimately selected, an aggregation component 128 can be employed to process the corresponding outputs of classification information 122 and output an overall detection output 130 for the unknown entity 110.” Para.0086 “The weighted score S for the combined anti-malware engines is:… where a_i is the predicted accuracy of engine 1, and I represents the indicator function where it is assigned the value one if the lth anti-malware engine detects that the unknown file is malware. The denominator ensures that the value of S is less than or equal to one. A file is determined to be malware by this method if: [[S is greater than or equal to T]]” The outputs of each model are aggregated to produce a single output 130 in Fig. 1 by weighting each result to generate an aggregated probability S, and when S is greater than T, it is determined to be malware.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Abusnaina-Smith-Chat with Radinsky in order to incorporate wherein the one or more machine learning models comprises a stack of machine learning models and wherein the probability of being adversarial is an aggregated probability of the outputs of the machine learning models in the stack of machine learning models. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved accuracy of malware detection by using the most accurate models based on file type (Radinsky: para.0001-0003).

Regarding Claim 4, Abusnaina-Smith-Chat-Radinsky discloses claim 2 as set forth above. However Abusnaina-Smith does not explicitly disclose wherein the defined threshold is a probability greater than 0.5. Chat discloses wherein the defined threshold is a probability greater than 0.5 (Chat: para.0068 “A confidence of the adversarial algorithm 310 in determining true labels of the data samples in the untrusted dataset 308 can be determined for each data sample in untrusted dataset 308 by confidence determiner 312 using eq. 14 above. Denoising engine 314 may recover true labels of the data samples in the untrusted dataset 308 based on the confidence corresponding to the data sample being above or below predetermined threshold (e.g., 0.5), using eq. 15.” A predetermined threshold of 0.5 may be used to determine if a data sample is untrusted or not). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Abusnaina-Smith with that of Chat in order to incorporate wherein the defined threshold is a probability greater than 0.5. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved accuracy of the machine learning model (Chat: para.0005).
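Radinsky's weighted-score formula is elided in the quotation above, so the sketch below only follows the quoted description: each engine's detection indicator is weighted by its predicted accuracy and normalized so that the score S stays at or below one, and the file is flagged when S exceeds a defined threshold (a 0.5-style threshold, echoing the Chat mapping for claim 4). The function names and example numbers are illustrative assumptions, not values taken from the references.

```python
# Hedged sketch of accuracy-weighted aggregation over a stack of detectors (claims 2-4 discussion).
from typing import Sequence

def aggregated_score(detections: Sequence[bool], accuracies: Sequence[float]) -> float:
    """S = sum(a_i * I_i) / sum(a_i), where I_i = 1 if engine i flags the file as malware."""
    num = sum(a for a, hit in zip(accuracies, detections) if hit)
    den = sum(accuracies)
    return num / den if den else 0.0

def classify(detections: Sequence[bool], accuracies: Sequence[float], threshold: float = 0.5) -> bool:
    """Flag as adversarial/malware when the aggregated probability exceeds the defined threshold."""
    return aggregated_score(detections, accuracies) > threshold

# Example: three stacked engines with predicted accuracies 0.9, 0.7, 0.6, two of which flag the file:
# S = (0.9 + 0.6) / (0.9 + 0.7 + 0.6) ~= 0.68 > 0.5, so the file is treated as adversarial.
```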
Regarding Claim 8, Abusnaina discloses A system for adversarial malware threat prevention, the system comprising one or more processors and a data storage, the data storage comprising instructions for the one or more processors to execute (Abusnaina: para.0008-0011): an input module to receive an input executable sample (Abusaina: para.0026 “In some embodiments, feature extraction unit 220 receives the byte reset software 33 from byte resetting unit 214” the software 33 is obtained via the feature extraction unit.); an adversarial prevention module to extract features of the input executable sample and apply feature mapping to determine one or more components of the features (Abusaina: para.0032 “In some embodiments, after component-based feature extractor 223 generates the component-based features 35, graph construction unit 232 of graphical encoding unit 230 receives the component-based features 35 and commences the process of graphically encoding the component-based features 35.” para.0027 “In some embodiments, in order to extract the component-based features from the byte reset software 33, section extractor 224 of component-based feature extractor 223 receives the byte reset software 33 and commences the process of partitioning the byte reset software 33 into distinct sections or components. In some embodiments, section extractor 224 is configured to partition byte reset software 33 into the distinct plurality of sections such that each section is independent from another section and may be represented using a plurality of component-based feature representations (described further below with reference to feature extractor 225). … An example of distinct sections (e.g., 81-82) partitioned by section extractor 224 is illustrated in FIG. 2B. In some embodiments, each section is provided by section extractor 224 to feature extractor 225 for component-based feature extraction.” Para.0032 “In some embodiments, the graph 36 generated by the graph construction unit 232 is a graph that includes a plurality of isolated subgraphs (e.g., subgraph 61, subgraph 62, subgraph 63, subgraph 64, subgraph 65 of FIG. 2B), with each subgraph 61-65 representing a component in the software 30 and each feature representation of the component (e.g., component-based feature) is a node in the subgraph.” Fig. 2B 61-65. In Fig. 2A the feature extraction unit 220 and the graphical encoding unit together extract features from the software and maps the features to generate a set of components, such as in Fig. 2B. Examiner notes that the features and components of abusnaina are components and features respectively in view of claim language.), and a binary classifier representing whether the executable sample is adversarial using one or more machine learning models (Abusaina: para.0038 “In some embodiments, malware detection unit 260 is software configured to generate a malware indicator 42 that indicates whether the received vector 40 (and thus the corresponding portion of software 30) is malicious or not malicious. In some embodiments, malware detection unit 260 may utilize a tree learning structure, such as, for example, a Light Gradient Boosting Machine (LightGBM) tree learning structure, to detect whether the received vector 40 (that includes the vector and graphical representation of the component-based features 35 and monotonic features 39) is malicious or not malicious.” Using machine learning model, the malware detection unit determines whether or not the sample is malware or not, i.e. 
adversarial.), the one or more machine learning models taking the one or more components as input (Abusaina: para.0033 “In some embodiments, graph encoder 234 uses the graph attention network to encode the nodes representing different representations of the components and translate the graph into a vector representation.” Para.0037 “In some embodiments, vectorization unit 290 is configured to combine the monotonic features 39 with the graphical encoding output 38 into a vector 40. In some embodiments, vectorization unit 290 provides the vector 40 to malware detection unit 260.” Para.0038 “In some embodiments, malware detection unit 260 may utilize a tree learning structure, such as, for example, a Light Gradient Boosting Machine (LightGBM) tree learning structure, to detect whether the received vector 40 (that includes the vector and graphical representation of the component-based features 35 and monotonic features 39) is malicious or not malicious.” The features of the component are converted to vector form, and fed to the malware detection unit, and uses the vectorized components to detect if that portion of the software is malicious.), an output module to output the input executable sample unless the binary classifier indicates adversarial (Abusnaina: para.0038-0039 “In some embodiments, malware detection unit 260 is software configured to generate a malware indicator 42 that indicates whether the received vector 40 (and thus the corresponding portion of software 30) is malicious or not malicious….In some embodiments, malware removal unit 270 receives the malware indicator 42 and, when the malware indicator 42 indicates that the software is malicious, removes or nullifies the malicious software. In some embodiments, when the malware indicator 42 indicates that the software is not malicious, the non-malicious software is not removed or nullified.” The malware detection unit 260 and the malware removal unit 270 together make up the binary classifier, and if malware is detected, i.e. adversarial, the software is removed, otherwise the software is not removed, i.e. outputted.). However Abusnaina does not explicitly disclose to determine a binary classifier; the one or more machine learning models are iteratively trained using, at least, generated adversarial training samples, wherein during each iteration: the adversarial training sample for such iteration is generated by: determining a code cave in a malicious sample, generating a payload based on a benign sample, inserting the generated payload at the determined code cave, and changing a labeling of the adversarial training sample to benign, and training the model using the adversarial training sample. Smith discloses the one or more machine learning models are iteratively trained using, at least, generated adversarial training samples (Smith: Col. 3 lines 24-37 “The optimization calculations continue recursively until operational malware prone to misclassifications and any associated synthesized operational malware, variants, and/or its family members are identified or substantially identified in the sample. The calculations are thereafter repeated automatically, continuously and/or periodically so that the entire synthesized malware sample set is screened and an update of the malware and/or its attributes is written to a memory 804 to ensure that the machine learning algorithms 820 can train on these forms and/or precisely track and/or identify the operational malware instances, variants, its family members, and/or etc. 
that are likely to be misclassified when traditional malware detection systems are exposed to them” the machine learning models constantly trained on newly generated synthesized adversarial training samples), wherein, during each iteration: the adversarial training sample for such iteration is generated by: determining a code cave in a malicious sample (Smith: col. 10 lines 18-30 “FIG. 6 is a block diagram restructuring code by code caving. This alteration engine's technique of injecting code avoids malware detection by dynamically introducing unused blocks of code 604 into the original malware executables 602. The placement of code within blocks preserves the malware's functionality while exploiting traditional malware detector's vulnerability to misclassify the modified code as benign. While an exemplary code placement shown in italics and underlined is inserted in an intermediate portion or mid-section of original malware code, in other applications optimization techniques place code and/or data in other positions of the original malware code.” Code cave is identified in Fig. 6, within malicious sections of code to inject benign samples, such that the malware is still functional, but would be misclassified as benign.), generating a payload based on a benign sample (Smith: Fig. 6 col. 9 lines 4-7 “In FIG. 3, the malicious code is obfuscated by appending a large benign section of code and/or random and/or predetermined data to the malicious code.” col. 10 lines 18-30 “FIG. 6 is a block diagram restructuring code by code caving….This alteration engine's technique of injecting code avoids malware detection by dynamically introducing unused blocks of code 604 into the original malware executables 602.” A payload is generated that is known to be benign, or predetermined benign data, that is to be appended or injected into the code cave.), inserting the generated payload at the determined code cave (Smith: Fig. 6 col. 9 lines 4-7 “In FIG. 3, the malicious code is obfuscated by appending a large benign section of code and/or random and/or predetermined data to the malicious code.” col. 10 lines 18-30 “This alteration engine's technique of injecting code avoids malware detection by dynamically introducing unused blocks of code 604 into the original malware executables 602.” The benign code or predetermined benign code is injected into the code cave.), and training the model using the adversarial training sample (Smith: col. 6 lines 39-45 “When the synthesized malware candidates 828 and/or variant candidates are classified as benign and confirmed as operational, a target classifier engine 110 generates a vulnerability report 112, the malware profiles 818 are made available and the operational synthesized malware 822 are made available for training or used to generate malware training data.” Col. 10 lines 47-54 “While traditional machine learning models do not provide a strong defense against an attack intentionally designed to cause misclassifications, training machine learning algorithms 820 on synthesized malware samples and/or variants and/or comparing input sampled attributed to information stored in malware profiles 818 minimize and/or prevent malware misclassifications.” The machine learning models are then trained on the synthesized malware samples). 
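The iterative training described here can be summarized as a loop: each round, fresh adversarial samples are generated (for example with the code-caving sketch shown earlier), relabeled benign, added to the training set, and the detector is refit. A rough sketch follows, with the feature extractor and sample generator left as caller-supplied stand-ins rather than Smith's actual system, and the same assumed scikit-learn model as before.

```python
# Rough sketch of the iterative adversarial-training loop, under the assumptions stated above.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def adversarial_training(X, y, generate_batch, extract, rounds: int = 3):
    """X, y: existing labeled training data. generate_batch() yields (raw_sample, label) pairs per round."""
    clf = GradientBoostingClassifier()
    for _ in range(rounds):
        clf.fit(X, y)
        new = [(extract(s), lbl) for s, lbl in generate_batch()]  # e.g. code-caved samples relabeled benign
        if new:
            X = np.vstack([X] + [f.reshape(1, -1) for f, _ in new])
            y = np.concatenate([y, np.array([lbl for _, lbl in new])])
    clf.fit(X, y)  # final fit on the augmented training set
    return clf
```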
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Abusnaina with Smith in order to incorporate the one or more machine learning models are iteratively trained using, at least, generated adversarial training samples, wherein, during each iteration: the adversarial training sample for such iteration is generated by: determining a code cave in a malicious sample, generating a payload based on a benign sample, inserting the generated payload at the determined code cave, and training the model using the adversarial training sample. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving the training and accuracy of malware detection (Smith: col. 4 lines 13-50). However Abusnaina-Smith does not explicitly disclose to determine a binary classifier; and changing a labeling of the adversarial training sample to benign. Chat discloses changing a labeling of the adversarial training sample to benign (Chat: para.0005 “The embodiments may let the adversarial process determine whether each data sample (e.g., observed label) in the untrusted dataset may include a true label or may be reassigned with an estimated true label determined based on the trusted labels in the trusted dataset, the features of the data, and the observed label. … In some embodiments, the updated dataset with one or more reassigned labels for untrusted dataset and/or trusted dataset in the trusted dataset may be used to train the machine learning model” para.0075 “At step 506, the bias correction computing device 102 may update observed labels in the untrusted dataset based at least in part on trusted labels in the trusted dataset and injecting noise in the untrusted dataset. For example, adversarial algorithm 310 may inject noise in the untrusted dataset 308 based on trusted labels in the trusted dataset 306 to generate updated training data 316. Adversarial algorithm 310 may inject noise into the untrusted dataset 308 to reassign labels to the data samples in the untrusted dataset.” Noise may be introduced to untrusted dataset, i.e. adversarial training sample, to reassign the label to be trusted, i.e. benign.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Abusaina-Smith with that of Chat in order to incorporate changing a labeling of the adversarial training sample to benign. Chat shows the process of adding data to untrusted samples in order to flip a label of the sample, similar to that of Smith that injects bits into code caves until a sample flips from adversarial to benign, and it would be obvious to apply this process of Chat such that the new samples that used to be adversarial are then labeled as benign. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved accuracy of the machine learning model (Chat: para.0005). However Abusaina-Smith-Chat does not explicitly disclose to determine a binary classifier Radinsky discloses determine a binary classifier (Radinsky: para.0096 “FIG. 7 illustrates a computer-implemented engine selection method in accordance with the disclosed architecture. At 700, file properties of an unknown file are inspected. At 702, one or more candidate anti-malware classification engines are selected from a set of classification engines based on the file properties. 
At 704, the unknown file is processed using the one or more candidate anti-malware classification engines to output classification information for each of the candidate anti-malware classification engines. At 706, the unknown file is classified based on a single output or multiple outputs of the one or more of the candidate anti-malware classification engines.” Based on properties of the file to be inspected for malware, one or more classifiers are selected for classifying the file.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Abusnaina-Smith-Chat with Radinsky in order to incorporate determine a binary classifier. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved accuracy of malware detection by using the most accurate models based on file type (Radinsky: para.0001-0003). Regarding Claim 9-10, they do not teach nor further define over the limitations of claims 2-3, therefore the supporting rationale for the rejection to claims 2-3 apply equally as well to that of claims 9-10. Claim(s) 5, 11 are rejected under 35 U.S.C. 103 as being unpatentable over Abusnaina et al. (hereinafter Abusnaina, US 2023/0289441 A1) in view of Smith et al. (hereinafter Smith, US 12,019,746 B1) in view of Chatterjee et al. (hereinafter Chat, US 2022/0179840 A1) in view of Radinsky et al. (hereinafter Radinsky, US 2012/0084859 A1) further in view of Cazan et al. (hereinafter Cazan, US 2020/0005082 A1). Regarding Claim 5, Abusnaina-Smith-Chat-Radinsky discloses claim 1 as set forth above. However Abusnaina-Smith-Chat-Radinsky does not explicitly disclose receiving a further input executable sample and, where the input executable sample was determined to be adversarial, comparing one or more adversarial characteristics of the input executable sample to those of the further input executable sample to determine if the further input executable sample is adversarial. Cazan discloses receiving a further input executable sample and, where the input executable sample was determined to be adversarial, comparing one or more adversarial characteristics of the input executable sample to those of the further input executable sample to determine if the further input executable sample is adversarial (Cazan: para.0021 “A secure fuzzy hashing algorithm is used to enable comparison between two or more binary datasets (e.g., binary files, binary data currently being downloaded, network traffic, etc.). FIG. 3 illustrates select components of an example system 300 configured to compare a received binary file to a known malware file, in order to classify the received file as malware or clean. The example system 300 includes a byte n-gram embedding model system 302, a network 304, and a binary data source 306. Byte n-gram embedding model system 302 includes one or more processors 308, a network interface 310, and memory 312. Binary data to be processed 314, feature extractor 112, trained neural network 102, a hash generator 316, a classifier 318, and a hash library 320 are maintained in the memory 312. Memory 312 may also store a generated hash 322.” System takes a known malware file, the first input executable sample determined to be adversarial, and compares it to a newly received binary file to determine if the new binary file is malware, adversarial, therefore these are adversarial characteristics). 
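As an illustration of the comparison step mapped to Cazan, the sketch below checks a further input sample against one already determined to be adversarial. Cazan relies on a secure fuzzy-hashing algorithm; as a self-contained stand-in this uses byte n-gram Jaccard similarity, and the 0.8 cut-off is an arbitrary assumption rather than a value from any cited reference.

```python
# Stand-in for a fuzzy-hash comparison of adversarial characteristics (not Cazan's algorithm).
def ngrams(data: bytes, n: int = 4) -> set:
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def looks_adversarial(new_sample: bytes, known_adversarial: bytes, cutoff: float = 0.8) -> bool:
    """Flag the further input sample when it closely matches a sample already determined adversarial."""
    a, b = ngrams(new_sample), ngrams(known_adversarial)
    if not a or not b:
        return False
    similarity = len(a & b) / len(a | b)   # Jaccard similarity over byte n-grams
    return similarity >= cutoff
```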
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Abusnaina-Smith-Chat-Radinsky with Cazan in order to incorporate receiving a further input executable sample and, where the input executable sample was determined to be adversarial, comparing one or more adversarial characteristics of the input executable sample to those of the further input executable sample to determine if the further input executable sample is adversarial. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved malware detection using similar files (Cazan: para.0021, para.0003).

Regarding Claim 11, Abusnaina-Smith-Chat-Radinsky discloses claim 8 as set forth above. However Abusnaina-Smith-Chat-Radinsky does not explicitly disclose wherein the input module receives a further input executable sample and, where the input executable sample was determined to be adversarial, the adversarial prevention module compares one or more adversarial characteristics of the input executable sample to those of the further input executable sample to determine if the further input executable sample is adversarial. Cazan discloses wherein the input module receives a further input executable sample and, where the input executable sample was determined to be adversarial, the adversarial prevention module compares one or more adversarial characteristics of the input executable sample to those of the further input executable sample to determine if the further input executable sample is adversarial (Cazan: para.0021 “A secure fuzzy hashing algorithm is used to enable comparison between two or more binary datasets (e.g., binary files, binary data currently being downloaded, network traffic, etc.). FIG. 3 illustrates select components of an example system 300 configured to compare a received binary file to a known malware file, in order to classify the received file as malware or clean. The example system 300 includes a byte n-gram embedding model system 302, a network 304, and a binary data source 306. Byte n-gram embedding model system 302 includes one or more processors 308, a network interface 310, and memory 312. Binary data to be processed 314, feature extractor 112, trained neural network 102, a hash generator 316, a classifier 318, and a hash library 320 are maintained in the memory 312. Memory 312 may also store a generated hash 322.” The system takes a known malware file, the first input executable sample determined to be adversarial, and compares it to a newly received binary file to determine if the new binary file is malware, i.e. adversarial; therefore these are adversarial characteristics. The software of the system receives the input sample, and the software of the system performs the comparison, i.e. the input module and adversarial prevention module respectively.).

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Abusnaina-Smith-Chat-Radinsky with Cazan in order to incorporate wherein the input module receives a further input executable sample and, where the input executable sample was determined to be adversarial, the adversarial prevention module compares one or more adversarial characteristics of the input executable sample to those of the further input executable sample to determine if the further input executable sample is adversarial.
One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved malware detection using similar files (Cazan: para.0021, para.0003). Claim(s) 6-7, 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Abusnaina et al. (hereinafter Abusnaina, US 2023/0289441 A1) in view of Smith et al. (hereinafter Smith, US 12,019,746 B1) in view of Chatterjee et al. (hereinafter Chat, US 2022/0179840 A1) in view of Radinsky et al. (hereinafter Radinsky, US 2012/0084859 A1) further in view of Gray et al. (hereinafter Gray, US 2024/0070276 A1). Regarding Claim 6, Abusnaina-Smith-Chat-Radinsky discloses claim 1 as set forth above. However Abusnaina-Smith-Chat-Radinsky does not explicitly disclose wherein determining the one or more components of the features comprises scoring features and determining a most prominent feature, and using the one or more components of the most prominent feature. Gray discloses scoring features and determining a most prominent feature (Gray: para.0033-0036 “Based on the value of the combined risk score, priority calculator 108 may determine a scan priority for file 122. For example, when the value of the combined risk score is within a first range (e.g., between 10 and 15), the scan priority may be a first scan priority. The first scan priority may represent high risk of malware exposure. For example, file 122 may have a high risk of malware exposure when file 122 has not been scanned yet and is being launched as a process. As another example, file 122 may have a high risk when file 122 is not scanned yet and is being loaded by another process for which section sync is created. As another example, file 122 may have a high risk when file 122 has been modified by a process that is flagged as suspicious or malicious.” Using the risk score of the file, the scan priority is determined for each file to be tested for malware, each range indicating a high medium or low risk of malware exposure.), and using the most prominent feature (Gray: para.0038 “Malware scan queue 110 may store a list of files for malware scans by malware scanner 112. The list may be sorted by scan priority. When a file is to be scanned by malware scanner 112, such as file 122, the file may be removed from malware scan queue 110 to be scanned by malware scanner 112.” Para.0053 “Method 200 may further include determining a scan priority, at step 206. For example, priority calculator 108 may determine the scan priority based on the combined risk score. Method 200 may further include performing a malware scan based on the scan priority, at step 208. For example, malware scanner 112 may perform a malware scan on file 122 based on the scan priority.” The files, i.e. features, are tested in order of score.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Abusnaina-Smith-Chat-Radinsky and Gray in order to incorporate scoring features and determining a most prominent feature and using the most prominent feature, and apply this idea to wherein determining the one or more components of the features, and the use of the components of the features as established in Abusnaina-Smith-Chat-Radinsky. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of reducing the impact of malware on the system (Gray: para.0008). Regarding Claim 7, Abusnaina-Smith-Chat-Radinsky discloses claim 1 as set forth above. 
However Abusnaina-Smith-Chat-Radinsky does not explicitly disclose wherein determining the one or more components of the features comprises performing principal component analysis. Gray discloses performing principal file analysis (Gray: para.0033-0036 “Based on the value of the combined risk score, priority calculator 108 may determine a scan priority for file 122. For example, when the value of the combined risk score is within a first range (e.g., between 10 and 15), the scan priority may be a first scan priority. The first scan priority may represent high risk of malware exposure. For example, file 122 may have a high risk of malware exposure when file 122 has not been scanned yet and is being launched as a process. As another example, file 122 may have a high risk when file 122 is not scanned yet and is being loaded by another process for which section sync is created. As another example, file 122 may have a high risk when file 122 has been modified by a process that is flagged as suspicious or malicious.” Using the risk score of the file, the scan priority is determined for each file to be tested for malware, each range indicating a high, medium, or low risk of malware exposure.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Abusnaina-Smith-Chat-Radinsky and Gray in order to incorporate performing principal file analysis, and apply this idea to determining the one or more components of the features, and measure the importance of components. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of reducing the impact of malware on the system (Gray: para.0008). Regarding Claims 12-13, they do not teach nor further define over the limitations of claims 6-7, therefore the supporting rationale for the rejection of claims 6-7 applies equally as well to that of claims 12-13. Claim(s) 14-15 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yuste et al. (hereinafter Yuste, “Optimization of Code Caves in malware binaries to evade machine learning detectors”, NPL FEB 2022 attached.) in view of Smith et al. (hereinafter Smith, US 12,019,746 B1) in view of Chatterjee et al. (hereinafter Chat, US 2022/0179840 A1). Regarding Claim 14, Yuste discloses A computer-executed method for adversarial sample generation, the method comprising (Yuste: pg. 2 left column “The main contribution of this paper is the proposal of a new method to derive AEs from existing malware binaries that were previously detected by a DL model. Furthermore, our method produces fully functional AEs in a reasonable amount of time. To evaluate the performance of our proposal, we tested the samples produced by our method against a state-of-the-art malware detector based on DL ( MalConv ) that was recently proposed ( Raff et al., 2018 ).” Method for generating a set of Adversarial example samples): Iteratively performing (Yuste: pg. 3 “The aforementioned method is summarized in Fig. 1 . First, we introduce a predefined unused space before each section in the PE file (step 1). Then, we test if the sample is detected by the target DL model (step 2). If not, the algorithm stops (step 7). Otherwise, we proceed to optimize the content of the introduced spaces (step 3).
If the modified sample is not detected by the target DL after the optimization phase (step 4), and the size of the unused space is not larger than a predefined threshold which will be described in the following sections (step 5), we increase the size of the unused space (step 6) and we start a new iteration of the algorithm. This procedure is repeated until an AE is crafted (i.e., it evades the DL model) or the total size of the introduced space is larger than the predefined threshold.” The system iteratively generates the segments, determines code caves, generates the payloads, i.e. optimize the content step 3, and inserts them into the pe file, as seen in steps 10-11 in Fig. 4. Seen in Table 10 in Fig. 10, it is seen that a plurality of samples are generated, therefore this process iterates for each sample as well.): receiving an executable malicious sample (Yuste: pg. 3 3.1 Modification of samples “This procedure starts with a sample consisting in a functional malicious PE file. Each PE file can be divided in two parts: headers and sections. Headers contain the information necessary for the operating system to load the PE file in the memory of the computer. On the other hand, sections contain the code and data necessary for the correct execution of the PE file.” Pg. 4 left column “Fig. 2 a shows an example of an original PE file structure on disk and the associated memory mapping. As shown in the figure, sections of the PE file are loaded in memory in the same order as they are stored on disk.” See Fig. 4 step 1 wherein a new malicious PE file is obtained.); generating executable segments from the executable malicious sample (Yuste: pg. 4 right column “1. Parse the header of the PE file. 2. Extract the starting address of each section on disk. 3. Choose a target section t. Let A t be its starting address on disk.” Sections are generated from the original PE file.); determining one or more code caves as sparse spaces in the executable segments (Yuste: pg. 4 right column “4. Determine the size S t of the code cave to introduce before section t. 5. For every section i in the PE binary starting at a position A i such that A i ≥ A t , modify the starting address in the header of the PE file (i.e., modify the corresponding entry in the sections table ) such that the new starting address is A i= A i + S t. 6. Shift the content at disk of each section whose header has been modified in step 5 to its new starting address.” Code caves are determined between the segments.); inserting the generated payloads into at least one of the code caves of the executable malicious sample (Yuste: pg. 6 left column “When a solution needs to be evaluated, it is necessary to write the bytes which are currently in the solution, back into their corresponding unused blocks of the PE file. Then, the new PE file is provided as an input to the DL model to evaluate the solution (step 4). The quality of the solution is measured by the output of the DL model. Note that this is a value ranging from 0 to 1, where values close to 1 indicate malicious, while values close to 0 indicate benign.” The solution, i.e. the payload, is inserted back into the malicious PE file, the sample,), wherein the payloads are inserted in different code caves than those that received payloads in a previous iteration (Yuste: pg. 3 “The aforementioned method is summarized in Fig. 1 . First, we introduce a predefined unused space before each section in the PE file (step 1). Then, we test if the sample is detected by the target DL model (step 2). 
If not, the algorithm stops (step 7). Otherwise, we proceed to optimize the content of the introduced spaces (step 3). If the modified sample is not detected by the target DL after the optimization phase (step 4), and the size of the unused space is not larger than a predefined threshold which will be described in the following sections (step 5), we increase the size of the unused space (step 6) and we start a new iteration of the algorithm. This procedure is repeated until an AE is crafted (i.e., it evades the DL model) or the total size of the introduced space is larger than the predefined threshold.” The system iteratively generates the segments, determines code caves, generates the payloads, i.e. optimize the content step 3, and inserts them into the pe file, as seen in steps 10-11 in Fig. 4. Seen in Table 10 in Fig. 10, it is seen that a plurality of samples are generated, therefore this process iterates for each sample as well. Each payload is inserted into a different malicious PE file, therefore they are always inserted into a different code cave than a previous iteration. Additionally, pg. 4 left column “to the best of our knowledge, it has not yet been fully exploited to generate AEs for malware detection. The introduction of unused space has been only explored between the headers and the first section. In particular, we extend the current proposals by creating code caves in between sections. Furthermore, we propose a dynamic widening of the existing available space, depending on the binary explored, with the aim of constructing a successful AE”, and right column shows, varying the size of the unused space and location for each PE file.); outputting the executable sample with the inserted payloads (Yuste: pg. 6 left column “The optimization process starts by building an initial population of solutions (step 3). Each solution of the population is built by introducing a random byte in each position of an empty array with a length equal to the total unused space in bytes. The actual size of this population is an experimental parameter and it might vary among different algorithmic designs. Moreover, it is important to remark that the representation of a solution in this context is only valid for the tackled optimization problem. When a solution needs to be evaluated, it is necessary to write the bytes which are currently in the solution, back into their corresponding unused blocks of the PE file. Then, the new PE file is provided as an input to the DL model to evaluate the solution (step 4). The quality of the solution is measured by the output of the DL model. Note that this is a value ranging from 0 to 1, where values close to 1 indicate malicious, while values close to 0 indicate benign. Therefore, the lower the output, the better the solution. As it was described in the introduction of the GAs, once the population is conformed, it evolves into a new generation by applying the selection (step 6), crossover (step 7), and mutation (step 8) operators, until the stopping criterion (evaluated in step 5) is met. This stopping criterion is usually related to the execution time, to the number of generations, or to the quality of the obtained solutions in the population. We detail the stopping criterion used in Section 4 .” the samples are output in step 11 of Fig. 4). However Yuste does not explicitly disclose generating payloads based on patterns in the executable segments, the generated payloads are based on benign samples; and changing a labeling of the executable sample to benign. 
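Before the Smith and Chat mappings below, the section-shifting and write-back steps that Yuste is relied on for (opening unused space before a chosen section, then writing the optimized bytes back into that space) can be pictured with a toy in-memory model. This is a simplified sketch under assumed section layouts; real PE files also require header and alignment fixes that it ignores.

# Simplified sketch of the code-cave idea in the Yuste citations above: open an
# unused gap before a chosen section by shifting later section offsets, then write
# payload bytes into that gap. The layout and sizes here are hypothetical.
from dataclasses import dataclass

@dataclass
class Section:
    name: str
    offset: int      # starting address on "disk"
    data: bytes

def insert_cave(sections: list[Section], target: str, cave_size: int) -> int:
    """Open a cave of cave_size bytes before the target section; return its offset."""
    t = next(s for s in sections if s.name == target)
    cave_offset = t.offset
    for s in sections:
        if s.offset >= cave_offset:          # shift the target and every later section
            s.offset += cave_size
    return cave_offset

def write_payload(image: bytearray, cave_offset: int, payload: bytes) -> None:
    """Write payload bytes into the (zero-filled) cave region of the raw image."""
    image[cave_offset:cave_offset + len(payload)] = payload

# Hypothetical usage with a toy two-section layout.
sections = [Section(".text", 0x400, b"\x90" * 16), Section(".data", 0x800, b"\x00" * 16)]
cave_at = insert_cave(sections, ".data", cave_size=0x200)
image = bytearray(0x1000)
write_payload(image, cave_at, b"optimized-bytes")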
Smith discloses generating payloads based on patterns in the executable segments (Smith: col. 7 lines 30-35 “The saliency vector expresses the weights the surrogate model assigns to each byte in the binary or stripped binary during the system's malware classification process. The greater the weight, the more influence the byte has on the malware classification score. By this system, individual bytes are altered to reduce the malware classification score. The objective of the targeting engine is to minimize the amount of alterations that the alteration engine performs to achieve the greatest shift in the malware classification score that is at or below the predetermined detection threshold. Once the malware classification score drops to or falls below the predetermined detection threshold, the surrogate model classifies the file or file-less instances as not malicious despite its malicious functionality.” the system first determines which bytes in the pattern of bytes of the executable segments, i.e. the malicious binary, can be altered in order to minimize the amount of changes, i.e. injected benign code in Fig. 6 col. 9 lines 4-7 and Fig. 3 and fig. 6. Therefore the payload is generated based on patterns in the executable segments.), the generated payloads are based on benign samples (Smith: Fig. 6 col. 9 lines 4-7 “In FIG. 3, the malicious code is obfuscated by appending a large benign section of code and/or random and/or predetermined data to the malicious code.” col. 10 lines 18-30 “FIG. 6 is a block diagram restructuring code by code caving….This alteration engine's technique of injecting code avoids malware detection by dynamically introducing unused blocks of code 604 into the original malware executables 602.” A payload is generated that is known to be benign, or predetermined benign data, that is to be appended or injected into the code cave.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date to combine Yuste with Smith in order to incorporate generating payloads based on patterns in the executable segments, the generated payloads are based on benign samples. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improving the training and accuracy of malware detection (Smith: col. 4 lines 13-50). However Yuste-Smith does not explicitly disclose and changing a labeling of the executable sample to benign. Chat discloses changing a labeling of the executable sample to benign (Chat: para.0005 “The embodiments may let the adversarial process determine whether each data sample (e.g., observed label) in the untrusted dataset may include a true label or may be reassigned with an estimated true label determined based on the trusted labels in the trusted dataset, the features of the data, and the observed label. … In some embodiments, the updated dataset with one or more reassigned labels for untrusted dataset and/or trusted dataset in the trusted dataset may be used to train the machine learning model” para.0075 “At step 506, the bias correction computing device 102 may update observed labels in the untrusted dataset based at least in part on trusted labels in the trusted dataset and injecting noise in the untrusted dataset. For example, adversarial algorithm 310 may inject noise in the untrusted dataset 308 based on trusted labels in the trusted dataset 306 to generate updated training data 316. 
Adversarial algorithm 310 may inject noise into the untrusted dataset 308 to reassign labels to the data samples in the untrusted dataset.” Noise may be introduced to the untrusted dataset, i.e. adversarial training sample, to reassign the label to be trusted, i.e. benign.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Yuste-Smith with that of Chat in order to incorporate changing a labeling of the executable sample to benign. Chat shows the process of adding data to untrusted samples in order to flip a label of the sample, similar to that of Yuste-Smith that injects bits into code caves until a sample flips from adversarial to benign, and it would be obvious to apply this process of Chat such that the new samples that used to be adversarial are then labeled as benign. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved accuracy of the machine learning model (Chat: para.0005). Regarding Claim 15, Yuste-Smith-Chat disclose claim 14 as set forth above. Yuste further discloses wherein generating the executable segments comprises performing a chunking function (Yuste: pg. 4 right column “1. Parse the header of the PE file. 2. Extract the starting address of each section on disk. 3. Choose a target section t. Let A t be its starting address on disk.” Sections are generated from the original PE file via a chunking function. Examiner notes that in para.0069, a chunking function is only described to result in segments from an initial file “block 304, the sample generator module 120 performs a chunking function to generate executable segments”). Regarding Claim 20, Yuste-Smith-Chat disclose claim 14 as set forth above. Yuste further discloses wherein determining the one or more code caves comprises sparse spaces that are greater than a predetermined size (Yuste: col.3 2.3 “We note that this approach to build AEs based on code caves refines previous work by handling together three related but separate tasks: (1) determining dynamically the smallest size needed to introduce one or more code caves for evading a ML detector (steps 1 and 6 in Fig. 1 );” the process determines the smallest possible size of a code cave for the final set of AEs, therefore determining code caves that are greater than this smallest size, predetermined size, for the final AEs.). Claim(s) 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Yuste et al. (hereinafter Yuste, “Optimization of Code Caves in malware binaries to evade machine learning detectors”, NPL FEB 2022 attached.) in view of Smith et al. (hereinafter Smith, US 12,019,746 B1) in view of Chatterjee et al. (hereinafter Chat, US 2022/0179840 A1) in view of Chen (US 2019/0080089 A1). Regarding Claim 16, Yuste-Smith-Chat discloses claim 14 as set forth above. However Yuste-Smith-Chat does not explicitly disclose wherein the executable segments are in bytecode. Chen discloses wherein the executable segments are in bytecode (Chen: para.0024 “FIG. 2 illustrates a block diagram 200 showing malware evasion attack prevention in accordance with some embodiments. The block diagram 200 includes a preprocessing block 202 to receive input, such as binaries, function calls, files, malware, etc.
In an example, the inputs at the preprocessing block 202 may include malware data, such as binaries, bytecode, executables, application packages (e.g., an Android package kit (APK)), to which n-gram may be applied to extract features. The preprocessing block 202 may separate labeled data from unlabeled data to generate a training data set and a testing data set.” The malware data that is to be preprocessed into training data includes bytecode.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Yuste-Smith-Chat with Chen in order to perform the simple substitution of the unspecified language type of the segments in Yuste with bytecode in Chen. The simple substitution of the unspecified language type of the segments in Yuste to wherein the executable segments are in bytecode of Chen would have been obvious to one of ordinary skill in the art because the substitution would have yielded predictable results, namely the generation of training datasets (Yuste: pg. 2 left column, Chen: para.0024). Regarding Claim 17, Yuste-Smith-Chat-Chen discloses claim 16 as set forth above. Yuste further discloses wherein generating the payloads comprises inputting the executable segments into a generative machine learning model (Yuste: Fig. 4 pg. 6 left column “The optimization process starts by building an initial population of solutions (step 3). Each solution of the population is built by introducing a random byte in each position of an empty array with a length equal to the total unused space in bytes. The actual size of this population is an experimental parameter and it might vary among different algorithmic designs. Moreover, it is important to remark that the representation of a solution in this context is only valid for the tackled optimization problem. When a solution needs to be evaluated, it is necessary to write the bytes which are currently in the solution, back into their corresponding unused blocks of the PE file. Then, the new PE file is provided as an input to the DL model to evaluate the solution (step 4). The quality of the solution is measured by the output of the DL model. Note that this is a value ranging from 0 to 1, where values close to 1 indicate malicious, while values close to 0 indicate benign. Therefore, the lower the output, the better the solution. As it was described in the introduction of the GAs, once the population is conformed, it evolves into a new generation by applying the selection (step 6), crossover (step 7), and mutation (step 8) operators, until the stopping criterion (evaluated in step 5) is met. This stopping criterion is usually related to the execution time, to the number of generations, or to the quality of the obtained solutions in the population. We detail the stopping criterion used in Section 4 .” the payloads are tested by inputting into the Deep learning model), the generative machine learning model trained using segments of malicious and benign samples (Yuste: pg. 12 4.4, “In each case, we indicated to the network if the sample was benign or malicious… Then, we have tested the retrained network by performing three different experiments. First, we have evaluated the detection rate of the AEs not used for training purposes. Notice that none of those AEs are classified as malware when using the original network. However, when using the retrained network, 95 out of 99 files are detected as malicious.
In our second experiment, we have applied our proposal to generate AEs from the original samples in the preliminary dataset, but targeting the retrained MalConv model instead of the original one. With the original network, our method was able to craft an AE from 99.00 % of the samples. However, when using the retrained MalConv network, the method was able to craft a new AE only from 21.00 % of the samples. Furthermore, the size needed to craft a successful AE was larger when targeting the retrained model (i.e., an average size of 5.80 % of the original sample in the case of the original network, versus 16.76 % in the case of the retrained one). Finally, the average running time increased from 685.25 s to 12,486.62 s.” the model is then retrained using the newly generated set of AEs). However Yuste-Smith-Chat does not explicitly disclose bytecode segments of malicious and benign samples. Chen discloses bytecode segments of malicious and benign samples (Chen: para.0024 “FIG. 2 illustrates a block diagram 200 showing malware evasion attack prevention in accordance with some embodiments. The block diagram 200 includes a preprocessing block 202 to receive input, such as binaries, function calls, files, malware, etc. In an example, the inputs at the preprocessing block 202 may include malware data, such as binaries, bytecode, executables, application packages (e.g., an Android package kit (APK)), to which n-gram may be applied to extract features. The preprocessing block 202 may separate labeled data from unlabeled data to generate a training data set and a testing data set.” The malware data that is to be preprocessed into training data includes bytecode.). Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Yuste-Smith-Chat with Chen in order to perform the simple substitution of the unspecified language type of the segments in Yuste with bytecode in Chen. The simple substitution of the unspecified language type of the segments in Yuste to wherein the executable segments are in bytecode of Chen would have been obvious to one of ordinary skill in the art because the substitution would have yielded predictable results, namely the generation of training datasets (Yuste: pg. 2 left column, Chen: para.0024). Claim(s) 18 are rejected under 35 U.S.C. 103 as being unpatentable over Yuste et al. (hereinafter Yuste, “Optimization of Code Caves in malware binaries to evade machine learning detectors”, NPL FEB 2022 attached.) in view of Smith et al. (hereinafter Smith, US 12,019,746 B1) in view of Chatterjee et al. (hereinafter Chat, US 2022/0179840 A1) in view of Chen (US 2019/0080089 A1) in view of Barker (“Malware Detection in Executables using Neural Networks” NPL 2017 attached). Regarding Claim 18, Yuste-Smith-Chat-Chen discloses claim 17 as set forth above. However Yuste-Smith-Chat-Chen do not explicitly disclose wherein the executable segments substantially comprise ‘.data’ segments. Barker discloses wherein the executable segments comprise ‘.data’ segments (Barker: pg. 8 “Analysis of the sparse-CAM for 224 randomly selected binaries by an expert malware analyst revealed that our model had 39-42% of it’s most important features located outside of the file header. In particular we saw indications of both executable code and data containing discriminative features…. .data …Malicious…669…Benign…423” The system of Barker tests using .data type executable segments.).
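As an aside on the bytecode and ‘.data’ limitations of claims 16-18, the byte n-gram featurization the Chen citation describes, applied to a ‘.data’-style segment of the kind Barker highlights, can be sketched as follows. The segment contents, section names, and feature count are assumptions for illustration only.

# Illustrative only: byte n-gram feature extraction in the spirit of the Chen
# citation, restricted to a '.data'-style segment as in the Barker citation.
from collections import Counter

def ngram_features(segment: bytes, n: int = 2, top_k: int = 8) -> dict[bytes, int]:
    """Return the top_k most frequent byte n-grams of a segment as features."""
    counts = Counter(segment[i:i + n] for i in range(len(segment) - n + 1))
    return dict(counts.most_common(top_k))

# Hypothetical segments keyed by section name; only '.data' is featurized here.
segments = {
    ".text": b"\x55\x8b\xec\x83\xec\x40" * 10,
    ".data": b"config=1;key=ABCD;" * 20,
}
features = ngram_features(segments[".data"])
print(features)   # n-gram counts such as these would feed a downstream classifier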
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Yuste-Smith-Chat-Chen with Barker in order to incorporate wherein the executable segments comprise ‘.data’ segments. One of ordinary skill in the art would have been motivated to combine because of the expected benefit of improved malicious data detection by checking different parts of the file than just the header (Barker: pg. 8). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Raugas et al. US 2015/0128263 A1 see para.0006, para.0022, Fig. 2, malware detection using a plurality of models with an aggregate score. Any inquiry concerning this communication or earlier communications from the examiner should be directed to EUI H KIM whose telephone number is (571)272-8133. The examiner can normally be reached 7:30-5 M-R, M-F alternating. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamal B Divecha can be reached on 571-272-5863. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /EUI H KIM/ EXAMINER, ART UNIT 2453 /KAMAL B DIVECHA/ SUPERVISORY PATENT EXAMINER, ART UNIT 2453
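For readers tracing the Yuste citations relied on in claims 14 and 17, the payload-optimization loop the examiner points to (random initial population, selection, crossover, mutation, detector-scored fitness) can be sketched as a toy genetic algorithm. The scoring function below is a stand-in objective, not MalConv, and every parameter is an assumption for illustration.

# A toy sketch of the genetic-algorithm payload optimization quoted from Yuste:
# evolve the bytes written into the code caves so a detector's maliciousness
# score drops. The detector here is a stand-in, not the actual DL model.
import random

CAVE_SIZE, POP, GENERATIONS = 64, 20, 50
random.seed(0)

def detector_score(payload: bytes) -> float:
    """Stand-in for the DL model: 1.0 ~ malicious, 0.0 ~ benign."""
    return 1.0 - (sum(payload) / (255 * len(payload)))   # arbitrary toy objective

def evolve() -> bytes:
    population = [bytes(random.randrange(256) for _ in range(CAVE_SIZE)) for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=detector_score)                # lower score = better
        parents = population[: POP // 2]
        children = []
        for _ in range(POP - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, CAVE_SIZE)            # single-point crossover
            child = bytearray(a[:cut] + b[cut:])
            child[random.randrange(CAVE_SIZE)] = random.randrange(256)  # mutation
            children.append(bytes(child))
        population = parents + children
    return min(population, key=detector_score)

best = evolve()
print(f"best payload score: {detector_score(best):.3f}")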

Prosecution Timeline

Jul 19, 2023
Application Filed
Dec 13, 2024
Non-Final Rejection — §103
Apr 16, 2025
Response Filed
Jul 24, 2025
Final Rejection — §103
Sep 29, 2025
Response after Non-Final Action
Oct 15, 2025
Request for Continued Examination
Oct 18, 2025
Response after Non-Final Action
Dec 31, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12549457
CREATING DECENTRALIZED MULTI-PARTY TRACEABILITY OF SLA USING A BLOCKCHAIN
2y 5m to grant Granted Feb 10, 2026
Patent 12519859
DETERMINING DATA MIGRATION STRATEGY IN HETEROGENEOUS EDGE NETWORKS
2y 5m to grant Granted Jan 06, 2026
Patent 12506818
METHOD AND SYSTEM FOR TIME SENSITIVE PROCESSING OF TCP SEGMENTS INTO APPLICATION LAYER MESSAGES
2y 5m to grant Granted Dec 23, 2025
Patent 12483462
Cloud Network Failure Auto-Correlator
2y 5m to grant Granted Nov 25, 2025
Patent 12470606
SYSTEMS AND METHODS FOR SCHEDULING FEATURE ACTIVATION AND DEACTIVATION FOR COMMUNICATION DEVICES IN A MULTIPLE-DEVICE ACCESS ENVIRONMENT
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
49%
Grant Probability
99%
With Interview (+52.9%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 156 resolved cases by this examiner. Grant probability derived from career allow rate.
