Prosecution Insights
Last updated: April 19, 2026
Application No. 18/067,501

EFFICIENT PROTOTYPING OF ADVERSARIAL ATTACKS AND DEFENSES ON TRANSFER LEARNING SETTINGS

Status: Final Rejection (§103)
Filed: Dec 16, 2022
Examiner: SHAUGHNESSY, AIDAN EDWARD
Art Unit: 2432
Tech Center: 2400 — Computer Networks
Assignee: DELL PRODUCTS, L.P.
OA Round: 4 (Final)
Grant Probability: 38% (At Risk)
Projected OA Rounds: 5-6
To Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 38% (3 granted / 8 resolved; -20.5% vs TC avg)
Interview Lift: +71.4% (resolved cases with interview)
Typical Timeline: 3y 7m average prosecution; 44 applications currently pending
Career History: 52 total applications across all art units

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 66.0% (+26.0% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 8 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments / Arguments

Regarding the rejection(s) of claims under 35 U.S.C. 103: Applicant's arguments, filed 01/02/2026, in view of the amended claims, have been fully considered but are not persuasive. Applicant argues that the Xu reference does not teach "constructing a configuration for applying a set of attacks and a set of defenses to a machine learning model, the configuration specifying the set of attacks, the set of defenses and an optimizer" and "using the optimizer that applied the given defenses to the model to modify the selected defenses based on the evaluation metrics." For instance, Applicant argues that "Xu fails to disclose or suggest" these elements and that "at best in the cited portion of Xu, there is mention of applying adversarial attack methods for defense evaluation" without describing a systematic optimizer that modifies defenses based on evaluation metrics.

In response, it is noted that Section 8 of Xu explicitly recites "we evaluate LanCeX in terms of effectiveness and efficiency against condensed adversarial attacks in three computation scenarios: image classification, object detection, and audio recognition" and describes systematic application of multiple attack methods (adversarial patch attacks, DPatch, FGSM, BIM, CW, Genetic attacks) against multiple defense methods (LanCeX variants, PM, NIC, PatchGuard, Information-based Defense, Dependency Detection, Noise Flooding) across various trained models (Inception-V3, VGG-13, ResNet-18, YOLO, AlexNet). This comprehensive experimental framework constitutes a "configuration for applying a set of attacks and a set of defenses to a machine learning model."
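As background on one of the attack methods recited above, the Fast Gradient Sign Method (FGSM) perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. The following is a minimal, self-contained sketch on a toy logistic-regression model (illustrative only; the model, weights, and values are hypothetical and not drawn from any cited reference):

```python
import math

# Toy logistic-regression "model": p = sigmoid(W.x + B); weights are arbitrary
W = [2.0, -3.0]
B = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def loss_grad_wrt_input(x, y):
    # For cross-entropy loss on a sigmoid model, dL/dx = (p - y) * w
    p = predict(x)
    return [(p - y) * wi for wi in W]

def fgsm(x, y, eps):
    # FGSM: move each feature by eps in the sign direction that increases the loss
    g = loss_grad_wrt_input(x, y)
    return [xi + eps * (1 if gi > 0 else -1 if gi < 0 else 0)
            for xi, gi in zip(x, g)]

x, y = [1.0, 1.0], 1.0
x_adv = fgsm(x, y, eps=0.25)
print(x_adv)                       # [0.75, 1.25]
print(predict(x), predict(x_adv))  # confidence in the true class drops
```

Attacks such as BIM are iterated variants of this single-step update; CW and Genetic attacks use different objectives and search strategies.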
Furthermore, Xu describes threshold optimization based on evaluation metrics, stating "When Tic equals 0.48, the corresponding Rd can achieve optimal performance" and "we can determine the thresholds for a targeted attack and untargeted attack as 0.41 and 8.1, respectively." This systematic parameter adjustment based on performance metrics (detection success rate, mAP, time cost) teaches "using the optimizer...to modify the selected defenses based on the evaluation metrics." The fact that Xu optimizes parameters within its defense method rather than selecting between entirely different defense architectures does not distinguish the claimed subject matter, as the claims broadly encompass any optimizer that modifies defenses based on evaluation metrics. Applicant's argument that Xu only describes "one specific defense method being tested against various attacks" rather than "a general system/framework for configuring, evaluating, selecting, and optimizing multiple different defense methods" is not patentably distinctive, as the claims do not require a "general framework" but merely recite functional limitations that are satisfied by Xu's methodology. Therefore, the identified claim language is considered to be taught by the Xu reference, and the rejection is maintained. Further, since Applicant has not presented additional arguments concerning the dependent claims, their rejections are likewise maintained.

DETAILED ACTION

This is a reply to the arguments filed on 01/02/2026. Claims 1, 5-8, 10-11, 15-17, and 19-26 are pending. Claims 1, 11, and 20 are independent. Claims 2-4, 9, 12-14, and 18 are canceled. When making claim amendments, Applicant is encouraged to consider the references in their entireties, including those portions that have not been cited by the examiner and their equivalents, as they may most broadly and appropriately apply to any particular anticipated claim amendments.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5-8, 10-11, 15-17, 19-20, and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. ("LanCeX: A Versatile and Lightweight Defense Method against Condensed Adversarial Attacks in Image and Audio Recognition", referred to as Xu), in view of Chen et al. (US 20220092407 A1, referred to as Chen), in further view of Liang et al.
("Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction", referred to as Liang).

In reference to claim 1: A system comprising: at least one processing device including a processor coupled to a memory; the at least one processing device being configured to implement the following steps:

Constructing a configuration for applying a set of attacks and a set of defenses to a machine learning model, the configuration specifying the set of attacks, the set of defenses and an optimizer (Xu: Section 8 provides for constructing systematic configurations for applying multiple attack methods, including adversarial patch attacks, DPatch, FGSM, BIM, CW, and Genetic attacks, against multiple defense methods, including LanCeX variants, PM, NIC, PatchGuard, Information-based Defense, Dependency Detection, and Noise Flooding, across various trained models. Section 5.1 provides for threshold optimization processes that function as optimizers to determine optimal defense parameters);

Defining a set of evaluation metrics, each evaluation metric configured to test responses by the model when applying a given defense among a set of defenses against the set of adversarial inputs generated for the model (Xu: Section 3.3 provides for inference inconsistency metrics including "Input Semantic Inconsistency Metric" and "Prediction Activation Inconsistency Metric", which provide for testing model responses when defense methods are applied against adversarial inputs);

Selecting one or more defenses from the set of defenses by minimizing the evaluation metrics for each given defense (Xu: Section 4 provides for the defense methodology selection process, and Sections 5-7 provide for specific defense techniques being selected based on minimizing inconsistency metrics. Figure 15 provides for threshold selection optimization to maximize detection performance by minimizing inconsistencies);

Using the optimizer that applied the given defenses to the model to modify the selected defenses based on the evaluation metrics (Xu: Section 5.1 provides for how the threshold value Tic is determined based on the evaluation metrics, stating "the value range of threshold Tic for each class is between Dground_avg(i) and Dadv_avg(i)", providing for how metrics are used to modify defense parameters);

Incorporating the modified selected defenses into the model to obtain a secured model (Xu: Sections 5-7 provide for incorporating defense methodologies including detection and data recovery techniques for various recognition scenarios, though Xu does not explicitly provide for obtaining a secured model); and

Xu does not explicitly teach deploying the secured model to a distributed computing environment. However, Chen discloses:

Deploying the secured model to a distributed computing environment (Chen: [0002], [0015], [0044]-[0047] and [0051] provide for generating an optimized model by incorporating optimized parameters and deploying models in distributed computing environments).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu, which provides for evaluating, selecting, and modifying defense mechanisms against adversarial attacks on machine learning models, with the teachings of Chen, which provides for generating and deploying an optimized model in distributed computing environments. One of ordinary skill in the art would recognize the ability to incorporate Chen's model generation and deployment process into Xu's defense selection and modification methodology to create a secured model for practical implementation. One of ordinary skill in the art would be motivated to make this modification in order to produce deployable, secured machine learning models with integrated defense mechanisms that provide effective protection against adversarial attacks in real-world distributed computing environments.

While Xu in view of Chen does not explicitly disclose "F-measures," it uses similar performance metrics, such as detection success rate and mean Average Precision (mAP), for evaluation across multiple defense methods. However, Liang discloses:

Wherein the evaluation metrics include F-measures (Liang: Section 3.2 provides for evaluation metrics including F1 score (an F-measure) to evaluate the defense method against adversarial examples).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu in view of Chen, which together provide a system for evaluating, selecting, and deploying secured machine learning models with defense mechanisms against adversarial attacks, with the teachings of Liang, which introduces F-measures as specific evaluation metrics for assessing defense effectiveness. One of ordinary skill in the art would recognize the ability to incorporate Liang's F-measure evaluation methodology into the combined defense evaluation system to provide standardized performance assessment. One of ordinary skill in the art would be motivated to make this modification in order to enable more comprehensive evaluation of defense mechanisms using widely recognized performance metrics.

In reference to claim 5: The system of claim 1, wherein the model is a tuned model trained using transfer learning (Chen: [0002], [0015], [0044]-[0047] and [0051] provide for transfer learning and describe a process for tuning models using transfer learning).
In reference to claim 6: The system of claim 1, wherein the secured model is a tuned model trained using transfer learning (Chen: [0002], [0015], [0044]-[0047] and [0051] provide for creating a new model using transfer learning).

In reference to claim 7: The system of claim 6, wherein the secured model is tuned using transfer learning based on the model (Chen: [0002], [0015], [0044]-[0047] and [0051] provide for tuning a new model using transfer learning based on a pre-existing model).

In reference to claim 8: The system of claim 1, wherein the model or the secured model is a deep neural network (DNN) (Xu: Section 8 provides for uses of deep neural networks (DNNs) such as Inception-V3, VGG-13, and ResNet-18).

In reference to claim 10: The system of claim 1, wherein the processor is further configured to implement: visualizing at least one of: the adversarial inputs, one or more predictions generated by the model or by the secured model before applying each adversarial input, one or more predictions generated by the model or by the secured model after applying each adversarial input, and an accuracy of the model or the secured model (Xu: Section 3.2, Figure 5 provides for various visualizations of adversarial inputs, attention maps, and results).

In reference to claim 11: A method comprising:

Constructing a configuration for applying a set of attacks and a set of defenses to a machine learning model, the configuration specifying the set of attacks, the set of defenses and an optimizer (Xu: Section 8 provides for constructing systematic configurations for applying multiple attack methods, including adversarial patch attacks, DPatch, FGSM, BIM, CW, and Genetic attacks, against multiple defense methods, including LanCeX variants, PM, NIC, PatchGuard, Information-based Defense, Dependency Detection, and Noise Flooding, across various trained models. Section 5.1 provides for threshold optimization processes that function as optimizers to determine optimal defense parameters);

Defining a set of evaluation metrics, each evaluation metric configured to test responses by the model when applying a given defense among a set of defenses against the set of adversarial inputs generated for the model (Xu: Section 3.3 provides for inference inconsistency metrics including "Input Semantic Inconsistency Metric" and "Prediction Activation Inconsistency Metric", which provide for testing model responses when defense methods are applied against adversarial inputs);

Selecting one or more defenses from the set of defenses by minimizing the evaluation metrics for each given defense (Xu: Section 4 provides for the defense methodology selection process, and Sections 5-7 provide for specific defense techniques being selected based on minimizing inconsistency metrics. Figure 15 provides for threshold selection optimization to maximize detection performance by minimizing inconsistencies);

Using the optimizer that applied the given defenses to the model to modify the selected defenses based on the evaluation metrics (Xu: Section 5.1 provides for how the threshold value Tic is determined based on the evaluation metrics, stating "the value range of threshold Tic for each class is between Dground_avg(i) and Dadv_avg(i)", providing for how metrics are used to modify defense parameters);

Incorporating the modified selected defenses into the model to obtain a secured model (Xu: Sections 5-7 provide for incorporating defense methodologies including detection and data recovery techniques for various recognition scenarios, though Xu does not explicitly provide for obtaining a secured model); and

Xu does not explicitly teach deploying the secured model to a distributed computing environment. However, Chen discloses:

Deploying the secured model to a distributed computing environment (Chen: [0002], [0015], [0044]-[0047] and [0051] provide for generating an optimized model by incorporating optimized parameters and deploying models in distributed computing environments).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu, which provides for evaluating, selecting, and modifying defense mechanisms against adversarial attacks on machine learning models, with the teachings of Chen, which provides for generating and deploying an optimized model in distributed computing environments. One of ordinary skill in the art would recognize the ability to incorporate Chen's model generation and deployment process into Xu's defense selection and modification methodology to create a secured model for practical implementation. One of ordinary skill in the art would be motivated to make this modification in order to produce deployable, secured machine learning models with integrated defense mechanisms that provide effective protection against adversarial attacks in real-world distributed computing environments.

While Xu in view of Chen does not explicitly disclose "F-measures," it uses similar performance metrics, such as detection success rate and mean Average Precision (mAP), for evaluation across multiple defense methods. However, Liang discloses:

Wherein the evaluation metrics include F-measures (Liang: Section 3.2 provides for evaluation metrics including F1 score (an F-measure) to evaluate the defense method against adversarial examples).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu in view of Chen, which together provide a system for evaluating, selecting, and deploying secured machine learning models with defense mechanisms against adversarial attacks, with the teachings of Liang, which introduces F-measures as specific evaluation metrics for assessing defense effectiveness. One of ordinary skill in the art would recognize the ability to incorporate Liang's F-measure evaluation methodology into the combined defense evaluation system to provide standardized performance assessment. One of ordinary skill in the art would be motivated to make this modification in order to enable more comprehensive evaluation of defense mechanisms using widely recognized performance metrics.

In reference to claim 15: The method of claim 11, wherein the model is a tuned model trained using transfer learning (Chen: [0002], [0015], [0044]-[0047] and [0051] provide for transfer learning and describe a process for tuning models using transfer learning).

In reference to claim 16: The method of claim 11, wherein the secured model is a tuned model trained using transfer learning (Chen: [0002], [0015], [0044]-[0047] and [0051] provide for creating a new model using transfer learning).

In reference to claim 17: The method of claim 16, wherein the secured model is tuned using transfer learning based on the model (Chen: [0002], [0015], [0044]-[0047] and [0051] provide for tuning a new model using transfer learning based on a pre-existing model).
In reference to claim 19: The method of claim 11, further comprising: visualizing at least one of: the adversarial inputs, one or more predictions generated by the model or by the secured model before applying each adversarial input, one or more predictions generated by the model or by the secured model after applying each adversarial input, and an accuracy of the model or the secured model (Xu: Section 3.2, Figure 5 provides for various visualizations of adversarial inputs, attention maps, and results).

In reference to claim 20: A non-transitory processor-readable storage medium having stored thereon program code of one or more software programs, wherein the program code when executed by at least one processing device causes the at least one processing device to perform the following steps:

Constructing a configuration for applying a set of attacks and a set of defenses to a machine learning model, the configuration specifying the set of attacks, the set of defenses and an optimizer (Xu: Section 8 provides for constructing systematic configurations for applying multiple attack methods, including adversarial patch attacks, DPatch, FGSM, BIM, CW, and Genetic attacks, against multiple defense methods, including LanCeX variants, PM, NIC, PatchGuard, Information-based Defense, Dependency Detection, and Noise Flooding, across various trained models. Section 5.1 provides for threshold optimization processes that function as optimizers to determine optimal defense parameters);

Defining a set of evaluation metrics, each evaluation metric configured to test responses by the model when applying a given defense among a set of defenses against the set of adversarial inputs generated for the model (Xu: Section 3.3 provides for inference inconsistency metrics including "Input Semantic Inconsistency Metric" and "Prediction Activation Inconsistency Metric", which provide for testing model responses when defense methods are applied against adversarial inputs);

Selecting one or more defenses from the set of defenses by minimizing the evaluation metrics for each given defense (Xu: Section 4 provides for the defense methodology selection process, and Sections 5-7 provide for specific defense techniques being selected based on minimizing inconsistency metrics. Figure 15 provides for threshold selection optimization to maximize detection performance by minimizing inconsistencies);

Using the optimizer that applied the given defenses to the model to modify the selected defenses based on the evaluation metrics (Xu: Section 5.1 provides for how the threshold value Tic is determined based on the evaluation metrics, stating "the value range of threshold Tic for each class is between Dground_avg(i) and Dadv_avg(i)", providing for how metrics are used to modify defense parameters);

Incorporating the modified selected defenses into the model to obtain a secured model (Xu: Sections 5-7 provide for incorporating defense methodologies including detection and data recovery techniques for various recognition scenarios, though Xu does not explicitly provide for obtaining a secured model); and

Xu does not explicitly teach deploying the secured model to a distributed computing environment. However, Chen discloses:

Deploying the secured model to a distributed computing environment (Chen: [0002], [0015], [0044]-[0047] and [0051] provide for generating an optimized model by incorporating optimized parameters and deploying models in distributed computing environments).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu, which provides for evaluating, selecting, and modifying defense mechanisms against adversarial attacks on machine learning models, with the teachings of Chen, which provides for generating and deploying an optimized model in distributed computing environments. One of ordinary skill in the art would recognize the ability to incorporate Chen's model generation and deployment process into Xu's defense selection and modification methodology to create a secured model for practical implementation. One of ordinary skill in the art would be motivated to make this modification in order to produce deployable, secured machine learning models with integrated defense mechanisms that provide effective protection against adversarial attacks in real-world distributed computing environments.

While Xu in view of Chen does not explicitly disclose "F-measures," it uses similar performance metrics, such as detection success rate and mean Average Precision (mAP), for evaluation across multiple defense methods. However, Liang discloses:

Wherein the evaluation metrics include F-measures (Liang: Section 3.2 provides for evaluation metrics including F1 score (an F-measure) to evaluate the defense method against adversarial examples).
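The F1 score Liang contributes to the combination is the standard F-measure: the harmonic mean of precision and recall. For reference, a minimal computation (the detector counts below are hypothetical, chosen only to illustrate the formula):

```python
def f_measure(tp, fp, fn, beta=1.0):
    # F-beta: weighted harmonic mean of precision and recall (beta=1 gives F1)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g. a defense that flags 80 of 100 adversarial inputs and raises 20 false alarms
print(f_measure(tp=80, fp=20, fn=20))  # ≈ 0.8
```

Unlike detection success rate alone, the F-measure penalizes false alarms and missed detections simultaneously, which is why it is a common summary metric for adversarial-example detectors.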
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu in view of Chen, which together provide a system for evaluating, selecting, and deploying secured machine learning models with defense mechanisms against adversarial attacks, with the teachings of Liang, which introduces F-measures as specific evaluation metrics for assessing defense effectiveness. One of ordinary skill in the art would recognize the ability to incorporate Liang's F-measure evaluation methodology into the combined defense evaluation system to provide standardized performance assessment. One of ordinary skill in the art would be motivated to make this modification in order to enable more comprehensive evaluation of defense mechanisms using widely recognized performance metrics.

In reference to claim 24: The method of claim 18, wherein the model is configured for image segmentation (Chen: [0018] and [0053] provide for processing images and extracting meaningful information, showing the method can be applied to various image-based machine learning tasks).

In reference to claim 25: The non-transitory processor-readable storage medium of claim 20, wherein the set of attacks includes Fast Gradient Sign Method (FGSM) attacks (Xu: Section 8.3 provides for evaluating the defense method against Fast Gradient Sign Method (FGSM) attacks).

Claim Rejections - 35 USC § 103

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. ("LanCeX: A Versatile and Lightweight Defense Method against Condensed Adversarial Attacks in Image and Audio Recognition", referred to as Xu), in view of Chen et al. (US 20220092407 A1, referred to as Chen), in further view of Liang et al. ("Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction", referred to as Liang), in further view of Alzantot et al. ("GenAttack: Practical Black-box Attacks with Gradient-Free Optimization", referred to as Alzantot).

In reference to claim 21: The system of claim 1, wherein the optimizer is a genetic algorithm (Alzantot: Page 4, Section 4, GenAttack Algorithm provides for using genetic algorithms as an optimization method).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu in view of Chen and Liang, which together provide a system for evaluating, selecting, and deploying secured machine learning models with standardized F-measure assessment of defense mechanisms, with the teachings of Alzantot, which introduces genetic algorithms as an optimization method. One of ordinary skill in the art would recognize the ability to incorporate Alzantot's genetic algorithm approach into the combined defense optimization system to enhance the efficiency of defense parameter selection. One of ordinary skill in the art would be motivated to make this modification in order to improve the optimization process for defense mechanisms by leveraging genetic algorithms' ability to explore large parameter spaces efficiently.

Claim Rejections - 35 USC § 103

Claims 22-23 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Xu et al. ("LanCeX: A Versatile and Lightweight Defense Method against Condensed Adversarial Attacks in Image and Audio Recognition", referred to as Xu), in view of Chen et al. (US 20220092407 A1, referred to as Chen), in further view of Liang et al. ("Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction", referred to as Liang), in further view of Shah et al. (US 20240386095 A1, referred to as Shah).

In reference to claim 22: The method of claim 18, wherein the optimizer is Bayesian optimization (Shah: [0015]-[0021] provides for "Bayesian Optimization" as the optimizer used in its method).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu in view of Chen and Liang, which together provide a system for evaluating, selecting, and deploying secured machine learning models with standardized F-measure assessment of defense mechanisms, with the teachings of Shah, which introduces Bayesian optimization as an optimizer method. One of ordinary skill in the art would recognize the ability to incorporate Shah's Bayesian optimization approach into the combined defense optimization system to enhance the efficiency and effectiveness of defense parameter selection. One of ordinary skill in the art would be motivated to make this modification in order to improve the optimization process by leveraging Bayesian optimization's ability to efficiently explore parameter spaces with fewer evaluations.

In reference to claim 23: The method of claim 18, wherein the optimizer is selected to minimize the evaluation metric (Shah: [0015]-[0021] provides for defining an "optimizing function" with "boundary conditions for the maxima or minima" of parameters, indicating the optimization is aimed at minimizing or maximizing some metric).
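Claims 22 and 23 concern an optimizer selected to minimize an evaluation metric. In the simplest case, this reduces to sweeping a defense parameter (such as a detection threshold) and keeping the value that minimizes the metric; a Bayesian optimizer would replace the exhaustive sweep with a surrogate-model-guided search. A generic sketch (all scores below are hypothetical, not data from any cited reference):

```python
# Hypothetical inconsistency scores observed for benign vs. adversarial inputs
benign = [0.10, 0.20, 0.30, 0.35, 0.40]
adversarial = [0.45, 0.55, 0.60, 0.70, 0.90]

def error_rate(threshold):
    # Evaluation metric: false alarms (benign at/above threshold)
    # plus misses (adversarial below threshold), as a fraction of all inputs
    false_alarms = sum(1 for s in benign if s >= threshold)
    misses = sum(1 for s in adversarial if s < threshold)
    return (false_alarms + misses) / (len(benign) + len(adversarial))

# "Optimizer": sweep candidate thresholds, keep the one minimizing the metric
candidates = [i / 100 for i in range(0, 101)]
best = min(candidates, key=error_rate)
print(best, error_rate(best))  # 0.41 0.0
```

With these toy distributions any threshold separating the two score ranges achieves zero error; real score distributions overlap, and the chosen threshold then trades false alarms against misses.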
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu in view of Chen and Liang, which together provide a system for evaluating, selecting, and deploying secured machine learning models with standardized F-measure assessment of defense mechanisms, with the teachings of Shah, which introduces optimization functions designed to minimize or maximize evaluation metrics. One of ordinary skill in the art would recognize the ability to incorporate Shah's minimization-focused optimization approach into the combined defense selection system to ensure that defense mechanisms are optimized for best performance. One of ordinary skill in the art would be motivated to make this modification in order to create a systematic approach to defense optimization. In reference to claim 26, The non-transitory processor-readable storage medium of claim 20, wherein the set of attacks includes mimicking attacks (Shah: [0013]-[0015] Provides for "mimicking attacks" in substantial detail. It explicitly explains how attackers query a model to generate input-output pairs, which they then use to train a new model that mimics or replicates the original model's behavior.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Xu in view of Chen and Liang, which together provide a system for evaluating, selecting, and deploying secured machine learning models with standardized F-measure assessment of defense mechanisms, with the teachings of Shah, which introduces mimicking attacks as a specific type of adversarial attack. One of ordinary skill in the art would recognize the ability to incorporate Shah's mimicking attack scenarios into the combined defense evaluation system to ensure comprehensive security testing. 
One of ordinary skill in the art would be motivated to make this modification in order to protect against model extraction attacks, where adversaries attempt to replicate proprietary models through query-based learning.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See PTO-892. Applicant's amendment necessitated the new ground(s) of rejection presented in this office action. Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AIDAN EDWARD SHAUGHNESSY, whose telephone number is (703) 756-1423. The examiner can normally be reached Monday-Friday from 7:30am to 5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey Nickerson, can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR for authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/usptoautomated-interview-request-air-form.

/A.E.S./
Examiner, Art Unit 2432

/Jeffrey Nickerson/
Supervisory Patent Examiner, Art Unit 2432
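For readers parsing the claim language at issue in claims 22 and 23, the disputed limitation is a loop in which an optimizer applies defenses to a model, evaluates a metric, and modifies the selected defenses to minimize that metric. A minimal sketch of that loop follows. Everything here is invented for illustration: the defense names, the toy scoring numbers, and the exhaustive search (a simplified stand-in for the claimed Bayesian optimization) do not come from the application or from the Xu, Chen, Liang, or Shah references.

```python
from itertools import combinations

# Hypothetical defense set; names are invented for illustration only.
DEFENSES = ["noise_flooding", "input_denoising", "patch_masking"]

def evaluate(defense_set):
    """Toy evaluation metric: a pretend attack success rate (lower is better).

    The baseline rate and per-defense reductions below are made-up numbers
    standing in for running a real attack suite against a defended model.
    """
    reductions = {"noise_flooding": 0.2, "input_denoising": 0.3, "patch_masking": 0.15}
    rate = 0.9  # invented baseline attack success rate with no defenses applied
    for d in defense_set:
        rate -= reductions[d]
    return round(rate, 2)

def optimize():
    """Exhaustively search defense subsets, keeping the one that minimizes
    the evaluation metric. A real system would substitute a Bayesian
    optimizer here to avoid evaluating every combination."""
    best, best_score = (), evaluate(())
    for r in range(1, len(DEFENSES) + 1):
        for subset in combinations(DEFENSES, r):
            score = evaluate(subset)
            if score < best_score:
                best, best_score = subset, score
    return best, best_score
```

With the toy numbers above, the optimizer selects all three defenses, since each one lowers the metric. The structural point is the feedback loop itself: the metric produced by evaluating attacks against applied defenses is what drives the next modification of the defense selection.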

Prosecution Timeline

Dec 16, 2022
Application Filed
Oct 01, 2024
Non-Final Rejection — §103
Jan 15, 2025
Response Filed
Mar 21, 2025
Final Rejection — §103
Jul 02, 2025
Request for Continued Examination
Jul 08, 2025
Response after Non-Final Action
Sep 30, 2025
Non-Final Rejection — §103
Jan 02, 2026
Response Filed
Jan 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574412
METHOD AND SYSTEM FOR PROCESSING AUTHENTICATION REQUESTS
2y 5m to grant Granted Mar 10, 2026
Patent 12339956
ENDPOINT ISOLATION AND INCIDENT RESPONSE FROM A SECURE ENCLAVE
2y 5m to grant Granted Jun 24, 2025
Patent 12225029
AUTOMATIC IDENTIFICATION OF ALGORITHMICALLY GENERATED DOMAIN FAMILIES
2y 5m to grant Granted Feb 11, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
38%
Grant Probability
99%
With Interview (+71.4%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 8 resolved cases by this examiner. Grant probability derived from career allow rate.
