Prosecution Insights
Last updated: April 19, 2026
Application No. 18/624,684

TRANSFER LEARNING AND DEFENDING MODELS AGAINST ADVERSARIAL ATTACKS

Final Rejection — §103
Filed: Apr 02, 2024
Examiner: GILLESPIE, KAMRYN JORDAN
Art Unit: 2408
Tech Center: 2400 — Computer Networks
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (above average; 16 granted / 22 resolved; +14.7% vs TC avg)
Interview Lift: +50.0% on resolved cases with interview
Avg Prosecution: 2y 8m (17 currently pending)
Total Applications: 39 across all art units
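As a purely illustrative sanity check (not part of the dashboard), the headline allow rate follows directly from the counts stated on the card:

```python
# Career allow rate from the counts shown on the card: 16 granted of 22 resolved.
granted, resolved = 16, 22
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")  # → 72.7%, displayed rounded as 73%
```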

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 26.4% (-13.6% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)
Based on career data from 22 resolved cases; deltas are vs the Tech Center average estimate.

Office Action

§103
Detailed Action

This communication is in response to applicant's claims filed on 12/29/2025. Claims 1-10 and 11-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 04/02/2024 appears to be in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant’s arguments filed 12/29/2025 have been fully considered, but are considered moot in view of the following new ground of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 9-10 and 11-15, 19-20 are rejected under 35 U.S.C.
103 as being unpatentable over NIE (US 20240104698 A1), hereafter NIE, in view of BEN ARIE (US 20240256984 A1), hereafter BEN. Regarding claim 1, NIE teaches: A method (NIE [0070] “In at least one embodiment, a two-step adversarial purification method can be utilized with one or more using diffusion models.”) for transfer learning(NIE [0576] “In at least one embodiment, customer dataset 4006 (e.g., imaging data, genomics data, sequencing data, or other data types generated by devices at a facility) may be used to perform model training 3614 (which may include, without limitation, transfer learning) on initial model 4004 to generate refined model 4012.”), comprising: obtaining a dataset to train, by a teacher model, a pool of student models (NIE [0515] “In at least one embodiment, a training pipeline 3704 (FIG. 37) may include a scenario where facility 3602 is training their own machine learning model, or has an existing machine learning model that needs to be optimized or updated. In at least one embodiment, imaging data 3608 generated by imaging device(s), sequencing devices, and/or other device types may be received. In at least one embodiment, once imaging data 3608 is received, AI-assisted annotation 3610 may be used to aid in generating annotations corresponding to imaging data 3608 to be used as ground truth data for a machine learning model. In at least one embodiment, AI-assisted annotation 3610 may include one or more machine learning models (e.g., convolutional neural networks (CNNs)) that may be trained to generate annotations corresponding to certain types of imaging data 3608 (e.g., from certain devices) and/or certain types of anomalies in imaging data 3608…In at least one embodiment, AI-assisted annotations 3610, labeled clinic data 3612, or a combination thereof may be used as ground truth data for training a machine learning model. 
In at least one embodiment, a trained machine learning model may be referred to as an output model 3616, and may be used by deployment system 3606, as described herein.”); determining an initial accuracy for each of the pool of student models using the dataset(NIE [0579] “In at least one embodiment, customer dataset 4006 may be applied to initial model 4004 any number of times, and ground truth data may be used to update parameters of initial model 4004 until an acceptable level of accuracy is attained for refined model 4012.”); configuring each of the pool of student models with a defense to an attack (NIE [0059] “In at least one embodiment, this diffusion model can add small (or at least determined) amounts of noise, such as Gaussian noise, to adversarial image 202 over a number of iterations. In at least one embodiment, this can cause adversarial image 202 (or versions of that image) to have increasing amounts of noise (or more diffuse pixel data) present over this iterative noise addition sequence 208. 
In at least one embodiment, this can result in an intermediate image, or diffused image 204, that has an amount of noise that is sufficient to eliminate, or at least significantly reduce, a presence of perturbations in this image.”, [0067] “In at least one embodiment, diffusion models can be trained with a weighted combination of denoising score matching (DSM) across multiple time steps, as may be given by:”); determining an attack accuracy for each of the pool of student models using an attack dataset (NIE [0097] “In at least one embodiment, training framework 804 includes tools to monitor how well untrained neural network 806 is converging towards a model, such as trained neural network 808, suitable to generating correct answers, such as in result 814, based on input data such as a new dataset 812…In at least one embodiment, training framework 804 trains untrained neural network 806 until untrained neural network 806 achieves a desired accuracy.”); selecting a subset of the pool of student models to form a model based on the attack accuracies(NIE [0097] “In at least one embodiment, training framework 804 trains untrained neural network 806 until untrained neural network 806 achieves a desired accuracy. 
In at least one embodiment, trained neural network 808 can then be deployed to implement any number of machine learning operations.”, [0579] “In at least one embodiment, customer dataset 4006 may be applied to initial model 4004 any number of times, and ground truth data may be used to update parameters of initial model 4004 until an acceptable level of accuracy is attained for refined model 4012.”, [0523] “In at least one embodiment, a requesting entity (e.g., a user at a medical facility)—who provides an inference or image processing request—may browse a container registry and/or model registry 3624 for an application, container, dataset, machine learning model, etc., select a desired combination of elements for inclusion in data processing pipeline, and submit an imaging processing request. In at least one embodiment, a request may include… a selection of application(s) and/or machine learning models to be executed in processing a request.”); and performing an optimization loop when the aggregate accuracy is outside the threshold (NIE [0097] “In at least one embodiment, training framework 804 trains untrained neural network 806 until untrained neural network 806 achieves a desired accuracy.”, [0575] “In at least one embodiment, pre-trained model 3706 may not be optimized for generating accurate results on customer dataset 4006 of a facility of a user (e.g., based on patient diversity, demographics, types of medical imaging devices used, etc.). In at least one embodiment, prior to deploying pre-trained model 3706 into deployment pipeline 3710 for use with an application(s), pre-trained model 3706 may be updated, retrained, and/or fine-tuned for use at a respective facility.”). 
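As an editorial aside (not part of the Office Action record), the claim 1 flow being mapped above — score a pool of defended student models on clean and attacked data, keep the best-performing subset as an ensemble, and loop when the aggregate accuracy falls outside a threshold of the initial accuracy — might be sketched roughly as follows; all names and the toy models are hypothetical:

```python
def accuracy(model, data):
    """Fraction of (x, y) samples the model labels correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

def select_ensemble(students, clean_data, attack_data, threshold=0.05, top_k=3):
    """Rank defended student models by attack accuracy, keep the top-k as the
    ensemble, and flag for another optimization pass when the ensemble's
    aggregate attack accuracy is not within `threshold` of its clean accuracy."""
    initial = [accuracy(m, clean_data) for m in students]
    robust = [accuracy(m, attack_data) for m in students]
    ranked = sorted(range(len(students)), key=lambda i: robust[i], reverse=True)
    subset = ranked[:top_k]
    aggregate = sum(robust[i] for i in subset) / len(subset)
    baseline = sum(initial[i] for i in subset) / len(subset)
    deploy = aggregate >= baseline - threshold  # False → run the optimization loop
    return subset, aggregate, deploy
```

Here each "student" is any callable classifier; in the claim, each would carry its own randomized defense (e.g., pixel drop-out).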
Further regarding claim 1, NIE teaches the limitations previously demonstrated; however, NIE fails to explicitly teach the following limitations, which are taught by BEN: selecting a subset of the pool of student models to form an ensemble model (BEN “[0058] In one embodiment, the set of component models for the abridged model are selected from the component models for the ensemble model. The selection may use a component cardinality of the set of component models, a deviation threshold, an accuracy threshold, and the plurality of training scores generated for the component models of the ensemble model.” The abridged model serves as a substitution for the ensemble model, as it is merely an abridged ensemble model. Further, BEN discloses that it may “use the abridged model in lieu of the original model when the abridged model maintains sufficient accuracy,” as the “abridged model, being a subset of the ensemble model, uses fewer compute resources and may generate outputs more quickly than the ensemble model.”); determining an aggregate accuracy of the ensemble model based on attack accuracies of the subset of student models using the attack dataset (BEN [0038] “The accuracy threshold (165) is a threshold value that identifies an amount of accuracy the abridged model (121) is to achieve before being used in lieu of the ensemble model (133).”); and deploying the ensemble model when the aggregate accuracy is at least within a threshold of the initial accuracy (BEN [0060] “The abridged model that satisfies the accuracy threshold with the lowest “k” and the highest “s” is selected as the abridged model for the ensemble model.”). Since both NIE and BEN are directed to automated classification and detection of adversarial activity, they are from the same field of endeavor as the claimed invention, and it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify and combine the teachings of NIE
and BEN by incorporating the teachings of BEN into NIE for automating classification and detection of adversarial information as claimed. The motivation to combine is to improve detection and classification of adversarial activity (NIE [AB]; BEN [AB]). This motivation for combination of references is equally applicable to rejections hereafter. Regarding claim 2, NIE-BEN teaches: The method of claim 1, wherein the defense is configured to disrupt the attack (NIE [0059] “In at least one embodiment, an input image can be provided that can be an adversarial image 202, or input image to which an attacker (or other application or entity) has introduced one or more perturbations. In at least one embodiment, this diffusion model can add small (or at least determined) amounts of noise, such as Gaussian noise, to adversarial image 202 over a number of iterations. In at least one embodiment, this can cause adversarial image 202 (or versions of that image) to have increasing amounts of noise (or more diffuse pixel data) present over this iterative noise addition sequence 208. In at least one embodiment, this can result in an intermediate image, or diffused image 204, that has an amount of noise that is sufficient to eliminate, or at least significantly reduce, a presence of perturbations in this image.”). Regarding claim 3, NIE-BEN teaches: The method of claim 2, wherein the attack includes noise added to the dataset, wherein the defense is configured to prevent the attack from succeeding by altering the noise(NIE [0059] “In at least one embodiment, an input image can be provided that can be an adversarial image 202, or input image to which an attacker (or other application or entity) has introduced one or more perturbations. In at least one embodiment, this diffusion model can add small (or at least determined) amounts of noise, such as Gaussian noise, to adversarial image 202 over a number of iterations. 
In at least one embodiment, this can cause adversarial image 202 (or versions of that image) to have increasing amounts of noise (or more diffuse pixel data) present over this iterative noise addition sequence 208. In at least one embodiment, this can result in an intermediate image, or diffused image 204, that has an amount of noise that is sufficient to eliminate, or at least significantly reduce, a presence of perturbations in this image.”, [0062] “In at least one embodiment, new data points (e.g., pixel values) can be generated for each iteration, which can effectively remove perturbations or purify noise in samples after a sufficient number of iterations.”).

Regarding claim 4, NIE-BEN teaches: The method of claim 1, wherein the dataset includes images, wherein the defense is configured to alter pixels in the images (NIE [0059] “this diffusion model can add small (or at least determined) amounts of noise, such as Gaussian noise, to adversarial image 202 over a number of iterations. In at least one embodiment, this can cause adversarial image 202 (or versions of that image) to have increasing amounts of noise (or more diffuse pixel data) present over this iterative noise addition sequence 208. In at least one embodiment, this can result in an intermediate image, or diffused image 204”).

Regarding claim 5, NIE-BEN teaches: The method of claim 4, wherein the defense is configured to drop out different pixels in each of an image's channels, is configured to drop a same pixels in the image's channels, or drop pixels in an image's border (NIE [0059] “this diffusion model can add small (or at least determined) amounts of noise, such as Gaussian noise, to adversarial image 202 over a number of iterations. In at least one embodiment, this can cause adversarial image 202 (or versions of that image) to have increasing amounts of noise (or more diffuse pixel data) present over this iterative noise addition sequence 208...
In at least one embodiment, reconstruction can include removing an amount of noise over each of a sequence of image generation iterations, or noise removal iterations 210.”).

Regarding claim 9, NIE-BEN teaches: The method of claim 1, further comprising randomly configuring the defense applied to each of the models, wherein the configuration of the defense includes a percentage of pixels, a type of drop out, and a channel selection (NIE [0080] “noise (e.g., Gaussian noise) can be added 406 to this image. In at least one embodiment, this can include modifying pixel values for random pixel locations according to a determined Gaussian distribution.”).

Regarding claim 10, NIE-BEN teaches: The method of claim 1, wherein the attack is an adversarial attack, wherein each of the models is configured with a different defense (NIE [0060] “In at least one embodiment, this diffusion process can take advantage of a stochastic differential equation (SDE)-based approach. In at least one embodiment, a different diffusion process can be utilized that can add noise or otherwise modify pixel data in an image in a diffuse manner.”) and wherein the ensemble model is configured to defend against one or more attacks (NIE [0083] “In at least one embodiment, such functionality can be performed by a purifier 630 to remove unauthorized modifications, such as adversarial perturbations, from one or more images,”).

Regarding claims 11-15 & 19-20, claims 11-15 & 19-20 recite substantially similar limitations as claims 1-5 & 9-10, but for recitation in the form of a non-transitory storage medium. NIE-BEN further teaches: A non-transitory storage medium (BEN [0076] “Software instructions in the form of computer readable program code to perform embodiments may be stored, in whole or in part, temporarily or permanently, on a computer program product that includes a non-transitory computer readable medium such as a CD”).

Claims 6-8 and 16-18 are rejected under 35 U.S.C.
103 as being unpatentable over NIE-BEN, and further in view of JAN (US 20240232335 A1), hereafter JAN.

Regarding claim 6, NIE-BEN in view of JAN teaches: The method of claim 5, further comprising initializing the ensemble model with a target dataset, a set of attacks (NIE [0080] “In at least one embodiment, an image can be received 402 (or otherwise obtained) that may contain one or more adversarial perturbations, or other such unauthorized modifications…In at least one embodiment, this image can be provided 404 as input to a diffusion network.”), a set of defenses (NIE [0080] “In at least one embodiment, over each of a number of forward iterations through layers of this diffusion network, noise (e.g., Gaussian noise) can be added 406 to this image…In at least one embodiment, a number of iterations can be determined ahead of time that can be sufficient to wash out these perturbations while retaining sufficient semantic structure for this image.”, [0579] “In at least one embodiment, customer dataset 4006 may be applied to initial model 4004 any number of times, and ground truth data may be used to update parameters of initial model 4004 until an acceptable level of accuracy is attained for refined model 4012.”), a list of student models (JAN [0022] “The model determination apparatus 1 is configured to generate a plurality of candidate models CM based on the training data TD.”), a maximum number of models in a pool of models (JAN [0039] “the processor 12 of the model determination apparatus 1 can sort the candidate models CM based on the first accuracy and the second accuracy of each of the candidate models CM and select at least one of the candidate models CM with the highest first accuracy and/or the highest second accuracy as the output model.”, [0054] “In some embodiments, the step of selecting the at least one output model further comprising: selecting a first candidate model having a highest first accuracy and a second candidate model having a highest second accuracy
as the at least one output model from the candidate models.”), and a threshold accepted accuracy(BEN [0061] “In one embodiment, the component cardinality and the deviation threshold are tuned meet the accuracy threshold. After the ensemble model is trained, multiple abridged models are identified using different “k” values and tested using different “s” values for each of the multiple abridged models. The abridged model that satisfies the accuracy threshold with the lowest “k” and the highest “s” is selected as the abridged model to be used in lieu of the ensemble model”). Regarding claim 7, NIE-BEN in view of JAN teaches: The method of claim 1, wherein the optimization loop includes adding models to the pool(JAN [0022] “The model determination apparatus 1 is configured to generate a plurality of candidate models CM based on the training data TD.”), determining an attack accuracy for the models in the pool (JAN [0030] “After generating multiple the candidate models CM, the model determination apparatus 1 further evaluates the recognition accuracy of the candidate models”) and generating a new ensemble model (BEN [0010] “The abridged model, being a subset of the ensemble model, uses fewer compute resources and may generate outputs more quickly than the ensemble model.”, [0027] “The training application (141) trains the ensemble model (133) using the update controller (151) and generates the abridged model (121) using the abridged model controller.”). 
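As another editorial aside (not part of the record), BEN's abridgement step quoted above — accept the smallest subset of components ("lowest k") whose accuracy still satisfies the threshold, otherwise keep the full ensemble — amounts to a simple scan over subset sizes; all names here are hypothetical:

```python
def abridge(component_scores, accuracy_threshold):
    """Scan subset sizes k from smallest to largest, keeping the k best-scoring
    components, and return the first subset whose mean score meets the
    accuracy threshold (the 'lowest k'); fall back to the full ensemble."""
    order = sorted(range(len(component_scores)),
                   key=lambda i: component_scores[i], reverse=True)
    for k in range(1, len(order) + 1):
        subset = order[:k]
        if sum(component_scores[i] for i in subset) / k >= accuracy_threshold:
            return subset
    return order
```

Scanning k upward guarantees the returned subset is the smallest that clears the threshold, which is the resource-saving point BEN relies on.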
Regarding claim 8, NIE-BEN in view of JAN teaches: The method of claim 7, further comprising generating the attack dataset (JAN [0018] “In some embodiments, the training data TD and the validation data AD can be labeled images when the model determination apparatus 1 is configured to train an image recognition model.”, [0024] “In some embodiments, the first adversarial attack adjustment comprises the processor 12 generating a first noise based on the initial model IM by using an adversarial attack function; and the processor 12 generating the adversarial training data TD′ based on the training data TD and the first noise.”).

Regarding claims 16-18, claims 16-18 recite substantially similar limitations as claims 6-8, but for recitation in the form of a non-transitory storage medium. NIE-BEN in view of JAN further teaches: A non-transitory storage medium (BEN [0076] “Software instructions in the form of computer readable program code to perform embodiments may be stored, in whole or in part, temporarily or permanently, on a computer program product that includes a non-transitory computer readable medium such as a CD”).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Kamryn Gillespie whose telephone number is 703-756-5498. The examiner can normally be reached on Monday through Thursday from 9am to 6pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Linglan Edwards can be reached on (571) 270-5440. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pairdirect.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /K.J.G./Examiner, Art Unit 2408 /LINGLAN EDWARDS/Supervisory Patent Examiner, Art Unit 2408

Prosecution Timeline

Apr 02, 2024 — Application Filed
Oct 06, 2025 — Non-Final Rejection (§103)
Dec 29, 2025 — Response Filed
Mar 04, 2026 — Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596795 — DETECTING A CURRENT ATTACK BASED ON SIGNATURE GENERATION TECHNIQUE IN A COMPUTERIZED ENVIRONMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596796 — Self-synchronous Side-Channel Attack Countermeasure (granted Apr 07, 2026; 2y 5m to grant)
Patent 12554859 — GENERATING 3-DIMENSIONAL MODELS AND CONNECTIONS TO PROVIDE VULNERABILITY CONTEXT (granted Feb 17, 2026; 2y 5m to grant)
Patent 12518004 — MITIGATING POINTER AUTHENTICATION CODE (PAC) ATTACKS IN PROCESSOR-BASED DEVICES (granted Jan 06, 2026; 2y 5m to grant)
Patent 12511376 — METHOD, SYSTEM, AND TECHNIQUES FOR PREVENTING ANALOG DATA LOSS (granted Dec 30, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 99% (+50.0%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
