Prosecution Insights
Last updated: April 19, 2026
Application No. 18/208,027

SYSTEMS AND METHODS FOR A FULL-STACK OBFUSCATION FRAMEWORK TO MITIGATE NEURAL NETWORK ARCHITECTURE THEFT

Status: Non-Final OA, §103
Filed: Jun 09, 2023
Examiner: KATZ, DYLAN MICHAEL
Art Unit: 3657
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Arizona Board of Regents
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (242 granted / 279 resolved; +34.7% vs TC avg, above average)
Interview Lift: +20.8% among resolved cases with interview
Typical Timeline: 2y 7m average prosecution; 45 applications currently pending
Career History: 324 total applications across all art units
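The headline figures are mutually consistent; a quick sanity check using only the counts shown on this page (242 granted, 279 resolved, 324 total):

```python
# Reproduce the dashboard's examiner statistics from the raw counts.
granted, resolved, total = 242, 279, 324

allow_rate = granted / resolved   # career allow rate
pending = total - resolved        # applications still open

print(f"{allow_rate:.1%}")        # 86.7%, displayed as 87%
print(pending)                    # 45, matching "45 currently pending"
```

The +34.7% delta vs the Tech Center average implies a TC baseline near 52%, consistent with the statute-level figures below.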

Statute-Specific Performance

§101: 7.7% (-32.3% vs TC avg)
§103: 50.0% (+10.0% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)

Tech Center averages are estimates; figures based on career data from 279 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-3, 11-14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Ohayon et al (US 20260017386, hereinafter Ohayon) in view of Luo et al (NNReArch: A Tensor Program Scheduling Framework Against Neural Network Architecture Reverse Engineering, hereinafter Luo).

Regarding Claim 1, Ohayon teaches:

1. A system for obfuscating the architecture of a neural network (see at least "In some embodiments, the system continuously infers the adversarial capability regime that likely applies to a given interaction and adapts fortification behavior accordingly by distinguishing among white-box, black-box, and no-box attack contexts;" in par. 0317), comprising: a processor in communication with a memory, the memory including instructions, which, when executed (see at least "Some embodiments provide a system comprising: one or more processors that are configured to execute code, wherein the one or more processors are operably associated with one or more memory units that are configured to store code. The system is configured to protect a Protected Engine" in par. 0142), cause the processor to: access a neural network comprising a sequence of one or more layers, each layer in the sequence of one or more layers having a plurality of dimension parameters (see at least "Then, the AI-based Model Fortification Unit 144 constructs a dataset of: (I) model features, such as, type, size, architecture features (e.g., layer sizes and types)" in par. 0106 and "In some embodiments, at least one of the Offline Protection Unit and the Online Protection Unit persistently instruments the Protected Engine during inference to interrogate internal operational states that are ordinarily opaque at the interface boundary, thereby capturing and analyzing intermediate activations across layers, attention distributions, gradient vectors and norms, and hidden-state tensors along the forward and backward computational pathways;" in par. 0314); access a plurality of obfuscation parameters for obfuscating the sequence of the one or more layers and the plurality of dimension parameters (see at least "gradient obfuscation and randomization schemes that preserve task accuracy while disrupting exploitative signal paths, stochastic parameterization and architecture randomization with per-request seeds anchored to hardware entropy, defensive distillation regimes that smooth decision surfaces in areas prone to adversarial amplification, and in-engine policy and parameter hardening enforced via signed configuration manifests and secure enclave attestation;" in par. 0320); and

Ohayon does not appear to explicitly teach all of the following, but Luo does teach:

obfuscate execution of the neural network, including application by the processor of a plurality of obfuscating operations to the neural network during an execution process of the neural network based on the plurality of obfuscation parameters such that an execution trace of the neural network is altered (see at least "A designer can formulate the scheduling of DNN execution as an optimization problem. For a given neural network architecture NN, we can extract a set of workload expression E that executes on a target acceleration device. Then, for a given workload e ∈ E, we can implement it with many different functionally equivalent low-level program codes inducing different EM traces, as observed in Sec. III-D and III-E. Therefore, each workload could have multiple equivalent schedules, i.e., Opt-config. We use Pse to denote the possible schedule space for e. For example, in VGG-19, there are 9 types of Conv2D layers with different Wop-configs, each of which is denoted as ei, i ∈ [1, 9] and has a set of Opt-config Psei" on page 5, and "For defense, one can use EM obfuscation. In details, a designer can follow Eq. 6 to calculate the OpCGEMM in each Conv2D layer and find all potential EM obfuscation Wop-configs and Opt-config, using Eq. 3 and 4. According to the combination theory, the best choice of layers to apply EM obfuscation is l/2, where l is the total number of layers. Therefore, the designer can randomly select 8 layers out of the 16 layers of VGG-19 to obfuscate. Consequently, the searching space of brute force attack will be increased to a huge number, since the Conv2D configurations reflected in the EM trace does not help with reverse engineering at all. Further, as Eq. 2 shows, S can be changed (elongated), which will affect the similarity calculation in brute force attack. It is hard for an attacker to find a unique correct Wop-config for the current layer. If they ignore S and only compare M, N, and wc, it will induce many possible Wop-configs." on page 6).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Ohayon to incorporate the teachings of Luo wherein obfuscation parameters are used to alter the execution schedule of the neural network computations so that the EM trace does not help with reverse engineering of model parameters or architecture. The motivation to incorporate the teachings of Luo would be to make reverse engineering more difficult (see page 6).

Regarding Claim 2, Ohayon as modified by Luo teaches the system of claim 1. Ohayon further teaches:

wherein the memory includes further instructions, which, when executed, cause the processor to: access a plurality of values for each obfuscation parameter in the plurality of obfuscation parameters (see at least "The optimization includes, for example, finding or detecting the best subset of defenses that are relevant and available, and choosing or setting or determining the best or most efficient defense parameter values. The optimization is multi-objective, since the defense would optimize robustness against different attacks in different scenarios along with the natural model accuracy" in par. 0104); access a time constraint for execution time of the neural network (see at least "The optimization is constrained since there are limitations on the output model's prediction running time and computational requirements, along with the fortification process duration and memory required for the fortification." in par. 0104); apply a profiling methodology to the neural network to generate a first profile (see at least "The model fortification is a multi-objective constrained optimization task. The task includes finding a model that is concurrently optimal in accuracy and parameters and also in its robustness to different threats, while being constrained by user requirements of prediction running time, prediction CPU usage and memory usage, or other constraints." in par. 0111); iteratively evaluate a metric for obfuscation by: selecting a value from the plurality of values for each obfuscation parameter, generating an obfuscated neural network from the neural network, applying the profiling methodology to the obfuscated neural network to generate a second profile, evaluating the metric for obfuscation based on the first profile and the second profile; output an updated obfuscated neural network based on the metric for obfuscation and the time constraint for execution time (see at least "The system may thus utilize an algorithm that dynamically adjusts or modifies or re-configures the model itself, namely the ML/DL/AI Engine 101 itself and its internal model, in a dynamic or online manner (e.g., once per day; twice per week; or even once per hour, or at other time intervals) according to the changing requirements and changing values of the above-mentioned parameters" in par. 0111).

Regarding Claim 3, Ohayon as modified by Luo teaches the system of claim 1. Ohayon does not appear to explicitly teach all of the following, but Luo does teach:

wherein the execution process includes a scripting step, an optimization step, and a scheduling step (see at least "A designer can formulate the scheduling of DNN execution as an optimization problem. For a given neural network architecture NN, we can extract a set of workload expression E that executes on a target acceleration device. Then, for a given workload e ∈ E, we can implement it with many different functionally equivalent low-level program codes inducing different EM traces, as observed in Sec. III-D and III-E. Therefore, each workload could have multiple equivalent schedules, i.e., Opt-config. We use Pse to denote the possible schedule space for e. For example, in VGG-19, there are 9 types of Conv2D layers with different Wop-configs, each of which is denoted as ei, i ∈ [1, 9] and has a set of Opt-config Psei." on page 5).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Ohayon to incorporate the teachings of Luo wherein obfuscation parameters are found by designing an optimization problem, optimizing for the best schedule parameters, and then scheduling execution accordingly. The motivation to incorporate the teachings of Luo would be to make reverse engineering more difficult (see page 6).

Regarding Claim 11, Ohayon as modified by Luo teaches the system of claim 1, the memory further including instructions, which, when executed, cause the processor to: Ohayon does not appear to explicitly teach all of the following, but Luo does teach:

apply, at the processor, a schedule modification obfuscation operation in the plurality of obfuscation operations, to generate a plurality of different schedules for an operator of the neural network (see at least "A designer can formulate the scheduling of DNN execution as an optimization problem. For a given neural network architecture NN, we can extract a set of workload expression E that executes on a target acceleration device. Then, for a given workload e ∈ E, we can implement it with many different functionally equivalent low-level program codes inducing different EM traces, as observed in Sec. III-D and III-E. Therefore, each workload could have multiple equivalent schedules, i.e., Opt-config. We use Pse to denote the possible schedule space for e. For example, in VGG-19, there are 9 types of Conv2D layers with different Wop-configs, each of which is denoted as ei, i ∈ [1, 9] and has a set of Opt-config Psei" on page 5, and "For defense, one can use EM obfuscation. In details, a designer can follow Eq. 6 to calculate the OpCGEMM in each Conv2D layer and find all potential EM obfuscation Wop-configs and Opt-config, using Eq. 3 and 4. According to the combination theory, the best choice of layers to apply EM obfuscation is l/2, where l is the total number of layers. Therefore, the designer can randomly select 8 layers out of the 16 layers of VGG-19 to obfuscate. Consequently, the searching space of brute force attack will be increased to a huge number, since the Conv2D configurations reflected in the EM trace does not help with reverse engineering at all. Further, as Eq. 2 shows, S can be changed (elongated), which will affect the similarity calculation in brute force attack. It is hard for an attacker to find a unique correct Wop-config for the current layer. If they ignore S and only compare M, N, and wc, it will induce many possible Wop-configs." on page 6).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Ohayon to incorporate the teachings of Luo wherein obfuscation parameters are used to alter the execution schedule of the neural network computations so that the EM trace does not help with reverse engineering of model parameters or architecture. The motivation to incorporate the teachings of Luo would be to make reverse engineering more difficult (see page 6).

Regarding Claim 12, Ohayon as modified by Luo also teaches a method for implementing the system of Claim 1 (see Claim 1 analysis for rejection of the system).

Regarding Claim 13, Ohayon as modified by Luo also teaches a method for implementing the system of Claim 3 (see Claim 3 analysis for rejection of the system).

Regarding Claim 14, Ohayon as modified by Luo teaches the method of claim 13, further comprising: Ohayon also teaches: applying, at the processor, a set of obfuscation knobs during the scripting step of the execution process that collectively obfuscate a layer sequence of the neural network model and dimensions of one or more layer operators of the neural network model (see at least "Some embodiments may utilize a Protection Policy Configuration Unit 167 or similar design and configuration tool, which enables a user to configure, modify and optimize multiple dependent and independent parameters, such as model accuracy on natural inputs, model accuracy under each type of attack, computational resources for model prediction, and/or other parameters or constraints, in order to increase robustness of the ML/DL/AI Engine 101 against attacks or threats." in par. 0129 and "In some embodiments, the protection platform or the protection system may include or may provide a "sandbox" experimentation tool or feature, which enables the user (e.g., the owner or administrator of the protected ML/DL/AI Engine) to experiment and explore and evaluate different protection methods or different protection schemes or different protection configurations, and/or different vulnerabilities or attacks, in a controlled or "sand-boxed" environment which does not cause actual damage to the protected ML/DL/AI Engine." in par. 0135).

Claim(s) 4-8, 10, 16-19 is/are rejected under 35 U.S.C.
103 as being unpatentable over Ohayon et al (US 20260017386, hereinafter Ohayon) in view of Luo et al (NNReArch: A Tensor Program Scheduling Framework Against Neural Network Architecture Reverse Engineering, hereinafter Luo) and Marson et al (US 20220197981, hereinafter Marson).

Regarding Claim 4, Ohayon as modified by Luo teaches the system of claim 1. Ohayon and Luo do not appear to explicitly teach all of the following, but Marson does teach:

the memory further including instructions, which, when executed, cause the processor to: increase, by a layer widening obfuscation operation in the plurality of obfuscation operations, a number of input or output channels in one or more layers of the neural network for increasing a number of memory accesses in the execution trace (see at least "To obfuscate operations leading to the output y, NOE 105 may generate n−1 additional (dummy) weight vectors and expand (block 208) the weight vector {right arrow over (w)} into an n×p weight matrix" in par. 0029 and "In some implementations, inconsequential operations may extend to an entire layer of dummy nodes. Dummy operations, dummy nodes, and dummy layers may not only make it more difficult for an attacker to identify parameters of nodes, but also obfuscate the topology (number of nodes, layers, edges) of the neural network." in par. 0068).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Ohayon as modified by Luo to incorporate the teachings of Marson wherein layer obfuscation is achieved by expanding the neural network layers with dummy nodes. The motivation to incorporate the teachings of Marson would be to make it more difficult for an attacker to discern the node parameters and the topology of the neural network (see par. 0068).

Regarding Claim 5, Ohayon as modified by Luo teaches the system of claim 1. Ohayon and Luo do not appear to explicitly teach all of the following, but Marson does teach:

the memory further including instructions, which, when executed, cause the processor to: increase, by a layer branching obfuscation operation in the plurality of obfuscation operations, a number of layer operators in the neural network for changing a volume of data accessed in the execution trace (see at least "FIG. 6D illustrates an example implementation of dummy (inconsequential) outputs from a node that may be used for obfuscation of neural network operations. Shown is a node 630 that outputs multiple output values {O.sub.i} into nodes 631 and 632. Node 631 may use output values {O.sub.i} for real computations whereas node 632 may use the same output values for dummy computations. Accordingly, outputs {O.sub.i} are consequential inputs 633 when input into node 631 (the inputs affect the output of the neural network execution) but are inconsequential inputs 634 when input into node 632 (do not affect the output of the neural network execution). In some implementations, producing outputs that may be used as both consequential and inconsequential inputs into other nodes may include receiving node 630 inputs {x.sub.i} and determining node 630 weighted input value z (which, in some implementations, may be obfuscated using techniques disclosed above)." in par. 0072).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Ohayon as modified by Luo to incorporate the teachings of Marson wherein layer obfuscation is achieved by branching outputs of a node to real nodes and dummy nodes in the next layer. The motivation to incorporate the teachings of Marson would be to make it more difficult for an attacker to discern the node parameters and the topology of the neural network (see par. 0068).

Regarding Claim 6, Ohayon as modified by Luo teaches the system of claim 1, the memory further including instructions, which, when executed, cause the processor to: Ohayon and Luo do not appear to explicitly teach all of the following, but Marson does teach:

apply, by a dummy addition obfuscation operation in the plurality of obfuscation operations, one or more additive identity operations to an output of one or more layers in the neural network for increasing a number of cache accesses by the execution trace (see at least "Outputs of nodes 621 and 623 may additionally be input into one or more dummy nodes, such as various zero-output and constant output-nodes to make it more difficult for a potential attacker to identify the pass-through character of the cluster." in par. 0072 and "a bias value of node 644 may be chosen to ensure that the output of the cancelling cluster, f.sub.4(z.sub.4+b), is always zero, although any other constant value may be output. Node 644 may have additional inputs (shown by arrows) whose contributions into the output of node 644 do not cancel. Accordingly, node 644 may serve both as an obfuscating node and an active node that contributes to NN operations." in par. 0075).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Ohayon as modified by Luo to incorporate the teachings of Marson wherein node pairs with canceling activation functions are added to create obfuscating dummy operations. The motivation to incorporate the teachings of Marson would be to make it more difficult for an attacker to discern the node parameters and the topology of the neural network (see par. 0068).

Regarding Claim 7, Ohayon as modified by Luo teaches the system of claim 1. Ohayon and Luo do not appear to explicitly teach all of the following, but Marson does teach:

the memory further including instructions, which, when executed, cause the processor to: insert, by a layer deepening obfuscation operation in the plurality of obfuscation operations, one or more computational layers in series with one or more existing layers in the neural network for increasing a number of computations in the execution trace (see at least "In some implementations, inconsequential operations may extend to an entire layer of dummy nodes. Dummy operations, dummy nodes, and dummy layers may not only make it more difficult for an attacker to identify parameters of nodes, but also obfuscate the topology (number of nodes, layers, edges) of the neural network." in par. 0068).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Ohayon as modified by Luo to incorporate the teachings of Marson wherein entire layers of dummy nodes are added for obfuscation. The motivation to incorporate the teachings of Marson would be to make it more difficult for an attacker to discern the node parameters and the topology of the neural network (see par. 0068).

Regarding Claim 8, Ohayon as modified by Luo teaches the system of claim 1, the memory further including instructions, which, when executed, cause the processor to: Ohayon and Luo do not appear to explicitly teach all of the following, but Marson does teach:

insert, by a layer skipping obfuscation operation in the plurality of obfuscation operations, one or more computational layers in parallel to one or more existing layers in the neural network for increasing a number of computations in the execution trace (see at least "In some implementations, multiple nodes having inconsequential inputs, which may include pass-through nodes, constant-output nodes, canceling nodes, or any other nodes described in relation to FIGS. 6A-E may be joined into multi-node clusters. In some implementations, an entire dummy layer of pass-through nodes may be formed to obfuscate a total number of layers in a NN and other features of the architecture of the NN." in par. 0078).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Ohayon as modified by Luo to incorporate the teachings of Marson wherein entire layers of pass-through nodes are used to obfuscate the actual number of layers in the model. The motivation to incorporate the teachings of Marson would be to make it more difficult for an attacker to discern the node parameters and the topology of the neural network (see par. 0068).

Regarding Claim 10, Ohayon as modified by Luo teaches the system of claim 1, the memory further including instructions, which, when executed, cause the processor to: Ohayon and Luo do not appear to explicitly teach all of the following, but Marson does teach:

apply, at the processor, a selective fusion obfuscation operation in the plurality of obfuscation operations to fuse one or more successive operators (see at least "Node 644 may have additional inputs (shown by arrows) whose contributions into the output of node 644 do not cancel. Accordingly, node 644 may serve both as an obfuscating node and an active node that contributes to NN operations." in par. 0075).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system taught by Ohayon as modified by Luo to incorporate the teachings of Marson wherein successive activation functions of nodes can selectively cancel to act as a dummy operation or not cancel to be part of real network computations. The motivation to incorporate the teachings of Marson would be to make it more difficult for an attacker to discern the node parameters and the topology of the neural network (see par. 0068).

Regarding Claim 16, Ohayon as modified by Luo and Marson also teaches a method for implementing the system of Claim 5 (see Claim 5 analysis for rejection of the system).

Regarding Claim 17, Ohayon as modified by Luo and Marson also teaches a method for implementing the system of Claim 6 (see Claim 6 analysis for rejection of the system).

Regarding Claim 18, Ohayon as modified by Luo and Marson also teaches a method for implementing the system of Claim 7 (see Claim 7 analysis for rejection of the system).

Regarding Claim 19, Ohayon as modified by Luo and Marson also teaches a method for implementing the system of Claim 8 (see Claim 8 analysis for rejection of the system).

Allowable Subject Matter

Claims 9, 15, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: The closest prior art comes from Ohayon, Luo, and Marson for all claims. For Claims 9 and 20, the prior art does not appear to teach widening a kernel of a Conv2D operator to obfuscate the execution trace of the neural network in combination with all of the other limitations in the claims. For Claim 15, the prior art does not appear to teach adjusting more than one consecutive layer with a specific widening factor in combination with all of the other limitations in the claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DYLAN M KATZ whose telephone number is (571) 272-2776. The examiner can normally be reached Mon-Thurs. 8:00-6:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Abby Lin, can be reached on (571) 270-3976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DYLAN M KATZ/
Primary Examiner, Art Unit 3657
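For orientation on the technology at issue: the Luo-based rejections turn on choosing among functionally equivalent execution schedules so that a side-channel (EM) trace no longer maps one-to-one to layer configurations. A minimal Python sketch of that idea follows; the layer names and tiling configurations are entirely hypothetical illustrations, not taken from the application or the cited references:

```python
import random

# Hypothetical per-operator schedule space: each entry lists functionally
# equivalent loop configurations ("Opt-configs" in Luo's terminology) that
# compute the same result but induce different execution traces.
SCHEDULE_SPACE = {
    "conv2d_3x3": [{"tile": (4, 4), "unroll": 1},
                   {"tile": (8, 2), "unroll": 2},
                   {"tile": (2, 8), "unroll": 4}],
    "dense":      [{"tile": (16,), "unroll": 1},
                   {"tile": (32,), "unroll": 2}],
}

def obfuscated_schedule(layers, seed):
    """Pick one equivalent schedule per layer from a per-request seed,
    so repeated runs of the same model yield different trace signatures."""
    rng = random.Random(seed)
    return [(name, rng.choice(SCHEDULE_SPACE[name])) for name in layers]

layers = ["conv2d_3x3", "dense", "conv2d_3x3"]
plan_a = obfuscated_schedule(layers, seed=1)
plan_b = obfuscated_schedule(layers, seed=2)
# plan_a and plan_b are functionally equivalent execution plans; an attacker
# matching observed traces against a dictionary of known configurations sees
# a different signature per seed.
```

Because every schedule in the space computes the same outputs, model accuracy is unchanged; only the trace varies with the seed, which is roughly the property the examiner reads onto the "execution trace ... is altered" limitation of claim 1.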

Prosecution Timeline

Jun 09, 2023: Application Filed
Feb 02, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596378: Autonomous Control and Navigation of Unmanned Vehicles (granted Apr 07, 2026; 2y 5m to grant)
Patent 12594663: Robot System and Cart (granted Apr 07, 2026; 2y 5m to grant)
Patent 12589499: Mobile Construction Robot (granted Mar 31, 2026; 2y 5m to grant)
Patent 12589491: Methods, Systems, and Devices for Motion Control of at Least One Working Head (granted Mar 31, 2026; 2y 5m to grant)
Patent 12582491: Control of a Surgical Instrument Having Backlash, Friction, and Compliance Under External Load in a Surgical Robotic System (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview: 99% (+20.8%)
Median Time to Grant: 2y 7m
PTA Risk: Low

Based on 279 resolved cases by this examiner; grant probability derived from career allow rate.
