DETAILED ACTION
The objection to the specification is withdrawn based on the amendments filed 11/19/2025.
The objection to the claim is withdrawn based on the amendments filed 11/19/2025.
Claims 1-19 are rejected.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 11/19/2025 have been fully considered but they are not persuasive.
Regarding applicant’s argument that claim 1 has been amended to overcome the 101 rejection, Examiner respectfully disagrees. The claim limitation “finding a vulnerability in the software code based on said performing” is not integrated into a practical application because there is no recitation of mitigation or action being taken in response to finding the vulnerability. Therefore, claims 1-19 remain rejected.
Regarding applicant’s argument that Arakelyan does not teach “looping through multiple execution states within a training epoch”, Examiner respectfully disagrees. In Arakelyan section 3.2 Graph Convolutional Networks, Arakelyan teaches for-looping through each node and performing backpropagation. Arakelyan further explains in section 3.1 Program Graphs that a binary program is disassembled and constructed into a control flow graph. A control flow graph is a directed graph that shows all potential execution paths of a program. Arakelyan further elaborates in Figure 4 that the program graph is input into a graph convolution layer, where each execution state is converted into a node representation. For that reason, Arakelyan teaches the claim limitation “looping through multiple execution states within a training epoch”.
Regarding applicant’s argument that Arakelyan does not teach “sampling over the distribution of the multiple execution states and retrieving a maximum value”, Examiner respectfully disagrees. As mentioned above, Arakelyan teaches a control flow graph which is a directed graph that shows all potential execution paths of a program. Arakelyan also teaches, in section 3.2 Graph Convolutional Networks, “sampling over the distribution of the multiple execution states and retrieving a maximum value” by aggregating the representation of the entire graph using a simple sum aggregate.
For the above reasons, the rejections of claims 2, 3, 6-9, 11, 12, and 15-18 are maintained.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 1-19 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding claims 1, 10, and 19, the claims recite the limitation “the distribution”. There is insufficient antecedent basis for this limitation in the claim. Claims 2-9 and 11-18 inherit this rejection.
Regarding claims 9 and 18, the claims are rejected as being indefinite because it is not clear what the scope of “concrete” is in “concrete execution path”.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1, 10, 19 recite pre-processing input for a learning model, manipulating the input through the learning model, computing an output for the learning model and performing a prediction task.
Claims 2, 11 recite pre-processing the input and manipulating the learning model.
Claims 3-4, 12-13 recite manipulating the learning model.
Claims 5-6, 14-15 recite additional manipulation of the learning model and mathematical computations.
Claims 7-9, 16-18 recite analysis and manipulation of data.
These are all mental steps and therefore they are abstract. This judicial exception is not integrated into a practical application because the additional steps of manipulating and analyzing data do not add a meaningful limitation to the method, as the prediction task appears to be additional generation of data. The limitation "looping through multiple execution states within a training epoch" does not amount to significantly more since the "looping" may simply be further analyzing or manipulating of data.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1, 4-5, 10, 13-14, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over “Bin2vec: learning representations of binary executable programs for security tasks” to Arakelyan et al. (Arakelyan) in view of “Agent-based Graph Neural Networks” to Martinkus et al. (Martinkus) and in further view of “Implicit Graph Neural Networks” to Gu et al. (Gu).
Regarding claim 1, Arakelyan teaches a method at a computing device for vulnerability detection in software code, the method comprising: creating a node representation of the software code (Arakelyan [3.1 Program Graphs], e.g., We start by disassembling the binary program and constructing a control flow graph (CFG)); performing state transition and topology learning on the node representation (Arakelyan [3.2 Graph Convolutional Networks], e.g., For each node, it averages the features of that node with features of its neighbors. Features of different nodes are scaled differently in the process of averaging and these weights are learned, i.e. they are the parameters of the graph convolutional layer), the performing comprising: looping through multiple execution states within a training epoch (Arakelyan [3.2 Graph Convolutional Networks], e.g., For each node, it averages the features of that node with features of its neighbors. Features of different nodes are scaled differently in the process of averaging and these weights are learned, i.e. they are the parameters of the graph convolutional layer. After the averaging, each node is assigned the resulting vector as its new feature vector and we proceed to either apply a different graph convolutional layer, or compute the loss and perform backpropagation to update the parameters); sampling over the distribution of the multiple execution states (Arakelyan [3.2 Graph Convolutional Networks], e.g., To get the representation of the entire graph, we can aggregate the features of all nodes in the graph. 
Here it is possible to use any aggregation function - summation, averaging, or even a neural attention mechanism, but in our experiments we went for a simple sum aggregate) [and retrieving a maximum value]; [performing agent re-parameterization; capturing intermediate execution paths; selecting an execution path from the intermediate execution paths and generating a state-dependent adjacency matrix; using the state-dependent adjacency matrix and node representation with an implicit Graph Neural Network to find an equilibrium vector state; and using the equilibrium vector state to perform a prediction task]; and finding a vulnerability in the software code based on said performing (Arakelyan [Abstract], e.g., We introduce Bin2vec, a new approach leveraging Graph Convolutional Networks (GCN) along with computational program graphs in order to learn a high dimensional representation of binary executable programs. We demonstrate the versatility of this approach by using our representations to solve two semantically different binary analysis tasks– functional algorithm classification and vulnerability discover).
Arakelyan does not explicitly teach, but Martinkus teaches retrieving a maximum value (Martinkus [2.1 Graph classification with GNNs], e.g., While many graph coarsening and pooling approaches have been proposed, simple pooling functions such as sum, mean, or max followed by a linear layer or a Multilayer Perceptron (MLP) have proved to usually be the best choice); performing agent re-parameterization (Martinkus [3. AgentNet model], e.g., Now the agent has collected all possible information at the current node it is ready to make a transition to another node……… As we need a categorical sample for the next node position, we use the straight-through Gumbel softmax estimator); capturing intermediate execution paths (Martinkus [3. AgentNet model], e.g., Let’s now consider a Simplified AgentNet version of this model, where the agent can only decide if it prefers transitions to explored nodes, unexplored nodes, going back to the previous node, or staying on the current one); selecting an execution path from the intermediate execution paths and generating a state-dependent adjacency matrix (Martinkus [C.2 Model steps and possible extension], e.g., We use the resulting agent → node adjacency matrix for agent pooling in the node update step and for selecting the appropriate node in the agent update step, thus multiplying the 1s carrying the gradients with the correct embedding vectors);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have modified the teachings of Arakelyan with the teachings of Martinkus with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make the modification for the benefit of training agents to systematically traverse graphs and find structures that are indistinguishable by 3-WL (Martinkus [Abstract], e.g., We show that the agents can learn to systematically explore their neighborhood and that AgentNet can distinguish some structures that are even indistinguishable by 3-WL. Moreover, AgentNet is able to separate any two graphs which are sufficiently different in terms of subgraphs).
Arakelyan and Martinkus do not explicitly teach, but Gu teaches using the state-dependent adjacency matrix and node representation with an implicit Graph Neural Network to find an equilibrium vector state (Gu [4 Implicit Graph Neural Network], e.g., an IGNN seeks the fixed point of equation (2b) that is trained to give the desired representation for the task……… IGNN models can generalize to heterogeneous networks with different adjacency matrices Ai and input features Ui for different relations……… The representation, given as the “internal state” X ∈ R^(m×n) in the rest of the paper, is obtained as the fixed-point solution of the equilibrium equation (2b)); and using the equilibrium vector state to perform a prediction task (Gu [4 Implicit Graph Neural Network], e.g., The prediction rule (2a) computes the prediction Yˆ by feeding the state X through the output function fΘ……… The representation, given as the “internal state” X ∈ R^(m×n) in the rest of the paper, is obtained as the fixed-point solution of the equilibrium equation (2b)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have modified the combined teachings of Arakelyan and Martinkus with the teachings of Gu with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make the modification for the benefit of obtaining a final representation with information from all neighbors of graph and a more efficient way to capture long-range dependencies in a graph (Gu [4.2 Implicit Graph Neural Network], e.g., Unlike most existing methods that iterate (1) for a finite number of steps, an IGNN seeks the fixed point of equation (2b) that is trained to give the desired representation for the task. Evaluation of fixed point can be regarded as iterating (1) for an infinite number of times to achieve a steady state. Thus, the final representation potentially contains information from all neighbors in the graph; [Abstract], e.g., Due to the finite nature of the underlying recurrent structure, current GNN methods may struggle to capture long-range dependencies in underlying graphs).
Regarding claim 4, most of the limitations of this claim have been noted in the rejection of claim 1. Arakelyan does not explicitly teach, but Martinkus teaches wherein the selecting the execution path is performed by a reinforcement agent at(st-1), where t is a layer and st-1 is a previous state (Martinkus Figure 1, e.g., AgentNet architecture. We have many neural agents walking the graph (a). Each agent at every step records information on the node, investigates its neighborhood, and makes a probabilistic transition to another neighbor (b). If the agent has walked a cycle (c) or a clique (d) it can notice; Note agents learn based on previous states as seen in Figure 1c and 1d).
The motivation to combine is the same as that of claim 1.
Regarding claim 5, most of the limitations of this claim have been noted in the rejection of claim 1. Arakelyan does not explicitly teach, but Martinkus teaches wherein the agent re-parameterization uses a Gumbel softmax algorithm (Martinkus [3 AgentNet model], e.g., As we need a categorical sample for the next node position, we use the straight-through Gumbel softmax estimator).
The motivation to combine is the same as that of claim 1.
Regarding claim 10, Arakelyan teaches A computing device configured for vulnerability detection in software code, the computing device comprising: a processor; and a memory; wherein the computing device is configured to: create a node representation of the software code (Arakelyan [3.1 Program Graph] We start by disassembling the binary program and constructing a control flow graph (CFG). We use static inter-procedural CFGs, which we construct using the angr library).
Arakelyan does not explicitly teach a computing device comprising a processor and memory, however, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Arakelyan to include a computing device comprising a processor and memory because disassembling a binary program is typically done through software as binary code is difficult for people to read and understand. Additionally, the angr library is an open source binary analysis platform for Python, which requires a processor and memory to run.
The rest of the claim 10 recites a computing device of the method of claim 1, and is similarly analyzed.
Regarding claim 13, the claim recites a device of the method of claim 4, and is similarly analyzed.
Regarding claim 14, the claim recites a device of the method of claim 5, and is similarly analyzed.
Regarding claim 19, Arakelyan teaches a non-transitory computer readable medium for storing instruction code, which, when executed by a processor of a computing device configured for vulnerability detection in software code, cause the computing device to: create a node representation of the software code (Arakelyan [3.1 Program Graph] We start by disassembling the binary program and constructing a control flow graph (CFG). We use static inter-procedural CFGs, which we construct using the angr library).
Arakelyan does not explicitly teach a computing device comprising a processor and memory, however, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Arakelyan to include a computing device comprising a processor and memory because disassembling a binary program is typically done through software as binary code is difficult for people to read and understand. Additionally, the angr library is an open source binary analysis platform for Python, which requires a processor and memory to run.
The rest of claim 19 recites a computer readable medium of the method of claim 1, and is similarly analyzed.
Claim(s) 2 and 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arakelyan in view of Martinkus and Gu, and in further view of “VDGraph2Vec: Vulnerability Detection in Assembly Code using Message Passing Neural Networks” to Diwan et al. (Diwan).
Regarding claim 2, most of the limitations of this claim have been noted in the rejection of claim 1. Arakelyan further teaches wherein the creating the node representation comprises: transforming software code to assembly code (Arakelyan [3.1 Program Graphs], e.g., We start by disassembling the binary program); performing pre-processing and tokenization on the assembly code to create a Control Flow Graph (Arakelyan [3.1 Program Graphs], e.g., each basic block in CFG is executed linearly allows us to unfold the instructions within each basic block and represent them as a directed, computational tree……… To connect the trees in the forest we add Source and Sink nodes at the beginning and at the end of each basic block as a parent, or correspondingly a child……… resulting graphs are then connected following the same topology that basic blocks originally had in the CFG……… As a last step, we remove redundant edges and nodes……… After the graph construction is complete); [applying an embedding layer to increase subspace representation power; and computing block embedding using a maximum or average pooling along a time dimension].
Arakelyan, Martinkus, and Gu do not explicitly teach, but Diwan teaches applying an embedding layer to increase subspace representation power (Diwan [A Preliminaries 2) Word Embedding], e.g., Word2Vec [30] is an extremely popular algorithm in natural language processing to capture dense learned representations of text in such a way that words with the same meaning tend to have similar representations; [B. Model], e.g., We employ Word2Vec in our setting); and computing block embedding using a maximum or average pooling along a time dimension (Diwan [B. Model], e.g., We employ Word2Vec in our setting for learning the block embeddings by taking the average of all the instruction embeddings in the block).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have modified the combined teachings of Arakelyan, Martinkus, and Gu with the teachings of Diwan with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make the modification for the benefit of effectively embedding control flow and semantic information of assembly code (Diwan [Abstract], e.g., In this research, we propose VDGraph2Vec, an automated deep learning method to generate representations of assembly code for the task of vulnerability detection. Previous approaches failed to attend to topological characteristics of assembly code while discovering the weakness in the software. VDGraph2Vec embeds the control flow and semantic information of assembly code effectively using the expressive capabilities of message passing neural networks and the RoBERTa model. Our model is able to learn the important features that help distinguish between vulnerable and non-vulnerable software).
Regarding claim 11, the claim recites a device of the method of claim 2, and is similarly analyzed.
Claim(s) 3 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arakelyan in view of Martinkus, Gu and Diwan, and in further view of “DouBiGRU-A: Software defect detection algorithm based on attention mechanism and double BiGRU” to Zhao et al. (Zhao).
Regarding claim 3, most of the limitations of this claim have been noted in the rejection of claim 2. Arakelyan, Martinkus, and Gu do not explicitly teach, but Diwan teaches wherein the applying the embedding layer creates a [bi-directional] Gate Recurrent Unit (Diwan [A. Preliminaries 1) Message Passing Neural Networks] each node v is updated using the previous node state (htv) and the current message state (mvt+1) with a gated recurrent unit (GRU)).
The motivation to combine Arakelyan, Martinkus, and Gu with Diwan is the same as that of claim 2.
Arakelyan, Martinkus, Gu, and Diwan do not explicitly teach, but Zhao teaches the Gate Recurrent Unit being bi-directional (Zhao [3.3 Software vulnerability detection algorithm design], e.g., The BiGRU in the RNNs uses the same gate for forgetting and for selection memory).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have modified the combined teachings of Arakelyan, Martinkus, Gu, and Diwan with the teachings of Zhao with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make the modification for the benefit of saving calculation cost and reducing computation time (Zhao [3.3 Software vulnerability detection algorithm design], e.g., The BiGRU in the RNNs uses the same gate for forgetting and for selection memory, which reduces the number of parameters and can save considerable calculation costs and reduce the required time (Li et al., 2018). In addition, BiGRU has stronger contextual logic relationship memory and screening capabilities than GRU. BiGRU can thus learn more features in the vulnerability code data set and consume fewer computing resources and less time).
Regarding claim 12, the claim recites a device of the method of claim 3, and is similarly analyzed.
Claim(s) 6 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arakelyan in view of Martinkus and Gu, and in further view of “Equilibrium Approaches to Modern Deep Learning” to Bai (Bai).
Regarding claim 6, most of the limitations of this claim have been noted in the rejection of claim 1. Arakelyan does not explicitly teach, but Martinkus teaches [wherein the prediction task comprises: performing layer normalization on the equilibrium vector state]; using global average pooling to obtain a graph representation (Martinkus [2.1 Graph Classification with GNNs], e.g., While many graph coarsening and pooling approaches have been proposed, simple pooling functions such as sum, mean, or max); and computing a linear transformation on the graph representation to complete the prediction task (Martinkus [2.1 Graph Classification with GNNs], e.g., To classify the resulting graph as a whole, a readout step is needed to aggregate all of the node embeddings……… followed by a linear layer).
The motivation to combine Arakelyan and Martinkus is the same as that of claim 1.
Arakelyan, Martinkus, and Gu do not explicitly teach, but Bai teaches wherein the prediction task comprises: performing layer normalization on the equilibrium vector state (Bai [4.2.4 Brittleness to Architectural Choices], e.g., For example, the largest-scale DEQs [18, 19] that we have presented so far in this thesis all had normalizations [13, 246] at the end of the layer to constrain the output range; Note the output of a DEQ is a fixed point, which is the stable state of a transformation).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have modified the combined teachings of Arakelyan, Martinkus, and Gu with the teachings of Bai with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make the modification for the benefit of constraining and stabilizing the deep equilibrium model (Bai [3.2 Integration with Other Deep Learning Techniques], e.g., Layer normalization of hidden activations in fθ played an important role in constraining the output and stabilizing DEQs on sequences [18]).
Regarding claim 15, the claim recites a device of the method of claim 6, and is similarly analyzed.
Claim(s) 7 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arakelyan in view of Martinkus, Gu, and Bai, and in further view of “Multi-context Attention Fusion Neural Network for Software Vulnerability Identification” to Tanwar et al. (Tanwar).
Regarding claim 7, most of the limitations of this claim have been noted in the rejection of claim 6. Arakelyan, Martinkus, Gu, and Bai do not explicitly teach, but Tanwar teaches finding labels and losses from the linear transformation (Tanwar [4.3. AST Path Sequence Decoder], e.g., The feature maps are further propagated down multiple linear layers that sequentially funnel salient features for classification. The final feature tensor ŷ ∈ R^(2×1) is softmax activated and optimized against cross-entropy loss for target label y, as given in Eq. 9).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have modified the teachings of Arakelyan, Martinkus, Gu, and Bai with the teachings of Tanwar with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make the modification for the benefit of efficiently detecting code vulnerability and pinpointing the code sections with vulnerabilities (Tanwar [Abstract], e.g., Utilizing the code AST structure, our model builds an accurate understanding of code semantics with a lot less learnable parameters. Besides a novel way of efficiently detecting code vulnerability, an additional novelty in this model is to exactly point to the code sections, which were deemed vulnerable by the model. Thus helping a developer to quickly focus on the vulnerable code sections; and this becomes the “explainable” part of the vulnerability detection).
Regarding claim 16, the claim recites a device of the method of claim 7, and is similarly analyzed.
Claim(s) 8 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arakelyan in view of Martinkus and Gu, and in further view of “Efficient and Scalable Implicit Graph Neural Networks with Virtual Equilibrium” to Chen et al. (Chen).
Regarding claim 8, most of the limitations of this claim have been noted in the rejection of claim 1. Arakelyan, Martinkus, and Gu do not explicitly teach, but Chen teaches training of the implicit Graph Neural Network with a backward pass (Chen [Training Inefficiency], e.g., During each update of model parameters, we have to solve two fixed-point equations (Eq. (4) in the forward pass and Eq. (7) in the backward pass)), the training comprising providing a gradient at the equilibrium vector state to the node representation (Chen [Training Inefficiency], e.g., During each update of model parameters, we have to solve two fixed-point equations (Eq. (4) in the forward pass and Eq. (7) in the backward pass), [C. Theoretical Analysis], e.g., As described before, in the backward pass, we back-propagate through K steps of fixed-point iterations and take the gradient as an estimate of the exact value. We denote ∇θzK as the estimated gradient obtained from backpropagation; Note that backpropagation is a calculation of the loss gradient and the backpropagation is used to update the model parameters).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have modified the combined teachings of Arakelyan, Martinkus, and Gu with the teachings of Chen with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make the modification for the benefit of improving efficiency and scalability (Chen [Section I. Introduction], e.g., The core idea of VEQ is to recycle the equilibrium calculated from previous model updates. Utilizing them as an informative prior, we can avoid finding equilibrium from scratch each time and enable mini-batch GEQ training. On the one hand, since parameters of GEQs only change slightly between model updates, the previous equilibrium is close to the current one, which largely reduces the iteration number needed for root-finding………. As a result, VEQ could be both efficient (with only a few root-finding iterations) and scalable (with only mini-batch nodes and their 1-hop neighbors)).
Regarding claim 17, the claim recites a device of the method of claim 8, and is similarly analyzed.
Claim(s) 9 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Arakelyan in view of Martinkus and Gu and in further view of Tanwar.
Regarding claim 9, most of the limitations of this claim have been noted in the rejection of claim 1. Arakelyan, Martinkus and Gu do not explicitly teach, but Tanwar teaches wherein the training epoch contains a full iteration of an execution session corresponding to a concrete execution path (Tanwar [4. Proposed Work], e.g., Features to train the model are derived as a set of concrete paths running between leaf nodes in the code Abstract Syntax Tree. The path nodes along with the terminal end nodes constitute a path context. By traversing the AST from left to right, such path contexts can be mined and framed into a sequence).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to have modified the teachings of Arakelyan, Martinkus, and Gu with the teachings of Tanwar with reasonable expectation of success. One of ordinary skill in the art would have been motivated to make the modification for the benefit of efficiently detecting code vulnerability and pinpointing the code sections with vulnerabilities (Tanwar [Abstract], e.g., Utilizing the code AST structure, our model builds an accurate understanding of code semantics with a lot less learnable parameters. Besides a novel way of efficiently detecting code vulnerability, an additional novelty in this model is to exactly point to the code sections, which were deemed vulnerable by the model. Thus helping a developer to quickly focus on the vulnerable code sections; and this becomes the “explainable” part of the vulnerability detection).
Regarding claim 18, the claim recites a device of the method of claim 9, and is similarly analyzed.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20220244953 A1 to Ji et al. discloses generating a control flow graph from a target binary code and inputting the target binary code into a graph neural network to compare the target binary code with a comparing binary.
US 20240354424 A1 to Najafirad et al., which has a priority date of 4/21/2023, discloses a sample program undergoing tokenization. Each token represents one of poacher flow edge, data flow edge, control flow edge, or a sequential flow edge. A RoBERTa layer generates embeddings for each token/node and a graph convolution network layer takes the node embeddings and adjacency matrix for feature generation.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAWRENCE Q TRUONG whose telephone number is (571)272-6973. The examiner can normally be reached Monday - Friday, 7:30 am - 5 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kambiz Zand can be reached at (571) 272-3811. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LAWRENCE Q TRUONG/
Examiner, Art Unit 2434

/ALI SHAYANFAR/
Supervisory Patent Examiner, Art Unit 2434