Prosecution Insights
Last updated: April 19, 2026
Application No. 17/602,372

APPARATUS AND METHOD FOR OBTAINING VULNERABLE TRANSACTION SEQUENCE IN SMART CONTRACT

Final Rejection: §103, §112
Filed
Oct 08, 2021
Examiner
VU, TAYLOR P
Art Unit
2437
Tech Center
2400 — Computer Networks
Assignee
Korea University Research And Business Foundation
OA Round
6 (Final)
Grant Probability: 81% (Favorable)
OA Rounds: 7-8
To Grant: 3y 3m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 81% (21 granted / 26 resolved; +22.8% vs TC avg, above average)
Interview Lift: +12.8% (moderate), among resolved cases with an interview
Typical Timeline: 3y 3m avg prosecution; 30 currently pending
Career History: 56 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 72.0% (+32.0% vs TC avg)
§102: 2.2% (-37.8% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 26 resolved cases.
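As a sanity check, the headline percentages above follow from the raw counts shown; a minimal sketch (the tech-center baselines are not given directly and are inferred here from the displayed deltas, so they are estimates, not primary data):

```python
# Reproduce the examiner-statistics arithmetic shown above.
# TC baselines are inferred as (rate - delta); hypothetical, not sourced.

granted, resolved = 21, 26
allow_rate = granted / resolved                 # career allowance rate
print(f"Career allow rate: {allow_rate:.0%}")   # rounds to 81%

# Statute-specific allowance-after-rejection rates and displayed deltas
stats = {
    "101": (12.3, -27.7),
    "103": (72.0, +32.0),
    "102": (2.2, -37.8),
    "112": (12.5, -27.5),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta                       # implied tech-center average
    print(f"§{statute}: {rate}% (implied TC avg {tc_avg:.1f}%)")
```

Notably, all four implied baselines come out to the same 40.0%, which is consistent with the deltas being measured against a single tech-center average estimate.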

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The present Office action is responsive to communications received on 10/31/2025. Claims 1 and 8 have been amended. Claims 1-14 are pending.

Applicant’s arguments filed on 10/31/2025 with regard to 35 USC 112(a) and (b), as seen on page 6, have been fully considered and are not fully persuasive. Examiner maintains that it is not clear how the vulnerable transaction sequence is abstracted; the Applicant merely amended how a cost function is derived.

Additionally, Applicant’s arguments, as seen on pages 6-12, with regard to 35 USC 103 over Fang et al. (US PGPub No. 20210110047 A1) in view of Church (US PGPub No. 20070168469 A1), Arai et al. (US PGPub No. 20200074468 A1), Maeda et al. (US PGPub No. 20120151449 A1), and Reddy et al. (US PGPub No. 20190303623 A1), with respect to claims 1 and 8 and their dependents, specifically with the amended limitations, have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made over Luu et al. (“Making Smart Contracts Smarter”, Year: 2016) in view of He et al. (“Learning to Fuzz from Symbolic Execution with Application to Smart Contracts”, Year: 2019), Feng et al. (“Precise attack synthesis for smart contracts”, Year: 2019), and Akca et al. (“SolAnalyser: A Framework for Analysing and Testing Smart Contracts”, Year: 2019). The Office action has been updated to reflect the claims as currently presented.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C.
112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-14 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Regarding claims 1 and 8: “…determining the cost function using an abstracted vulnerable transaction sequence…” is recited in claims 1 and 8. The specification does not reasonably convey to those skilled in the art that the inventors were in possession of the full scope of this limitation at the time of filing, in particular what types of language models are encompassed by the recited “plural types of language models” and how their outputs are used as weights.
The only related passage, on page 26 of the specification, states that “… the language model is not limited thereto and the designer may consider and apply various types of language models to determine vulnerable sequence candidate search order...”, which is a generic statement and does not describe concrete plural model types or weighting schemes. Furthermore, the specification does not describe how the outputs of such plural types of language models would be combined or how the resulting weights would be applied to abstract the vulnerable transaction sequence.

“…each of the language models being respectively trained on program-level semantic tokens and transaction-level contextual embeddings…” is also recited in claims 1 and 8. The specification does not disclose any language models trained on “program-level semantic tokens” or “transaction-level contextual embeddings,” nor does it define these terms or provide rules, algorithms, or procedures for obtaining such tokens or embeddings. While the specification describes a statistical language model over abstracted transaction sequences, it does not describe training multiple language models on the particular data domains now recited in the claims. These limitations were first introduced by amendment and therefore represent a broadening of the originally disclosed language-model concept (a single n-gram–type model over abstracted transaction sequences) to specific multi-model and multi-domain training arrangements that are not supported by the as-filed disclosure.

It is further noted that the specific recitations of (i) “plural types of language models” whose outputs are used as weights in the claimed cost function and (ii) “each of the language models being respectively trained on program-level semantic tokens and transaction-level contextual embeddings” were first introduced into independent claims 1 and 8 by amendment filed on October 31, 2025.
These amendments broaden the originally disclosed concept of a single statistical language model (e.g., an n-gram model) defined over abstracted transaction sequences to instead encompass multiple, heterogeneous language models trained on distinct data domains and combined via weighting. The as-filed specification does not describe any such plural model architecture, does not define “program-level semantic tokens” or “transaction-level contextual embeddings,” and does not disclose any algorithm or procedure for obtaining these particular data representations or training language models on them.

Claims 2-7 and 9-14 do not overcome the rejection of their respective base claims and are therefore rejected under the same grounds provided for claims 1 and 8.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 1 and 8: “…determining the cost function derived from at least one abstracted transaction sequence stored in memory and weighted by outputs…” Indefinite because the claim does not recite how the abstracted vulnerable transaction sequence is derived, nor is it clear how the vulnerable transaction sequence is abstracted.
Further, it is unclear how the abstracted vulnerable transaction sequences are weighted by the outputs of the plural types of language models, nor is it clear how the weights are obtained.

“…program-level semantic tokens and transaction-level contextual embeddings…” Indefinite because the claim does not recite how the program-level semantic tokens and transaction-level contextual embeddings are derived. A person of ordinary skill in the art would not know how the program-level semantic tokens and transaction-level contextual embeddings would be obtained.

“…obtained from symbolic execution statistics…” Indefinite because the claim does not recite how the symbolic execution statistics are derived. A person of ordinary skill in the art would not know how to define symbolic execution statistics.

Claims 2-7 and 9-14 do not overcome the rejections of their respective base claims and are therefore rejected under the same grounds provided for claims 1 and 8.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-14 are rejected under 35 U.S.C. 103 as being unpatentable over Luu et al. (“Making Smart Contracts Smarter”, Year: 2016) in view of He et al. (“Learning to Fuzz from Symbolic Execution with Application to Smart Contracts”, Year: 2019), Feng et al. (“Precise attack synthesis for smart contracts”, Year: 2019), and Akca et al. (“SolAnalyser: A Framework for Analysing and Testing Smart Contracts”, Year: 2019).

With respect to claim 1, Luu teaches a processor-implemented method of obtaining a vulnerable transaction sequence, the method comprising: (Abstract - Introduction, Pages 254-255: As a refinement, we propose ways to enhance the operational semantics of Ethereum to make contracts less vulnerable. For developers writing contracts for the existing Ethereum system, we build a symbolic execution tool called OYENTE to find potential security bugs… In this paper, we document several new security flaws of Ethereum smart contracts and give examples of real-world instances for each problem. These security flaws make contracts susceptible to abuse by several parties (including miners and contracts’ users). We believe that these flaws arise in practice because of a semantic gap between the assumptions contract writers make about the underlying execution semantics and the actual semantics of the smart contract system.)

obtaining a verification condition by performing the symbolic execution on the vulnerable transaction candidate to generate the verification condition representing execution-path constraints; (5 The OYENTE Tool, Page 262: Our analysis tool is based upon symbolic execution [31].
Symbolic execution represents the values of program variables as symbolic expressions of the input symbolic values. Each symbolic path has a path condition which is a formula over the symbolic inputs built by accumulating constraints which those inputs must satisfy in order for execution to follow that path. A path is infeasible if its path condition is unsatisfiable. Otherwise, the path is feasible.);

determining the vulnerable transaction sequence candidate as the vulnerable transaction sequence when the verification condition is satisfiable; (5.2 Implementation, Page 263: We implement Oyente in Python with roughly 4,000 lines of code. Currently, we employ Z3 [33] as our solver to decide satisfiability. Oyente faithfully simulates Ethereum Virtual Machine (EVM) code which has 64 distinct instructions in its language. Oyente is able to detect all the three security problems discussed in Section 3. We describe each component below. Core Analysis. CoreAnalysis contains sub-components to detect contracts which are TOD, timestamp-dependent or mishandled exceptions. Currently, the Explorer collects only paths which exhibit distinct flows of Ether.
Thus, we detect if a contract is TOD if it sends out Ether differently when the order of transactions changes.);

Luu does not disclose: selecting a vulnerable transaction sequence candidate in at least one program using a cost function, including applying plural types of language models to determine a vulnerable sequence candidate search order and determining the cost function derived from at least one abstracted vulnerable transaction sequence stored in memory and weighted by outputs of the plural types of the language models, each of the language models being respectively trained on program-level semantic tokens and transaction-level contextual embeddings, the calculated cost function configured to guide a search order for symbolic execution;

However, He teaches selecting a vulnerable transaction sequence candidate in at least one program using a cost function, including applying plural types of language models to determine a vulnerable sequence candidate search order and (2.3 The ILF Fuzzer, Page 534: Fuzzing Policy. We show the learned fuzzing policy in Figure 3. As an input, the policy receives semantic information derived from the contract under test and the history of executed transactions so far. It outputs a probability distribution over a set of candidate transactions which can be used to extend the sequence of transactions. The transactions generated before selecting ti are shown in the top-left table in Figure 3. For example, transaction ti−1 is a call to function setPhase(1), sent by the user with address 0x10 and 0 ether. Based on this sequence, ILF derives a feature vector for each function f of the contract (invest, setPhase, etc.).)

determining the cost function derived from at least one abstracted vulnerable transaction sequence stored in memory and weighted by outputs of the plural types of the language models, (4.2 Neural Network-based Policy, Page 537: Learning from Expert.
To train πnn, we assume a dataset D containing sequences [(α(F,ti),ti)] where α(F,ti) is the extracted feature representation and ti is a transaction produced by the symbolic execution expert at step i... We compute the cross-entropy loss based on the probabilities of choosing the same function, arguments, sender and amount as in ti . Then, we execute ti and update the hidden state of the GRU based on ti. After re-executing all transactions in the sequence, we perform back-propagation on the loss and jointly update weights of all networks. As standard, we train on batches of sequences and stop when all networks reach high classification accuracy); each of the language models being respectively trained on program-level semantic tokens and transaction-level contextual embeddings, the calculated cost function configured to guide a search order for symbolic execution; (5.1 Feature Representation of Functions, Page 537: In Table 2 we present semantic features for function which we use as an input to neural networks. Each feature is derived for a given function f and history of transactions t. The last three features characterize properties of f. In order to select effective function arguments x, we define feature Arguments. Feature Opcodes returns a vector with counts of 50 distinctive opcodes in f , measuring the functionality and complexity of f . To select the 50 distinctive opcodes, we excluded commonly used opcodes, such as those used for arithmetic and stack operations. The 50 opcodes are listed in Appendix A. Feature Name encodes the function’s name. To derive this feature, we tokenize f ’s name into separate sub-tokens, according to camelCase and pascal_case; cf. 
[5], and then map each sub-token into word2vec word embedding [41]);

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of He regarding applying natural language models to the method of Luu in order to improve the coverage or expose new vulnerabilities (e.g., He: 1 Introduction, Page 531).

Luu in view of He does not disclose: simplifying the verification condition, including or replacing some portions of the verification condition by pruning unsatisfiable sub-formulae or substituting constant expressions obtained from symbolic execution statistics; verifying whether the simplified verification condition is satisfiable when the vulnerable transaction sequence is not found as a verification result about the verification condition; determining the vulnerable transaction sequence candidate as the vulnerable transaction sequence when the verification condition is satisfiable;

However, Feng teaches simplifying the verification condition, including or replacing some portions of the verification condition by pruning unsatisfiable sub-formulae or substituting constant expressions obtained from symbolic execution statistics; (VI. Implementation, Page 10: C. SMT-based Early Pruning. In addition to hoisting, we also design a simple but effective early pruning strategy that allows SMARTSCOPY to prune infeasible symbolic programs before executing them.
The intuition behind our early pruning strategy is that all attacks expressible in SMARTSCOPY (e.g., [7], [8], [9]) invoke at least one public method that manipulates persistent storage and at least one public method that transfers cryptocurrency using the call instruction.);

verifying whether the simplified verification condition is satisfiable when the vulnerable transaction sequence is not found as a verification result about the verification condition; (VI Implementation, Page 10: SMARTSCOPY helps automate this process by searching for attacks that exploit a given vulnerability in victim contract. In other words, a successful attack executes at least one store instruction followed by at least one call instruction. We express our early pruning strategy using the following ROSETTE program…This procedure queries the solver to find out if the given symbolic program p contains any concrete attack program that executes a call after a store. This query is much faster to solve than a vulnerability query, so if p contains no feasible candidate, SMARTSCOPY does not run the vulnerability query for it.);

mapping, when the verification condition is satisfiable, an assertion label identifying the corresponding symbolic path to the vulnerable transaction candidate; and (VI Implementation, Page 9: A. Symbolic Computation Using ROSETTE. SMARTSCOPY leverages ROSETTE [17] to symbolically search for attack programs. ROSETTE is a programming language that provides facilities for symbolic evaluation. These facilities are based on three constructs: assertions, symbolic values, and (satisfiability) queries…Figure 9 shows the implementation of SMARTSCOPY in Rosette. The tool takes as input a vulnerability specification V, the components Υ of a victim program, and a bound K on the length of the attack program. Given these inputs, lines 2–4 use Υ to construct a symbolic attack program of length K.
Next, lines 5–13 run the victim’s initialization code to obtain the initial program state, i-pstate, for the attack. Then, line 14 evaluates the symbolic attack program on the initial state to obtain a symbolic output state, o-pstate. Finally, lines 15-16 use the solve query to search for a concrete attack program that satisfies the vulnerability assertion.);

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Feng regarding simplifying the verification condition through elimination to the method of Luu in view of He in order to improve scalability and precision (e.g., Feng: 1 Introduction, Page 2).

Luu in view of He and Feng does not disclose: generating and outputting, through an outputter including a display or a speaker, a vulnerability report including the assertion label and the vulnerable transaction sequence candidate as the vulnerable transaction sequence.

However, Akca teaches generating and outputting, through an outputter including a display or a speaker, a vulnerability report including the assertion label and the vulnerable transaction sequence candidate as the vulnerable transaction sequence. (VI Approach and Implementation, Page 5: Phase 3: Execution and analysis of instrumented contracts. The instrumented Solidity code with property assertions is executed in Ethereum Virtual Machine (EVM) [34]. We use the inputs created in Phase 2 to execute the contract so that each transaction and function within the contract is invoked at least once. We implemented the ExecutionValidator in nodejs (v8.11.3) as an extension to EVM to analyse the execution trace produced by EVM and to report triggered vulnerabilities, if any.
We use five different external node modules (ethereumjs-vm, ethjs-signer, ethjs, crypto-js, and web3) to combine EVM and execution trace analysis in the ExecutionValidator);

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Akca regarding generating a vulnerability report which includes the vulnerable transaction sequence candidate and assertion label to the method of Luu in view of He and Feng in order to reduce false positives and provide better support (e.g., Akca: 1 Introduction, Pages 2-3).

With respect to claim 2, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 1 (see rejection of claim 1 above) further comprising: obtaining the set of vulnerable transaction sequences; abstracting the vulnerable transaction sequence in the set of vulnerable transaction sequences. (e.g., Luu: 5.2 Implementation, Page 263: At the end of the exploration phase, we produce a set of symbolic traces. Each trace is associated with a path constraint and auxiliary data that the analyses in later phase require. The employment of a constraint solver, Z3 in particular, helps us eliminate provably infeasible traces from consideration.)

With respect to claim 3, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 1 (see rejection of claim 1 above) further comprising: simplifying the verification condition when the verification condition is obtained and before performing a verification using the verification condition. (Feng: VI. Implementation, Page 10: C. SMT-based Early Pruning. In addition to hoisting, we also design a simple but effective early pruning strategy that allows SMARTSCOPY to prune infeasible symbolic programs before executing them.
The intuition behind our early pruning strategy is that all attacks expressible in SMARTSCOPY (e.g., [7], [8], [9]) invoke at least one public method that manipulates persistent storage and at least one public method that transfers cryptocurrency using the call instruction.);

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Feng regarding simplifying the verification condition through elimination to the method of Luu in view of He and Akca in order to improve scalability and precision (e.g., Feng: 1 Introduction, Page 2).

With respect to claim 5, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 1 (see rejection of claim 1 above) further comprising: unrolling a loop in the at least one program or inlining a function to be called at each call site in the at least one program. (e.g., Luu: 5.2 Implementation, Page 263: Reentrancy Detection. We make use of path conditions in order to check for reentrancy vulnerability. At each CALL that is encountered, we obtain the path condition for the execution before the CALL is executed. We then check if such condition with updated variables (e.g., storage values) still holds (i.e., if the call can be executed again). If so, we consider this a vulnerability, since it is possible for the callee to re-execute the call before finishing it.)

With respect to claim 4, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 3 (see rejection of claim 3 above) wherein the simplifying of the verification condition comprises at least one of: excluding logical formulas irrelevant to verification from the verification condition; and performing quantifier elimination optimization.
(e.g., Feng: VI Implementation, Page 10: The early pruning strategy includes using the ROSETTE program in which the procedure queries the solver to find out if the given symbolic program p contains any concrete attack program that executes a call after a store. This query is much faster to solve than a vulnerability query, so if p contains no feasible candidate, SMARTSCOPY does not run the vulnerability query for it.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Feng regarding the quantifier elimination optimization to the method of Luu in view of He and Akca in order to enable faster synthesis (e.g., Feng: VI Implementation, Page 10).

With respect to claim 6, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 1 (see rejection of claim 1 above) wherein the at least one program comprises a smart contract. (e.g., Luu: 1 Introduction, Page 255: Oyente is a symbolic execution tool exclusively designed to analyze Ethereum smart contracts. It follows the execution model of Ethereum smart contracts [15] and directly works with Ethereum virtual machine (EVM) byte code without access to the high level representation (e.g., Solidity [16], Serpent [17]).)

With respect to claim 7, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 1 (see rejection of claim 1 above) wherein the obtaining of the verification condition by performing the symbolic execution over the vulnerable transaction sequence candidate comprises obtaining the verification condition using a verification condition generating function for the vulnerable transaction sequence candidate, (e.g., Luu: 5.2 Implementation, Page 263: CFG Builder.
CFGBuilder builds a skeletal control flow graph which contains all the basic blocks as nodes, and some edges representing jumps of which the targets can be determined by locally investigating the corresponding source nodes. However, some edges cannot be determined statically at this phase, thus they are constructed on the fly during symbolic execution in the later phase);

and the verification condition generating function uses a symbolic executor for a transaction sequence based on a symbolic executor for a transaction. (e.g., Luu: 5.2 Implementation, Page 263: At the end of the exploration phase, we produce a set of symbolic traces. Each trace is associated with a path constraint and auxiliary data that the analyses in later phase require. The employment of a constraint solver, Z3 in particular, helps us eliminate provably infeasible traces from consideration.)

With respect to claim 8, Luu teaches an apparatus for obtaining a vulnerable transaction sequence, the apparatus comprising: a storage configured to non-transitorily store at least one program; and a processor configured to: receive the at least one program, select a vulnerable transaction sequence candidate in the at least one program using a cost function, (Abstract - Introduction, Pages 254-255: As a refinement, we propose ways to enhance the operational semantics of Ethereum to make contracts less vulnerable. For developers writing contracts for the existing Ethereum system, we build a symbolic execution tool called OYENTE to find potential security bugs… In this paper, we document several new security flaws of Ethereum smart contracts and give examples of real-world instances for each problem. These security flaws make contracts susceptible to abuse by several parties (including miners and contracts’ users).
We believe that these flaws arise in practice because of a semantic gap between the assumptions contract writers make about the underlying execution semantics and the actual semantics of the smart contract system.)

obtain a verification condition by performing the symbolic execution on the vulnerable transaction candidate to generate the verification condition representing execution-path constraints; (5 The OYENTE Tool, Page 262: Our analysis tool is based upon symbolic execution [31]. Symbolic execution represents the values of program variables as symbolic expressions of the input symbolic values. Each symbolic path has a path condition which is a formula over the symbolic inputs built by accumulating constraints which those inputs must satisfy in order for execution to follow that path. A path is infeasible if its path condition is unsatisfiable. Otherwise, the path is feasible.);

determine the vulnerable transaction sequence candidate as the vulnerable transaction sequence when the verification condition is satisfiable; (5.2 Implementation, Page 263: We implement Oyente in Python with roughly 4,000 lines of code. Currently, we employ Z3 [33] as our solver to decide satisfiability. Oyente faithfully simulates Ethereum Virtual Machine (EVM) code which has 64 distinct instructions in its language. Oyente is able to detect all the three security problems discussed in Section 3. We describe each component below. Core Analysis. CoreAnalysis contains sub-components to detect contracts which are TOD, timestamp-dependent or mishandled exceptions. Currently, the Explorer collects only paths which exhibit distinct flows of Ether.
Thus, we detect if a contract is TOD if it sends out Ether differently when the order of transactions changes.);

Luu does not disclose: including applying plural types of language models to determine a vulnerable sequence candidate search order and determining the cost function derived from at least one abstracted vulnerable transaction sequence stored in memory and weighted by outputs of the plural types of the language models, each of the language models being respectively trained on program-level semantic tokens and transaction-level contextual embeddings, the calculated cost function configured to guide a search order for symbolic execution;

However, He teaches including applying plural types of language models to determine a vulnerable sequence candidate search order and (2.3 The ILF Fuzzer, Page 534: Fuzzing Policy. We show the learned fuzzing policy in Figure 3. As an input, the policy receives semantic information derived from the contract under test and the history of executed transactions so far. It outputs a probability distribution over a set of candidate transactions which can be used to extend the sequence of transactions. The transactions generated before selecting ti are shown in the top-left table in Figure 3. For example, transaction ti−1 is a call to function setPhase(1), sent by the user with address 0x10 and 0 ether. Based on this sequence, ILF derives a feature vector for each function f of the contract (invest, setPhase, etc.).)

determining the cost function derived from at least one abstracted vulnerable transaction sequence stored in memory and weighted by outputs of the plural types of the language models, (4.2 Neural Network-based Policy, Page 537: Learning from Expert. To train πnn, we assume a dataset D containing sequences [(α(F,ti),ti)] where α(F,ti) is the extracted feature representation and ti is a transaction produced by the symbolic execution expert at step i...
We compute the cross-entropy loss based on the probabilities of choosing the same function, arguments, sender and amount as in ti. Then, we execute ti and update the hidden state of the GRU based on ti. After re-executing all transactions in the sequence, we perform back-propagation on the loss and jointly update weights of all networks. As standard, we train on batches of sequences and stop when all networks reach high classification accuracy); each of the language models being respectively trained on program-level semantic tokens and transaction-level contextual embeddings, the calculated cost function configured to guide a search order for symbolic execution; (5.1 Feature Representation of Functions, Page 537: In Table 2 we present semantic features for functions which we use as an input to neural networks. Each feature is derived for a given function f and history of transactions t. The last three features characterize properties of f. In order to select effective function arguments x, we define feature Arguments. Feature Opcodes returns a vector with counts of 50 distinctive opcodes in f, measuring the functionality and complexity of f. To select the 50 distinctive opcodes, we excluded commonly used opcodes, such as those used for arithmetic and stack operations. The 50 opcodes are listed in Appendix A. Feature Name encodes the function’s name. To derive this feature, we tokenize f’s name into separate sub-tokens, according to camelCase and pascal_case; cf. [5], and then map each sub-token into a word2vec word embedding [41]);

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of He regarding applying natural language models to the method of Luu in order to improve the coverage or expose new vulnerabilities (e.g., He: 1 Introduction, Page 531).
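As an illustration of the Name feature He describes (splitting a function name into sub-tokens by camelCase and snake_case before embedding), the sub-token split can be sketched as follows. This is a hypothetical helper for illustration only, not code from any cited tool, and the word2vec embedding step is omitted:

```python
import re

def subtokens(name: str) -> list[str]:
    """Split a function name into lowercase sub-tokens at snake_case
    underscores and camelCase/PascalCase boundaries, keeping acronym
    runs (e.g., 'ERC') and digit runs as their own tokens."""
    parts = []
    for chunk in name.split("_"):
        # Alternation order matters: acronym run not followed by a
        # lowercase letter, then a capitalized or lowercase word, then digits.
        parts += re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", chunk)
    return [p.lower() for p in parts]

print(subtokens("setPhase"))       # ['set', 'phase']
print(subtokens("transfer_from"))  # ['transfer', 'from']
```

In He's pipeline each resulting sub-token would then be mapped to a word2vec vector; here the split alone shows how 'setPhase' from the quoted transaction sequence yields program-level semantic tokens.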
Luu in view of He does not disclose: simplify the verification condition, including removing or replacing some portions of the verification condition by pruning unsatisfiable sub-formulae or substituting constant expressions obtained from symbolic execution statistics; verify whether the simplified verification condition is satisfiable when the vulnerable transaction sequence is not found as a verification result about the verification condition;

However, Feng teaches simplify the verification condition, including removing or replacing some portions of the verification condition by pruning unsatisfiable sub-formulae or substituting constant expressions obtained from symbolic execution statistics; (VI. Implementation, Page 10: C. SMT-based Early Pruning. In addition to hoisting, we also design a simple but effective early pruning strategy that allows SMARTSCOPY to prune infeasible symbolic programs before executing them. The intuition behind our early pruning strategy is that all attacks expressible in SMARTSCOPY (e.g., [7], [8], [9]) invoke at least one public method that manipulates persistent storage and at least one public method that transfers cryptocurrency using the call instruction.); verify whether the simplified verification condition is satisfiable when the vulnerable transaction sequence is not found as a verification result about the verification condition; (VI. Implementation, Page 10: SMARTSCOPY helps automate this process by searching for attacks that exploit a given vulnerability in victim contract. In other words, a successful attack executes at least one store instruction followed by at least one call instruction. We express our early pruning strategy using the following ROSETTE program…This procedure queries the solver to find out if the given symbolic program p contains any concrete attack program that executes a call after a store.
This query is much faster to solve than a vulnerability query, so if p contains no feasible candidate, SMARTSCOPY does not run the vulnerability query for it.);

mapping, when the verification condition is satisfiable, an assertion label identifying the corresponding symbolic path to the vulnerable transaction candidate; and (VI. Implementation, Page 9: A. Symbolic Computation Using ROSETTE. SMARTSCOPY leverages ROSETTE [17] to symbolically search for attack programs. ROSETTE is a programming language that provides facilities for symbolic evaluation. These facilities are based on three constructs: assertions, symbolic values, and (satisfiability) queries…Figure 9 shows the implementation of SMARTSCOPY in Rosette. The tool takes as input a vulnerability specification V, the components Υ of a victim program, and a bound K on the length of the attack program. Given these inputs, lines 2–4 use Υ to construct a symbolic attack program of length K. Next, lines 5–13 run the victim’s initialization code to obtain the initial program state, i-pstate, for the attack. Then, line 14 evaluates the symbolic attack program on the initial state to obtain a symbolic output state, o-pstate. Finally, lines 15–16 use the solve query to search for a concrete attack program that satisfies the vulnerability assertion.);

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Feng regarding simplifying the verification condition through elimination to the method of Luu in view of He in order to improve scalability and precision while still permitting expressive inputs (e.g., Feng: 1 Introduction, Page 2).

Luu in view of He and Feng does not disclose: generate and output, through an outputter including a display or a speaker, a vulnerability report including the assertion label and the vulnerable transaction sequence candidate as the vulnerable transaction sequence.
However, Akca teaches generate and output, through an outputter including a display or a speaker, a vulnerability report including the assertion label and the vulnerable transaction sequence candidate as the vulnerable transaction sequence. (VI. Approach and Implementation, Page 5: Phase 3: Execution and analysis of instrumented contracts. The instrumented Solidity code with property assertions is executed in the Ethereum Virtual Machine (EVM) [34]. We use the inputs created in Phase 2 to execute the contract so that each transaction and function within the contract is invoked at least once. We implemented the ExecutionValidator in nodejs–v8.11.3–as an extension to EVM to analyse the execution trace produced by EVM and to report triggered vulnerabilities, if any. We use five different external node modules (ethereumjs-vm, ethjs-signer, ethjs, crypto-js, and web3) to combine EVM and execution trace analysis in the ExecutionValidator.);

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Akca regarding generating a vulnerability report which includes the vulnerable transaction sequence candidate and assertion label to the method of Luu in view of He and Feng in order to reduce false positives and provide better support (e.g., Akca: 1 Introduction, Pages 2-3).

With respect to claim 9, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 8 (see rejection of claim 8 above) wherein the processor is configured to abstract the vulnerable transaction sequence in the set of vulnerable transaction sequences. (e.g., Luu: 5.2 Implementation, Page 263: At the end of the exploration phase, we produce a set of symbolic traces. Each trace is associated with a path constraint and auxiliary data that the analyses in later phase require.
The employment of a constraint solver, Z3 in particular, helps us eliminate provably infeasible traces from consideration.).

With respect to claim 10, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 8 (see rejection of claim 8 above) wherein the processor is configured to simplify the verification condition when the verification condition is obtained and before performing a verification using the verification condition. (Feng: VI. Implementation, Page 10: C. SMT-based Early Pruning. In addition to hoisting, we also design a simple but effective early pruning strategy that allows SMARTSCOPY to prune infeasible symbolic programs before executing them. The intuition behind our early pruning strategy is that all attacks expressible in SMARTSCOPY (e.g., [7], [8], [9]) invoke at least one public method that manipulates persistent storage and at least one public method that transfers cryptocurrency using the call instruction.);

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Feng regarding simplifying the verification condition through elimination to the method of Luu in view of He and Akca in order to improve scalability and precision while still permitting expressive inputs (e.g., Feng: 1 Introduction, Page 2).

With respect to claim 11, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 10 (see rejection of claim 10 above) wherein the processor is configured to simplify the verification condition by performing at least one of exclusion of logical formulas irrelevant to verification from the verification condition and quantifier elimination optimization.
(e.g., Feng: VI. Implementation, Page 10: The early pruning strategy includes using the ROSETTE program in which the procedure queries the solver to find out if the given symbolic program p contains any concrete attack program that executes a call after a store. This query is much faster to solve than a vulnerability query, so if p contains no feasible candidate, SMARTSCOPY does not run the vulnerability query for it.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teaching of Feng regarding the quantifier elimination optimization to the method of Luu in view of He and Akca in order to enable faster synthesis (e.g., Feng: VI. Implementation, Page 10).

With respect to claim 12, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 8 (see rejection of claim 8 above) wherein the processor is configured to unroll a loop in the at least one program or inline a function to be called at each call site in the at least one program. (e.g., Luu: 5.2 Implementation, Page 263: Reentrancy Detection. We make use of path conditions in order to check for reentrancy vulnerability. At each CALL that is encountered, we obtain the path condition for the execution before the CALL is executed. We then check if such condition with updated variables (e.g., storage values) still holds (i.e., if the call can be executed again). If so, we consider this a vulnerability, since it is possible for the callee to re-execute the call before finishing it.).

With respect to claim 13, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 8 (see rejection of claim 8 above) wherein the at least one program comprises a smart contract. (e.g., Luu: 1 Introduction, Page 255: Oyente is a symbolic execution tool exclusively designed to analyze Ethereum smart contracts.
It follows the execution model of Ethereum smart contracts [15] and directly works with Ethereum virtual machine (EVM) byte code without access to the high level representation (e.g., Solidity [16], Serpent [17]).).

With respect to claim 14, the combination of Luu in view of He, Feng, and Akca teaches the method of claim 8 (see rejection of claim 8 above) wherein the processor is configured to obtain the verification condition using a verification condition generating function for the vulnerable transaction sequence candidate, and (e.g., Luu: 5.2 Implementation, Page 263: CFG Builder. CFGBuilder builds a skeletal control flow graph which contains all the basic blocks as nodes, and some edges representing jumps of which the targets can be determined by locally investigating the corresponding source nodes. However, some edges cannot be determined statically at this phase, thus they are constructed on the fly during symbolic execution in the later phase); the verification condition generating function uses a symbolic executor for a transaction sequence based on a symbolic executor for a transaction. (e.g., Luu: 5.2 Implementation, Page 263: Explorer. Our Explorer starts with the entry node of the skeletal CFG. At any one time, Explorer may be executing a number of symbolic states. The core of Explorer is an interpreter loop which gets a state to run and then symbolically executes a single instruction in the context of that state. This loop continues until there are no states remaining, or a user-defined timeout is reached.).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAYLOR P VU whose telephone number is (703) 756-1218. The examiner can normally be reached MON - FRI (7:30 - 5:00). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor, can be reached at (571) 270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/T.P.V./
Examiner, Art Unit 2437

/ALEXANDER LAGOR/
Supervisory Patent Examiner, Art Unit 2437

Prosecution Timeline

Oct 08, 2021: Application Filed
Dec 20, 2023: Non-Final Rejection (§103, §112)
Feb 20, 2024: Response Filed
Mar 05, 2024: Final Rejection (§103, §112)
May 10, 2024: Response after Non-Final Action
May 30, 2024: Examiner Interview (Telephonic)
Jun 12, 2024: Response after Non-Final Action
Jun 12, 2024: Request for Continued Examination
Jun 25, 2024: Response after Non-Final Action
Jun 28, 2024: Non-Final Rejection (§103, §112)
Oct 07, 2024: Response Filed
Dec 12, 2024: Final Rejection (§103, §112)
Apr 25, 2025: Response after Non-Final Action
May 19, 2025: Request for Continued Examination
May 25, 2025: Response after Non-Final Action
Aug 12, 2025: Non-Final Rejection (§103, §112)
Oct 31, 2025: Response Filed
Feb 02, 2026: Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12506662: SERVICE PROVISION METHOD, DEVICE, AND STORAGE MEDIUM (granted Dec 23, 2025; 2y 5m to grant)
Patent 12505223: System & Method for Detecting Vulnerabilities in Cloud-Native Web Applications (granted Dec 23, 2025; 2y 5m to grant)
Patent 12491837: ELECTRONIC SIGNAL BASED AUTHENTICATION SYSTEM AND METHOD THEREOF (granted Dec 09, 2025; 2y 5m to grant)
Patent 12411931: FUEL DISPENSER AUTHORIZATION AND CONTROL (granted Sep 09, 2025; 2y 5m to grant)
Patent 12399979: PROVISIONING A SECURITY COMPONENT FROM A CLOUD HOST TO A GUEST VIRTUAL RESOURCE UNIT (granted Aug 26, 2025; 2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 81%
With Interview: 94% (+12.8%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
