Prosecution Insights
Last updated: April 19, 2026
Application No.: 17/834,808
Title: Using Information Flow For Security Exploration and Analysis
Status: Non-Final OA (§103), Round 3
Filed: Jun 07, 2022
Examiner: DILUZIO, NICHOLAS JOSEPH
Art Unit: 2498 (Tech Center 2400 — Computer Networks)
Assignee: Cycuity Inc.

Grant Probability: 33% (At Risk)
OA Rounds (projected): 3-4
Time to Grant (projected): 3y 2m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 33% (4 granted / 12 resolved; -24.7% vs TC avg)
Interview Lift: +100.0% (resolved cases with interview)
Avg Prosecution: 3y 2m
Career History: 43 total applications across all art units; 31 currently pending

Statute-Specific Performance

§101: 10.4% (-29.6% vs TC avg)
§103: 61.1% (+21.1% vs TC avg)
§102: 8.8% (-31.2% vs TC avg)
§112: 19.7% (-20.3% vs TC avg)

Tech Center averages are estimates; based on career data from 12 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/04/2025 has been entered.

Response to Amendment

Examiner has fully considered Applicant's amendments to the Claims in the arguments filed on 12/04/2025. Claims 1-20 remain pending in the application. Claim objections of record are upheld herein in view of the amendments.

Response to Arguments

Applicant's arguments filed 12/04/2025, with respect to the rejections of independent claims 1, 8, and 15 and their respective dependent claims under 35 USC 103, have been fully considered and are persuasive. Therefore, the rejections have been withdrawn. However, upon further consideration, new grounds of rejection are made in view of the previously applied reference from Hu, in addition to newly applied references from Deutschbein et al. (Deutschbein, C., Meza, A., Restuccia, F., Kastner, R., & Sturton, C. (2021). Isadora. Proceedings of the 5th Workshop on Attacks and Solutions in Hardware Security, 5–15. https://doi.org/10.1145/3474376.3487286), hereinafter Deutschbein, and Thykkoottathil et al. (Thykkoottathil, Subin, and Nagesh Ranganath (2018). Making Security Verification “SECURE”. DVCon 2018 – Design & Verification Conference, 2018. Accessed at: https://dvcon-proceedings.org/document/making-security-verification-secure/), hereinafter Thykkoottathil.
Specifically, Deutschbein and Thykkoottathil combine to teach the newly added limitations “performing an information flow analysis process … including populating an information flow signal database that stores security results representing to which hardware modules of the hardware design the design asset flowed; determining, by querying the information flow signal database storing the security results with queries that each specify one or more individual hardware modules and security rule violations associated with the one or more hardware modules”; “a particular module named by a particular query representing a violation of the one or more security rules”; and “a visual indication of when the flow of the design asset through the particular module violated one or more of the security rules”.

Regarding applicant’s argument beginning on P. 8 of Applicant Arguments, asserting that Hu does not teach or suggest a presentation with “a listing of the plurality of modules through which the design asset flowed”, the argument is moot in view of the newly applied reference from Thykkoottathil, which teaches a visual presentation of an asset flow through a hardware design in a graph and/or waveform display, each listing hardware modules through which an asset flowed during analysis.

Claim Objections

Claims 3, 4, 6, 7, 10, 11, 13, 14, 17, 18, and 20 are objected to because of the following informalities: Each of the claims listed above contains an instance of “the tracked design asset”, for which there is no antecedent basis. The limitation could be rewritten as “the design asset”, or the independent claims could be amended to include “a tracked design asset”.
In claims 4, 11, and 18, the limitation “the listing of modules” should read “the listing of the plurality of modules” for clarity/consistency of claim language. In claims 5, 12, and 19, the limitation “a signal within the selected module” should read “one of the signals within the selected module” to clarify that the signal is among the illustrated signals within the selected module. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 8, and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hu et al. (US 20180032760 A1), hereinafter Hu, in view of Deutschbein et al. (Deutschbein, C., Meza, A., Restuccia, F., Kastner, R., & Sturton, C. (2021). Isadora. Proceedings of the 5th Workshop on Attacks and Solutions in Hardware Security, 5–15. https://doi.org/10.1145/3474376.3487286), hereinafter Deutschbein, and Thykkoottathil et al. (Thykkoottathil, Subin, and Nagesh Ranganath (2018). Making Security Verification “SECURE”. DVCon 2018 – Design & Verification Conference, 2018. Accessed from: https://dvcon-proceedings.org/document/making-security-verification-secure/), hereinafter Thykkoottathil.
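As an editorial aside before the claim-by-claim analysis: several of the citations that follow turn on Hu's gate-level information flow tracking (GLIFT), in which each gate is paired with logic that computes a taint bit for its output. The idea can be sketched in a few lines of Python; the function name and (value, taint) encoding below are illustrative, not drawn from Hu.

```python
# Precise taint propagation for a 2-input AND gate, in the spirit of GLIFT:
# a tainted input taints the output only when the other input does not
# already force the result (i.e., the other input is 1, or is itself tainted).

def and_glift(a: int, a_t: int, b: int, b_t: int) -> tuple[int, int]:
    """Return (value, taint) for AND(a, b) given taint bits a_t and b_t."""
    value = a & b
    taint = (a_t & b_t) | (a_t & b) | (b_t & a)
    return value, taint

# An untainted controlling 0 masks the tainted input: no information flows.
print(and_glift(1, 1, 0, 0))  # (0, 0)
# With b = 1, the tainted input a determines the output: taint propagates.
print(and_glift(1, 1, 1, 0))  # (1, 1)
```

Composing such per-gate rules over a netlist is what lets GLIFT report precisely when a tracked asset (e.g., Hu's key) reaches an output such as Antena.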
Regarding Claim 1:

Hu teaches a computer-implemented method (Hu – Paragraph [0003]: The present application describes systems and methods relating to information flow tracking and detection of unintentional design flaws of digital devices and microprocessor systems) comprising:

receiving a set of security rules for a hardware design having information flow tracking logic (Hu – Paragraph [0046]: According to the implementations, security properties for a hardware design can be received in a high-level security language 213. Implementing techniques for employing a high-level programming language for specifying secure information flow and/or security properties are described in connection with FIG. 1. In the implementations, information flow control mechanisms can operate to associate the security labels 210, 212 with the data and the hardware components, or resources, within the hardware security design 200. For example, security property 213 includes labels 214, 215 and specifies a hardware-level information flow. The security property 213 specifies a particular restriction to implement a secure information flow within a hardware design);

performing an information flow analysis process to determine where a design asset specified by the security rules flowed through a plurality of modules in the hardware design (Hu – Paragraph [0029]: Gate Level Information Flow Tracking (GLIFT) is an information flow tracking technology that provides the capability for analyzing the flow of information in a hardware design by tracking the data as it moves throughout the system; and Paragraph [0050]: downstream implementation operations for configuring the hardware can be realized directly from the security properties 213 as specified in the high-level security language, and without requiring the creation of a security lattice 220. According to the implementation, the high-level security language can be employed as the mechanism for mapping to security labels of a hardware design.
For example, operations defined by the high-level security language can be mapped to labels of a hardware design, thereby translating the hardware design into a logic that is enabled for information flow analysis … GLIFT is employed as a security platform to implement information flow analysis on the hardware configuration);

determining … that the flow of the design asset through a particular module of the plurality of modules violated one or more of the security rules (Hu – Figure 6B: Example results of a GLIFT logic simulation; and Paragraph [0085]: The simulation 600 results serve to illustrate that GLIFT precisely captures when and where key 652 leakage happened (which functional testing and verification could not do). A designer could use these results to identify the location of Trojans throughout the design by using formal proofs on the GLIFT logic to backtrack from Antena to the key; and Paragraph [0013]: The methods and systems described can employ information flow tracking techniques so as to provide gate-level logic to detect various security threats, for example hardware Trojans, which violate security properties that can be specified by designers and relating to hardware elements that may be exposed to vulnerabilities (e.g., received from untrusted sources), such as third-party IP cores. Additionally, the methods and systems described leverage a precise gate level information flow model that can be described with standard hardware description language (HDL) and verified using conventional design mechanisms; and Paragraph [0058]: As illustrated in FIG. 4A, GLIFT can be employed as the IFT technology. In this case GLIFT precisely measures and controls all logical flows from Boolean gates. Moreover, GLIFT can be used to craft secure hardware architectures and detect security violations from timing channels.
Also, GLIFT can be used to formally verify that an information flow adheres to security properties related to confidentiality and integrity … Counterexamples found during formal verification reveal harmful information flows that point to design flaws or malicious hardware Trojans that cause the system to leak sensitive information and violate data integrity; and Paragraph [0059]: The techniques diagramed in FIG. 4A can be generally characterized as having three main parts: information flow tracking (e.g., GLIFT), detection of unintentional design flaws (e.g., hardware Trojans), and the derivation of security theorems to formally prove properties. FIG. 4A illustrates a logic synthesis 410 tool that compiles an IP core 405 design to a gate-level netlist 415. Thereafter, the gate-level information-flow tracking (GLIFT) logic 420 is automatically generated. Each gate is mapped to a GLIFT logic library 425, which can be completed in linear time. The GLIFT logic is formally verified 435 against a security property 440 that the designer has written. If it passes verification (as illustrated by the arrow labeled Pass) there is no Trojan 430. If it does not (illustrated by the arrow labeled Fail), a counterexample 445 is generated, which is used to functionally test 450 the GLIFT logic to derive Trojan behavior 455);

and generating a user interface (Hu – Paragraph [0105]: To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input; and Paragraph [0106]: Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components) presentation that presents [a listing of the plurality of] modules through which the design asset flowed and, for a particular module … a visual indication of when the flow of the design asset through the particular module violated one or more of the security rules (Hu – Paragraph [0013]: The methods and systems described can employ information flow tracking techniques so as to provide gate-level logic to detect various security threats, for example hardware Trojans, which violate security properties that can be specified by designers and relating to hardware elements that may be exposed to vulnerabilities (e.g., received from untrusted sources), such as third-party IP cores. Additionally, the methods and systems described leverage a precise gate level information flow model that can be described with standard hardware description language (HDL) and verified using conventional design mechanisms; and Paragraph [0058]: As illustrated in FIG. 4A, GLIFT can be employed as the IFT technology. In this case GLIFT precisely measures and controls all logical flows from Boolean gates.
Moreover, GLIFT can be used to craft secure hardware architectures and detect security violations from timing channels. Also, GLIFT can be used to formally verify that an information flow adheres to security properties related to confidentiality and integrity … Counterexamples found during formal verification reveal harmful information flows that point to design flaws or malicious hardware Trojans that cause the system to leak sensitive information and violate data integrity; and Paragraph [0059]: The techniques diagramed in FIG. 4A can be generally characterized as having three main parts: information flow tracking (e.g., GLIFT), detection of unintentional design flaws (e.g., hardware Trojans), and the derivation of security theorems to formally prove properties. FIG. 4A illustrates a logic synthesis 410 tool that compiles an IP core 405 design to a gate-level netlist 415. Thereafter, the gate-level information-flow tracking (GLIFT) logic 420 is automatically generated. Each gate is mapped to a GLIFT logic library 425, which can be completed in linear time. The GLIFT logic is formally verified 435 against a security property 440 that the designer has written. If it passes verification (as illustrated by the arrow labeled Pass) there is no Trojan 430. If it does not (illustrated by the arrow labeled Fail), a counterexample 445 is generated, which is used to functionally test 450 the GLIFT logic to derive Trojan behavior 455; and Figures 6A and 6B: Example benchmark and example results of a GLIFT logic simulation associated with the benchmark; and Paragraph [0084]: FIG. 6B is a diagram of example results from GLIFT logic simulation 600 in accordance with the disclosed techniques. In some cases, a goal of a simulation 600 is to reveal how the key 652 leaks to the Antena 657 output. The key 652 flows to the Antena 657 signal, when Antena_t 658 is HIGH, denoted by the black rectangles within the boxes with dashed lines, which denote the times when Antena_t 658 is HIGH. 
To have no leakage, Antena_t 658 must always be LOW; and Paragraph [0085]: The simulation 600 results serve to illustrate that GLIFT precisely captures when and where key 652 leakage happened (which functional testing and verification could not do). A designer could use these results to identify the location of Trojans throughout the design by using formal proofs on the GLIFT logic to backtrack from Antena to the key).

Hu does not expressly teach performing an information flow analysis process … including populating an information flow signal database that stores security results representing to which hardware modules of the hardware design the design asset flowed; determining, by querying the information flow signal database storing the security results with queries that each specify one or more individual hardware modules and security rule violations associated with the one or more hardware modules, [that the flow of the design asset through a particular module of the plurality of modules violated one or more of the security rules].

However, Deutschbein teaches performing an information flow analysis process … including populating an information flow signal database that stores security results representing to which hardware modules of the hardware design the design asset flowed (Deutschbein – Section 2.1: Isadora uses IFT at the register transfer level [6] to track data flow between registers … Tracking proceeds as follows: for each signal s in the design, a new tracking signal s^T is added along with the logic needed to track how information propagates through a design. Once the tracking signals and tracking logic are added to the design, one or more signals may be set as the information source by initializing their associated tracking signals to a nonzero value; and Section 3 and 3.1: Isadora instruments the design with IFT logic and runs the instrumented design in simulation using the user-provided set of testbenches.
The result is a trace set that specifies the value of every design signal and every tracking signal at each clock cycle during simulation … To generate a trace set, the design is instrumented with IFT logic and then executed in simulation with a testbench or sequence of testbenches providing input values to the design … The state σ_i of the design at time i is defined by a list of triples describing the current value of every design signal and corresponding tracking signal in the instrumented design … The end result is a set of traces for design D and testbench T: T_D^T = {τ_src, τ_src′, τ_src′′, …}. Each trace in this set describes how information can flow from a single input signal to the rest of the signals in the design. Taken together, this set of traces describes how information flows through the design during execution of the testbench T; and Section 5.4: Information flow hardware CWEs describe source signals, sink signals, and possibly conditions. CWEs provide high level descriptions, but Isadora targets an RTL definition. To apply these high level descriptions to RTL, we first group signals for a design by inspecting Verilog files and, if available, designer notes.
With the groups established, we label every property by which group-to-group flows they contain … We use these groups to find CWE-relevant, low-level signals as sources, sinks, and conditions in an Isadora property; Examiner’s Comment: the trace set(s) generated based on the flow tracking taught by Deutschbein are analogous to the claimed flow signal database in that they mirror the capability to store tuples representing the flow signals of information through a hardware design over time);

determining, by querying the information flow signal database storing the security results with queries that each specify one or more individual hardware modules and security rule violations associated with the one or more hardware modules (Deutschbein – Abstract: Isadora is a methodology for creating information flow specifications of hardware designs. The methodology combines information flow tracking and specification mining to produce a set of information flow properties that are suitable for use during the security validation process … Section 2: Isadora generates two styles of information flow properties: no-flow properties, in which there is no flow of information between two design elements; and conditional-flow properties, in which there exists some flow of information between two design elements, but only when the design is in a certain state; Section 3: Then, Isadora uses an inference engine (Daikon [20]) to infer, for every flow that occurred, the predicates that specify the conditions under which the flow occurred; and Section 3.3: In the third phase, Isadora finds the conditions under which a particular flow will occur … In order to isolate the conditions for information flow between two registers, Isadora uses S_flows to find all the trace times i at which information flows from src to s during execution of the testbench.
The corresponding trace(s) are then decomposed to produce a set of trace slices that are two clock cycles in length, one for each time i … Using trace slices, or trace windows of length two, allows dynamic invariant detection to generate predicates specifying design state both immediately prior to and concurrent with the occurrence of some flow; Examiner’s Comment: Deutschbein’s filtering of a trace set to pinpoint instances of information flow relative to conditions (including no-flow conditions) mirrors the claimed capability to query a flow signal database for locations/times at which property violations occurred during a flow),

that the flow of the design asset through a particular module of the plurality of modules [violated one or more of the security rules] (Deutschbein – Section 3: Then, Isadora uses an inference engine (Daikon [20]) to infer, for every flow that occurred, the predicates that specify the conditions under which the flow occurred; and Section 3.3: In the third phase, Isadora finds the conditions under which a particular flow will occur … In order to isolate the conditions for information flow between two registers, Isadora uses S_flows to find all the trace times i at which information flows from src to s during execution of the testbench. The corresponding trace(s) are then decomposed to produce a set of trace slices that are two clock cycles in length, one for each time i … Using trace slices, or trace windows of length two, allows dynamic invariant detection to generate predicates specifying design state both immediately prior to and concurrent with the occurrence of some flow).

It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Hu, further incorporating Deutschbein to arrive at the conclusion of the claimed invention.
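Deutschbein's trace set, as quoted above, is a per-cycle record of every signal's value and tracking value that is later searched for flow times and sliced into two-cycle windows. A minimal Python sketch of that search follows; the names (`flow_times`, `trace_slices`) and the dict encoding are hypothetical, not drawn from Isadora.

```python
# Each clock cycle is a dict mapping signal name -> (value, taint).
# A flow from the source to `sink` has occurred at cycle i when the
# sink's tracking (taint) value is nonzero at that cycle.

def flow_times(trace, sink):
    """Cycles at which tainted data reaches `sink`."""
    return [i for i, state in enumerate(trace) if state[sink][1] != 0]

def trace_slices(trace, times):
    """Two-cycle windows ending at each flow time, mirroring the slices
    handed to dynamic invariant detection."""
    return [trace[max(i - 1, 0):i + 1] for i in times]

# Toy trace modeled on Hu's key -> Antena example: the source is tainted
# from cycle 0, and the taint first reaches the sink at cycle 2.
trace = [
    {"key": (1, 1), "antena": (0, 0)},
    {"key": (1, 1), "antena": (0, 0)},
    {"key": (1, 1), "antena": (1, 1)},
]
print(flow_times(trace, "antena"))  # [2]
```

Querying such a structure for a named sink and its flow times is the sense in which the examiner reads the trace set as a queryable "information flow signal database".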
One would be motivated to incorporate Deutschbein’s teaching of precise information flow tracking techniques to identify moments during a flow at which flow conditions/properties apply and should be ensured into Hu’s method for identifying security violations in a hardware design. This additional functionality would enhance Hu’s method with more precise security rules along with a method for determining the time and location during the flow at which rules were applied/violated.

The combination of Hu and Deutschbein does not expressly teach a listing of the plurality of modules. However, Thykkoottathil teaches a user interface presentation that presents a listing of the plurality of modules through which the design asset flowed (Thykkoottathil – Figure 3: a graph view of information flow from a Source to a Destination of a hardware design, including defined hardware modules and accessible nodes in each of the modules throughout the design). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Hu and Deutschbein, further incorporating Thykkoottathil to arrive at the conclusion of the claimed invention. One would be motivated to incorporate Thykkoottathil’s teaching to display a visual representation of data flow of an asset through all modules of a hardware design into Hu and Deutschbein’s method for identifying security violations in a hardware design. This combination improves the method with further user convenience and efficiency in identifying proper and improper information flows.

Regarding Claim 8:

Claim 8 is a system claim with limitations corresponding to those of method Claim 1. Therefore, Claim 8 is rejected with the same rationale as that of the rejection of Claim 1.
Hu further teaches A system comprising: one or more computers (Hu – Paragraph [0097]: The data processing apparatus 800 also includes hardware or firmware devices including one or more processors 812, one or more additional devices 814, a computer readable medium 816, a communication interface 818, and one or more user interface devices 820) and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising (Hu – Paragraph [0097]: Each processor 812 is capable of processing instructions for execution within the data processing apparatus 800. In some implementations, the processor 812 is a single or multi-threaded processor. Each processor 812 is capable of processing instructions stored on the computer readable medium 816 or on a storage device such as one of the additional devices 814).

Regarding Claim 15:

Claim 15 is a non-transitory storage media claim with limitations corresponding to those of method Claim 1 and system Claim 8. Therefore, Claim 15 is rejected with the same rationale as that of the rejection of Claim 1 and Claim 8. Hu further teaches one or more non-transitory computer storage media encoded with computer program instructions that when executed by one or more computers cause the one or more computers to perform operations comprising (Hu – Paragraph [0097]: Each processor 812 is capable of processing instructions for execution within the data processing apparatus 800. In some implementations, the processor 812 is a single or multi-threaded processor. Each processor 812 is capable of processing instructions stored on the computer readable medium 816 or on a storage device such as one of the additional devices 814).

Claim(s) 2-7, 9-14, and 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hu in view of Deutschbein, Thykkoottathil, and Modelsim (Mentor Graphics Corporation. (2016).
ModelSim® User's Manual (Software Version 10.5b). Mentor Graphics Corporation. Retrieved from https://faculty-web.msoe.edu/johnsontimoj/Common/FILES/modelsim_user.pdf), hereinafter Modelsim.

Regarding Claim 2:

The combination of Hu, Deutschbein, and Thykkoottathil teaches the method of claim 1. The combination of Hu, Deutschbein, and Thykkoottathil does not expressly teach wherein the user interface presentation visually illustrates how long the design asset was held by each module. However, Modelsim teaches wherein the user interface presentation visually illustrates how long the design asset was held by each module (Modelsim – P. 359: Event Time — the time intervals that show each object value change as a separate event and that shows the relative order in which these changes occur; and Figure 9-8: example of waveform view of event time and/or delta time). It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Hu, Deutschbein, and Thykkoottathil, further incorporating Modelsim to arrive at the conclusion of the claimed invention. One would be motivated to incorporate Modelsim’s teaching of an interface that provides a user with an in-depth display of a user-specified circuit design simulation into Hu, Deutschbein, and Thykkoottathil’s method for identifying security violations in a hardware design. This combined functionality would provide a user with precision insight into potential flaws and security vulnerabilities in a tested hardware design.

Regarding Claim 3:

The combination of Hu, Deutschbein, Thykkoottathil, and Modelsim teaches the method of claim 2. Modelsim further teaches wherein the user interface presentation visually distinguishes the tracked design asset from other signals (Modelsim – P.
374: Note that the buttons in this dialog box allow you to determine the display of signals you want to put into an expression: • List only Select Signals — list only those signals that are currently selected in the parent window. • List All Signals — list all signals currently available in the parent window). The motivation to combine the arts is the same as that of Claim 2.

Regarding Claim 4:

The combination of Hu, Deutschbein, and Thykkoottathil teaches the method of Claim 1. The combination of Hu, Deutschbein, and Thykkoottathil does not expressly teach further comprising: receiving a selection of a particular module of the listing of modules; and generating a waveform view that illustrates signals within the selected module and visually distinguishes signals carrying the tracked design asset within the module. However, Modelsim teaches further comprising: receiving a selection of a particular module of the listing of modules (Modelsim – P. 428: A primary use of the Dataflow window is exploring the “physical” connectivity of your design. One way of doing this is by expanding the view from process to process. This allows you to see the drivers/readers of a particular signal, net, or register. You can expand the view of your design using menu commands or your mouse. To expand with the mouse, simply double click a signal, register, or process. Depending on the specific object you click, the view will expand to show the driving process and interconnect, the reading process and interconnect, or both. Alternatively, you can select a signal, register, or net, and use one of the toolbar buttons or drop down menu commands described in Table 10-1); and generating a waveform view that illustrates signals within the selected module and visually distinguishes signals carrying the tracked design asset within the module (Modelsim – P. 433, Figure 10-8: Wave Viewer displaying inputs and outputs of a selected process).
It would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to modify Hu, Deutschbein, and Thykkoottathil, further incorporating Modelsim to arrive at the conclusion of the claimed invention. One would be motivated to incorporate Modelsim’s teaching of an interface that provides a user with an in-depth display of a user-specified circuit design simulation into Hu, Deutschbein, and Thykkoottathil’s method for identifying security violations in a hardware design. This addition would provide yet further insight into potential flaws and security vulnerabilities in a tested hardware design.

Regarding Claim 5:

The combination of Hu, Deutschbein, Thykkoottathil, and Modelsim teaches the method of claim 4. Modelsim further teaches further comprising generating a waveform view that illustrates forward loads from a signal within the selected module (Modelsim – P. 433, Figure 10-8: Wave Viewer displaying inputs and outputs of a selected process). The motivation to combine the arts is the same as that of Claim 4.

Regarding Claim 6:

The combination of Hu, Deutschbein, Thykkoottathil, and Modelsim teaches the method of claim 5. Modelsim further teaches further comprising: receiving a user interface selection of a particular signal (Modelsim – P. 374: Note that the buttons in this dialog box allow you to determine the display of signals you want to put into an expression: • List only Select Signals — list only those signals that are currently selected in the parent window. • List All Signals — list all signals currently available in the parent window); and generating a path view that illustrates a path taken by the tracked design asset through the design (Modelsim – P. 431, Figure 10-6: illustration of a highlighted path of a tracked signal). The motivation to combine the arts is the same as that of Claim 4.
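The displays cited from Hu (Figure 6B) and Modelsim both reduce to marking the intervals at which a signal, or its tracking signal, is HIGH. Purely as an illustration (this is not the claimed interface or Modelsim's viewer, and the signal names are toy values), such a view can be approximated textually:

```python
# Render each signal as one row of characters, one column per clock cycle:
# '#' where the signal is 1 and '_' where it is 0.

def render_wave(name, samples):
    return f"{name:>10} " + "".join("#" if v else "_" for v in samples)

# Toy tracking signals: leakage occurs wherever antena_t is HIGH.
signals = {
    "key_t":    [1, 1, 1, 1, 1, 1],
    "antena_t": [0, 0, 1, 1, 0, 1],
}
for name, samples in signals.items():
    print(render_wave(name, samples))
```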
Regarding Claim 7: The combination of Hu, Deutschbein, Thykkoottathil, and Modelsim teaches the method of claim 6. Modelsim further teaches wherein the path view illustrates how the tracked design asset flows from an origin through a plurality of design resources to the particular signal (Modelsim – P. 431, Figure 10-6: illustration of a highlighted path of a tracked signal). The motivation to combine the arts is the same as that of Claim 4.

Regarding Claim 9: Claim 9 is a system claim with limitations corresponding to those of method Claim 2. Therefore, Claim 9 is rejected with the same combination and rationale as that of the rejection of Claim 2.

Regarding Claim 10: Claim 10 is a system claim with limitations corresponding to those of method Claim 3. Therefore, Claim 10 is rejected with the same combination and rationale as that of the rejection of Claim 3.

Regarding Claim 11: Claim 11 is a system claim with limitations corresponding to those of method Claim 4. Therefore, Claim 11 is rejected with the same combination and rationale as that of the rejection of Claim 4.

Regarding Claim 12: Claim 12 is a system claim with limitations corresponding to those of method Claim 5. Therefore, Claim 12 is rejected with the same combination and rationale as that of the rejection of Claim 5.

Regarding Claim 13: Claim 13 is a system claim with limitations corresponding to those of method Claim 6. Therefore, Claim 13 is rejected with the same combination and rationale as that of the rejection of Claim 6.

Regarding Claim 14: Claim 14 is a system claim with limitations corresponding to those of method Claim 7. Therefore, Claim 14 is rejected with the same combination and rationale as that of the rejection of Claim 7.

Regarding Claim 16: Claim 16 is a system claim with limitations corresponding to those of method Claim 2 and system Claim 9. Therefore, Claim 16 is rejected with the same combination and rationale as that of the rejection of Claim 2 and Claim 9.
Regarding Claim 17: Claim 17 is a system claim with limitations corresponding to those of method Claim 3 and system Claim 10. Therefore, Claim 17 is rejected with the same combination and rationale as that of the rejection of Claim 3 and Claim 10.

Regarding Claim 18: Claim 18 is a system claim with limitations corresponding to those of method Claim 4 and system Claim 11. Therefore, Claim 18 is rejected with the same combination and rationale as that of the rejection of Claim 4 and Claim 11.

Regarding Claim 19: Claim 19 is a system claim with limitations corresponding to those of method Claim 5 and system Claim 12. Therefore, Claim 19 is rejected with the same combination and rationale as that of the rejection of Claim 5 and Claim 12.

Regarding Claim 20: Claim 20 is a system claim with limitations corresponding to those of method Claim 6 and system Claim 13. Therefore, Claim 20 is rejected with the same combination and rationale as that of the rejection of Claim 6 and Claim 13.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hu et al. (Hu, W., Ardeshiricham, A., Wu, L., Kastner, R. (2022). Integrating Information Flow Tracking into High-Level Synthesis Design Flow. In: Katkoori, S., Islam, S.A. (eds) Behavioral Synthesis for Hardware Security. Springer, Cham. https://doi.org/10.1007/978-3-030-78841-4_16) teaches a method of information flow tracking to identify and display counterexamples to assertions on hardware designs. Yoon et al. (US 20160217029 A1) teaches methods and systems for tracking data flows and identifying tainted operations based on heuristics. Cherupalli et al. (US 20190102563 A1) teaches a method for receiving an application and a corresponding security policy for the execution of the application in order to detect information flow violations during simulated execution.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS JOSEPH DILUZIO whose telephone number is (703)756-1229. The examiner can normally be reached Mon-Fri, 7:30 AM-5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Yin-Chen Shaw, can be reached at 571-272-8878. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NICHOLAS JOSEPH DILUZIO/
Examiner, Art Unit 2498

/YIN CHEN SHAW/
Supervisory Patent Examiner, Art Unit 2498
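The references cited in the Conclusion describe, at a high level, checking a security policy against information flows observed during simulated execution. Purely as a hedged sketch of that kind of check (the trace format, function name, and signal names below are assumptions for illustration, not taken from Hu, Yoon, or Cherupalli), a no-flow policy such as "secret data must not reach a public output" can be evaluated over an ordered list of flow events:

```python
# Illustrative no-flow policy check over a simulated execution trace:
# flag the first event that moves secret-derived data to a public sink.
# All names here are hypothetical.
def find_violation(trace, secret_sources, public_sinks):
    """trace: ordered (src, dst) flow events from simulation.
    Returns the first event moving secret-derived data to a
    public sink, or None if the policy holds."""
    secret = set(secret_sources)
    for src, dst in trace:
        if src in secret:
            if dst in public_sinks:
                return (src, dst)   # policy violation
            secret.add(dst)         # label propagates with the data
    return None

trace = [("aes_key", "round_reg"), ("round_reg", "debug_port")]
print(find_violation(trace, {"aes_key"}, {"debug_port"}))
# ('round_reg', 'debug_port')
```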

Prosecution Timeline

Jun 07, 2022: Application Filed
Jun 24, 2024: Non-Final Rejection — §103
Oct 09, 2024: Applicant Interview (Telephonic)
Oct 31, 2024: Examiner Interview Summary
Nov 04, 2024: Response Filed
May 29, 2025: Final Rejection — §103
Dec 04, 2025: Request for Continued Examination
Dec 18, 2025: Response after Non-Final Action
Jan 08, 2026: Non-Final Rejection — §103
Apr 13, 2026: Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596792: DATA ENCRYPTION DETECTION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12490087: AUTHENTICATION SERVER FUNCTION SELECTION IN AN AUTHENTICATION AND KEY AGREEMENT
Granted Dec 02, 2025 (2y 5m to grant)
Patent 12475218: METHOD AND SYSTEM FOR IDENTIFYING A COMPROMISED POINT-OF-SALE TERMINAL NETWORK
Granted Nov 18, 2025 (2y 5m to grant)
Patent 12367440: ARTIFICIAL INTELLIGENCE-BASED SYSTEM AND METHOD FOR FACILITATING MANAGEMENT OF THREATS FOR AN ORGANIZATON
Granted Jul 22, 2025 (2y 5m to grant)
Patent 11966466: UNIFIED WORKLOAD RUNTIME PROTECTION
Granted Apr 23, 2024 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
33%
Grant Probability
99%
With Interview (+100.0%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
