Prosecution Insights
Last updated: April 19, 2026
Application No. 17/492,736

Firmware Protection

Non-Final OA: §103, §112
Filed: Oct 04, 2021
Examiner: VU, TAYLOR P
Art Unit: 2437
Tech Center: 2400 — Computer Networks
Assignee: Jfrog Ltd.
OA Round: 6 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 6-7
To Grant: 3y 3m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 81%, above average (21 granted / 26 resolved; +22.8% vs TC avg)
Interview Lift: +12.8% (moderate), over resolved cases with interview
Avg Prosecution: 3y 3m typical timeline; 30 applications currently pending
Total Applications: 56 across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 72.0% (+32.0% vs TC avg)
§102: 2.2% (-37.8% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Based on career data from 26 resolved cases; Tech Center averages are estimates.

Office Action

§103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/02/2026 has been entered.

Response to Arguments

The present Office action is responsive to communications filed on 01/02/2026. Claims 1, 15, and 20 have been amended. Claims 1-7, 9-15, and 20-24 are currently pending. Applicant's arguments filed on 01/02/2026 with respect to the rejections of Claims 1-5, 7, 9-10, 15, 20-21, and 24 under 35 U.S.C. 103 over Monastyrsky et al. (US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1) and Harel et al. (US PGPub No. 20180349598-A1), as seen in pages 7-10 of the Remarks, have been fully considered and are persuasive. Therefore, those rejections have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls," 2006) and Atighetchi et al. (US PGPub No. 20200252418-A1).

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-7, 9-13, 15, and 20-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, for failure to comply with the written description requirement. The claims contain subject matter that was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s)) had possession of the claimed invention at the time of filing. Claims 1, 15, and 20 recite, in relevant part, "perform[ing] a responsive action" in response to determining that a system call event violates one or more constraints on execution of the system call. The claims thus encompass, on their face, a broad functional genus of "responsive actions" triggered by a constraint-violating system call event.
The specification describes several concrete examples of actions taken when a constraint is violated, including terminating the system call event, terminating a process executing the system call, terminating the system, terminating execution of the firmware, disabling the system call, reporting the system call event, deploying a mask over inputs, and preventing a scenario from occurring, each followed by “or the like.” These examples all fall within a recognizable family of mitigation or logging operations directed to addressing abnormal or attack-type system call behavior. However, the claims are not expressly limited to this mitigation-focused family. As drafted, “responsive action” reads on essentially any action taken by the system upon detecting that the constraints are violated, including actions that are qualitatively different from the particular mitigation or logging species disclosed. The application does not describe other types of “responsive actions” beyond the listed mitigation examples and does not identify common structural or operational features that would allow a person of ordinary skill in the art to recognize the full scope of the asserted genus of “responsive actions” as now claimed. See MPEP 2161 and 2163 (explaining that generic or functional language must be supported by either a representative number of species or identification of common features such that the full genus is shown to be possessed). In Ariad and its progeny, the Federal Circuit explained that for a functionally defined genus, the written description requirement demands more than recitation of a desired result. The specification must demonstrate that the inventor actually possessed the claimed genus, for example by disclosing a representative number of species commensurate with the claim scope or by describing common structural or functional features of the genus. See MPEP 2161 (citing Ariad and Enzo). 
Here, the disclosed species cluster around a narrower genus of security-mitigation responses (terminate, disable, report, prevent abnormal behavior), but the claim language “responsive action” is broad enough to cover additional, more remote types of system behavior that are neither exemplified nor characterized in the specification. Although Nautilus v. Biosig primarily addresses the definiteness requirement of 35 U.S.C. 112(b), the Supreme Court emphasized that a patent is invalid for indefiniteness if its claims, read in light of the specification delineating the patent, fail to inform, with reasonable certainty, those skilled in the art about the scope of the invention. That same disclosure-based framework underlies the written description inquiry in 35 U.S.C. 112(a): the four corners of the specification must show that the inventor had possession of the subject matter of the claim as filed. See MPEP 2163. In the present case, while the specification supports a genus of mitigation or logging type responses to constraint-violating system calls, it does not reasonably convey possession of every conceivable “responsive action” that could fall within the very broad functional language of the claim. The generic term “responsive action,” without any express limitation in the claim to mitigation or logging operations, therefore extends beyond what the specification demonstrates was actually invented. Generic functional language of this breadth, unsupported by either a representative number of species commensurate with the full scope or clearly articulated common features defining that full scope, does not satisfy the written description requirement of 35 U.S.C. 112(a). See MPEP 2161 and 2163. How to overcome this rejection: The easiest way would be to incorporate the limitations of claim 14 into the independent claims 1, 15, and 20. 
However, the applicant is invited to amend the claims in any other manner to more closely align "responsive action" with the mitigation-focused actions actually described in the specification (for example, terminating or disabling the system call, terminating the associated process or system, reporting or logging the event, preventing the abnormal scenario from occurring, or like security mitigation actions), or to otherwise point to specific disclosure demonstrating possession of the full breadth of "responsive action" as currently claimed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-5, 7, 9-10, 15, 20, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Monastyrsky et al. (US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1) and Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls," 2006).
With respect to claim 1, Monastyrsky teaches a method comprising: obtaining metadata about a firmware, wherein the metadata comprises one or more constraints on execution of a system call by the firmware (¶0011: the trigger describes conditions (constraints) along with an event associated with a file execution attempt, so the trigger comprises constraints; ¶0051 further details the multiple conditions (one or more constraints) a trigger can have); identifying a system call event, wherein the system call event comprises an invocation of the system call (Figure 3, step 320 illustrates identifying calls; ¶0093 further details opening a file in which an event is discovered, and ¶0050 illustrates the event invoking a call that notifies the OS, which together teach identifying a system call event); determining that the system call event violates the one or more constraints on the execution of the system call (¶0011 describes responding to conditions during execution, and ¶0095 further details Figure 3, step 320, wherein the conditions are satisfied upon execution (the determined call-event violation) of a file with vulnerabilities described in the previous step, Figure 3, step 310); and, in response to said determining that the system call event violates the one or more constraints, performing a responsive action (Figure 3, step 320 illustrates identification of the event and step 340 illustrates the step wherein a response (responsive action) is conducted when the conditions are fulfilled; ¶0060 further details the response action when the conditions of the trigger are met). Monastyrsky does not disclose: wherein the one or more constraints comprise a constraint on a memory location which stores the system call being invoked, wherein the constraint on the memory location is a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call;
during execution of the firmware. However, Balinsky teaches wherein the one or more constraints comprise a constraint on a memory location which stores the system call being invoked (¶0016: the policy may include a policy identifier, an action associated with system call(s) to be captured (stored), a policy condition (constraint) that the document contents and/or metadata must satisfy for the policy to become applicable, and a policy action that will be implemented if the policy condition is satisfied); wherein the constraint on the memory location is a constraint that the memory location (¶0025: if metadata indicates that the document data is in a publicly known file format, then the document data and metadata are automatically passed having stored policies; the appropriate policies are then applied to captured data 210 depending on the result of the parsing) is included in a collection of one or more authorized memory locations with respect to the system call (¶0018-0019: as seen in Figures 1 and 2, the local device operating system 111 is also coupled to a policy decision engine 109; the policy decision engine 109 determines whether the application 101 transmitting the system call is a known/authorized application 200 or an unknown/unauthorized application 201; if the application 101 is recognized, then its behavior is known to be safe); during execution of the firmware (¶0008: the application 101 can be any user application(s) or routine (e.g., software, firmware) that can be run by the operating system 111 (e.g., WINDOWS) or used by the local device 100 over the network 161 while running on another computer (e.g., server)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Balinsky regarding this constraint in the method of Monastyrsky in order to provide data leak prevention (Balinsky: ¶0001-0002 & 0006).
Monastyrsky in view of Balinsky does not disclose: wherein the constraint on the memory location is a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call; wherein the one or more constraints comprise a mapping of one or more memory locations to one or more respective system calls of the firmware, each location having a corresponding mapped memory address; wherein the system calls are allowed to originate only from corresponding mapped addresses of memory locations where the one or more respective system calls are stored. Although Balinsky discloses a constraint with respect to a memory location and execution of firmware, it does not explicitly disclose a collection of one or more authorized memory locations. However, Rajagopalan teaches a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call (Pages 217-218, 2.1 Policy Expressiveness: In system call monitoring, each system call in a program has an associated system call policy that specifies properties that must be satisfied when the call is executed. The program’s overall policy is the collection of its system call policies. In principle, a system call monitor should be able to enforce any computable policy… The properties expressed by a system call policy can be viewed as constraints on the execution of the system call. A typical policy, for example, may require that a system call be constrained to a specific system call number (name), or must be invoked from a particular memory address in the program (call site), or both.
); wherein the one or more constraints comprise a mapping of one or more memory locations to one or more respective system calls of the firmware, each location having a corresponding mapped memory address; (Page 217-218, 2.1 Policy Expressiveness: System call policies can also constrain more global behaviors, such as the acceptable order of system call executions. For example, the collection of system call policies for a program might constrain the application’s system call trace to be a path in the call graph. In this case, each system call policy could include a list of system calls that are possible predecessors for the given call. ) wherein the system calls are allowed to originate only from corresponding mapped addresses of memory locations where the one or more respective system calls are stored. (Pages 221-222, 3.4 System Call Checking: The kernel enforces an application’s system call policies at runtime. When an authenticated system call occurs, the kernel receives the normal arguments of the system call—the system call number and the arguments to the original unmodified call—and the five additional arguments—the policy descriptor (polDes), the block number of the system call (blockID), the set of predecessors stored as an authenticated string (predSet), a pointer (lbPtr) to the lastBlock policy state and last block MAC (lbMAC), and the call MAC (callMAC). Furthermore, it can determine the call site based on the return address of the kernel interrupt handler.); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Rajagopalan regarding a constraint that the memory location is included in a collection of one or more authorized memory location with respect to the system call to the method of Monastyrsky in view of Balinsky in order to mitigate attacks such as buffer overflow. (Rajagopalan: Page 217-218). 
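The call-site constraint that the rejection attributes to Rajagopalan can be illustrated with a short sketch. This is a hypothetical toy model, not code from the application or from any cited reference: the metadata maps each system call to the set of firmware memory addresses authorized to invoke it, and a violation triggers a reporting-type responsive action. All names and addresses here are made-up assumptions.

```python
# Hypothetical sketch of a call-site constraint: firmware metadata maps each
# system call to the collection of memory addresses (call sites) from which
# it may legally be invoked. Names, addresses, and layout are illustrative.
AUTHORIZED_CALL_SITES = {
    "open":  {0x08041F20, 0x08042A10},
    "write": {0x08043B00},
}

def violates_constraint(syscall: str, call_site: int) -> bool:
    """True if the invocation originates outside the authorized collection."""
    return call_site not in AUTHORIZED_CALL_SITES.get(syscall, set())

def on_syscall_event(syscall: str, call_site: int) -> str:
    # A violating event triggers a responsive action; reporting is one of the
    # mitigation-type species named in the specification.
    if violates_constraint(syscall, call_site):
        return f"report: {syscall} invoked from unauthorized address {call_site:#x}"
    return "allow"

print(on_syscall_event("open", 0x08041F20))  # allow
print(on_syscall_event("open", 0x0805FFFF))  # reported as a violation
```

In this toy shape, claim 9's enforcement step is simply the membership test: a call invoked from a location not in the collection fails the check.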
With respect to claim 2, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above). Monastyrsky in view of Balinsky does not disclose, but Rajagopalan teaches, wherein the one or more constraints comprise a constraint over an argument value of an argument of the system call (Rajagopalan, Pages 217-218, 2.1 Policy Expressiveness: The properties expressed by a system call policy can be viewed as constraints on the execution of the system call… It might also specify allowed values for the arguments, using either concrete values (e.g., “5” or “/dev/console”) or patterns (e.g., “/tmp/*”). Policies of these types are used in many existing system call monitoring systems. For example, Systrace supports policies in which the system call number and the argument values can be specified, the latter using either patterns or concrete values.), wherein the constraint over the argument value excludes at least one legal value of the argument (Rajagopalan, Page 219, 3.2 Authenticated System Calls, Policy variations: The policy must describe which system call properties are included in the policy for a given system call. For example, some of the argument values may be constrained while others may remain unconstrained… The need to support policy variation is addressed by constructing for each system call a policy descriptor, a 32-bit integer that encodes information about which properties of the system call are constrained by its policy. This descriptor uses bits to indicate whether the value of each argument is determined by the policy.
It also indicates whether the control flow policy for the call is specified.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Rajagopalan regarding a constraint over an argument value of an argument of the system call in the method of Monastyrsky in view of Balinsky in order to mitigate attacks such as buffer overflow (Rajagopalan: Pages 217-218). With respect to claim 3, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the one or more constraints comprise a constraint on a call chain that invoked the system call (Monastyrsky: ¶0012 describes that breached vulnerabilities generate an event with a chain of calls, and ¶0086 further details that one of the conditions of a trigger comprises generation of a chain of calls); wherein the call chain comprises an ordered sequence of calling functions, wherein the method comprises (Monastyrsky: ¶0011 describes that a function call comprises a previous sequence of calls, and ¶0093 describes the chain of function calls in the form of call and return addresses preceding an event): analyzing a call stack to identify a current sequence of calling functions in the call stack (Monastyrsky: ¶0012 describes the call stack being analyzed based on the event's chain of call functions, and ¶0094 further details identifying the stack in the form of a sequence of function calls during the exploitation); and determining that the current sequence of calling functions violates the constraint on the call chain (Monastyrsky: ¶0012 describes analysis of the calling functions against the conditions in the trigger, and ¶0095 further details how the conditions are fulfilled on the chain of function calls).
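The call-chain constraint addressed in claim 3 can be sketched in the same toy style. The predecessor-set idea loosely follows the policy descriptions quoted from Rajagopalan, but every name and data shape below is an assumption made for illustration only.

```python
# Toy call-chain constraint: for each system call, the metadata lists the
# calling-function sequences (ordered, outermost first) that may legally
# appear on the stack when the call is invoked. Illustrative only.
ALLOWED_CALL_CHAINS = {
    "send": {
        ("main", "net_task"),
        ("main", "ota_update", "net_task"),
    },
}

def chain_violates(syscall: str, call_stack: tuple) -> bool:
    """Compare the current sequence of calling functions to the constraint."""
    allowed = ALLOWED_CALL_CHAINS.get(syscall)
    if allowed is None:
        return False  # no call-chain constraint registered for this call
    return call_stack not in allowed

print(chain_violates("send", ("main", "net_task")))     # False: authorized chain
print(chain_violates("send", ("injected_shellcode",)))  # True: violating chain
```

A real monitor would walk return addresses on the stack rather than function names, but the membership test against an authorized set is the same shape.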
With respect to claim 4, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the one or more constraints comprise a temporal constraint on arguments provided with the system call (Monastyrsky: ¶0067 describes an example of a constraint derived from a memory page); wherein the temporal constraint is determined based on previous executions of the system call (Monastyrsky: ¶0067: as seen in Figure 1, the interceptor 130 has discovered an event, a first instance of an execution on a certain memory page (based on previous executions, with the inclusion of the first). The interceptor analyzes the memory page; if the page does not correspond to the loaded modules and libraries, it is possible to save the address of the memory page from which the first instance of execution occurred, and also the contents of the memory page itself for further analysis, e.g., in the log 150.). With respect to claim 5, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the one or more constraints comprise a constraint on a calling function that invoked the system call (Rajagopalan, Page 226, 5.1 Argument Patterns: Many system call monitoring systems allow policies that specify that an argument of a system call should match a pattern given by a regular expression. This is particularly useful for temporary files, whose names are often computed dynamically using library functions like mkstemp. A typical example of a pattern is “/tmp/*.” Patterns can be specified by the security administrator or could be partially automated by using static and dynamic profiling. The patterns can be stored as authenticated strings.
The associated MAC checking will ensure an attack cannot modify the patterns or substitute different patterns for a system call.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Rajagopalan regarding a constraint on a calling function that invoked the system call in the method of Monastyrsky in view of Balinsky in order to ensure an attack cannot modify the patterns or substitute patterns for a system call (Rajagopalan: Page 226). With respect to claim 7, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 5 (see rejection of claim 5 above), wherein the constraint comprises a constraint on a memory location of the calling function that invoked the system call, wherein one or more memory locations are authorized to be calling functions of the system call (Rajagopalan, Pages 217-218, 2.1 Policy Expressiveness: In system call monitoring, each system call in a program has an associated system call policy that specifies properties that must be satisfied when the call is executed. The program’s overall policy is the collection of its system call policies. In principle, a system call monitor should be able to enforce any computable policy… The properties expressed by a system call policy can be viewed as constraints on the execution of the system call. A typical policy, for example, may require that a system call be constrained to a specific system call number (name), or must be invoked from a particular memory address in the program (call site), or both.); wherein the constraint defines that the memory location of the calling function must belong to the one or more memory locations of calling functions that are authorized to invoke the system call (Rajagopalan, Pages 217-218, 2.1 Policy Expressiveness, quoted above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Rajagopalan regarding a constraint on a memory location of the calling function that invoked the system call, wherein one or more memory locations are authorized to be calling functions of the system call, in the method of Monastyrsky in view of Balinsky in order to ensure an attack cannot modify the patterns or substitute patterns for a system call (Rajagopalan: Page 226). With respect to claim 9, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the metadata comprises the collection of one or more authorized memory locations which store the system call being invoked (Monastyrsky: ¶0064: the interceptor that intercepts a function call checks the dynamic library or module that was in the address space); wherein said determining that the system call event violates the one or more constraints (Rajagopalan, Page 218, 2.3 Policy Enforcement: A system call monitor checks each system call against its policy at runtime and either accepts or rejects the call. This security-critical check can be performed in user space or in the kernel.
Systems that intercept system calls in user space [18], [19] can be vulnerable to corruption by such exploits as buffer overflows when applied to programs written in unsafe languages.) on the execution of the system call comprises determining that the system call is invoked from a location that is not in the collection. (Pages 217-218, 2.1 Policy Expressiveness: System call policies can also constrain more global behaviors, such as the acceptable order of system call executions. For example, the collection of system call policies for a program might constrain the application’s system call trace to be a path in the call graph. In this case, each system call policy could include a list of system calls that are possible predecessors for the given call); It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Rajagopalan regarding determining that the system call event violates the one or more constraints with respect to the system call to the method of Monastyrsky in view of Balinsky in order to mitigate attacks such as buffer overflow. (Rajagopalan: Page 217-218). 
With respect to claim 10, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the metadata is determined using at least one of: static analysis performed without executing the firmware; dynamic analysis performed by executing the firmware prior to the execution of the firmware, and prior to identifying the system call event; symbolic execution performed while tracking symbolic values of the firmware (Monastyrsky: ¶0077-0078 give an example wherein heuristic analysis is used as a dynamic analysis owing to the incorporation of virtual machines to isolate and test a suspicious program or file (further corroborated in ¶0030 and Figure 1), performed by executing the firmware prior to the execution of the firmware (an actual event) and prior to identifying the system call event (the analysis discovers a trigger that accompanies an event); ¶0049 describes execution after checking for vulnerabilities, teaching the "at least one of" aspect being claimed); and concolic execution performed while tracking both symbolic and concrete values of the firmware (Monastyrsky: ¶0039 describes execution recorded in the form of a log, with ¶0038 disclosing processor registers; ¶0093-0094 describe steps 320-340 of Figure 3 tracking a chain of function calls, which can incorporate both symbolic and concrete values).
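Claim 10's static-analysis option (deriving the metadata without executing the firmware) could look roughly like the following sketch. The disassembly format, the `svc` convention, and all names are assumptions made up for illustration; nothing here is taken from the application or the cited references.

```python
import re

# Hypothetical static analysis: scan a disassembly listing of the firmware
# (never executing it) and record, for each system call, the addresses where
# it is invoked, yielding constraint metadata usable at runtime.
DISASSEMBLY = """\
08041f20: svc #5 ; open
08042a10: svc #5 ; open
08043b00: svc #4 ; write
"""

def extract_call_sites(listing: str) -> dict[str, set[int]]:
    """Map each system call name to the set of addresses invoking it."""
    sites: dict[str, set[int]] = {}
    for line in listing.splitlines():
        m = re.match(r"([0-9a-f]+):\s+svc\s+#\d+\s+;\s+(\w+)", line)
        if m:
            sites.setdefault(m.group(2), set()).add(int(m.group(1), 16))
    return sites

metadata = extract_call_sites(DISASSEMBLY)
print({name: sorted(map(hex, addrs)) for name, addrs in metadata.items()})
```

Dynamic, symbolic, or concolic analysis would produce metadata of the same shape by observing or exploring executions instead of scanning the binary.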
With respect to claim 15, Monastyrsky teaches an apparatus comprising a processor and coupled memory, said processor being adapted to: obtain metadata about a firmware, wherein the metadata comprises one or more constraints on execution of a system call by the firmware (¶0011: the trigger describes conditions (constraints) along with an event associated with a file execution attempt, so the trigger comprises constraints; ¶0051 further details the multiple conditions (one or more constraints) a trigger can have); during execution of the firmware, identify a system call event, wherein the system call event comprises an invocation of the system call (Figure 3, step 320 illustrates identifying calls; ¶0093 further details opening a file in which an event is discovered, and ¶0050 illustrates the event invoking a call that notifies the OS); determine that the system call event violates the one or more constraints on the execution of the system call (¶0011 describes responding to conditions during execution, and ¶0095 further details Figure 3, step 320, wherein the conditions are satisfied upon execution (the determined call-event violation) of a file with vulnerabilities described in the previous step, Figure 3, step 310); and, in response to determining that the system call event violates the one or more constraints, perform a responsive action (Figure 3, step 320 illustrates the step of identification of the event and step 340 illustrates the step wherein a response is conducted when the conditions are fulfilled; ¶0060 further details the response action when the conditions of the trigger are met).
Monastyrsky does not disclose: wherein the one or more constraints comprise a constraint on a memory location which stores the system call being invoked, wherein the constraint on the memory location is a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call. However, Balinsky teaches wherein the one or more constraints comprise a constraint on a memory location which stores the system call being invoked (¶0016: the policy may include a policy identifier, an action associated with system call(s) to be captured (stored), a policy condition (constraint) that the document contents and/or metadata must satisfy for the policy to become applicable, and a policy action that will be implemented if the policy condition is satisfied); wherein the constraint on the memory location is a constraint that the memory location (¶0025: if metadata indicates that the document data is in a publicly known file format, then the document data and metadata are automatically passed having stored policies; the appropriate policies are then applied to captured data 210 depending on the result of the parsing) is included in a collection of one or more authorized memory locations with respect to the system call (¶0018-0019: as seen in Figures 1 and 2, the local device operating system 111 is also coupled to a policy decision engine 109; the policy decision engine 109 determines whether the application 101 transmitting the system call is a known/authorized application 200 or an unknown/unauthorized application 201; if the application 101 is recognized, then its behavior is known to be safe); during execution of the firmware (¶0008: the application 101 can be any user application(s) or routine (e.g., software, firmware) that can be run by the operating system 111 (e.g., WINDOWS) or used by the local device 100 over the network 161 while running on another computer (e.g., server)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Balinsky regarding constraints to the method of Monastyrsky in order to provide data leak prevention (Balinsky: ¶0001-0002 & 0006). Monastyrsky in view of Balinsky does not disclose: a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call; wherein the one or more constraints comprise a mapping of one or more memory locations to one or more respective system calls of the firmware, each location having a corresponding mapped memory address; wherein the system calls are allowed to originate only from corresponding mapped addresses of memory locations where the one or more respective system calls are stored. Although Balinsky discloses a constraint with respect to a memory location and execution of firmware, Balinsky does not explicitly disclose a collection of one or more authorized memory locations. However, Rajagopalan teaches a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call (Pages 217-218, 2.1 Policy Expressiveness: in system call monitoring, each system call in a program has an associated system call policy that specifies properties that must be satisfied when the call is executed; the program's overall policy is the collection of its system call policies; in principle, a system call monitor should be able to enforce any computable policy… The properties expressed by a system call policy can be viewed as constraints on the execution of the system call; a typical policy, for example, may require that a system call be constrained to a specific system call number (name), or must be invoked from a particular memory address in the program (call site), or both
); wherein the one or more constraints comprise a mapping of one or more memory locations to one or more respective system calls of the firmware, each location having a corresponding mapped memory address (Pages 217-218, 2.1 Policy Expressiveness: system call policies can also constrain more global behaviors, such as the acceptable order of system call executions; for example, the collection of system call policies for a program might constrain the application's system call trace to be a path in the call graph; in this case, each system call policy could include a list of system calls that are possible predecessors for the given call); wherein the system calls are allowed to originate only from corresponding mapped addresses of memory locations where the one or more respective system calls are stored (Pages 221-222, 3.4 System Call Checking: the kernel enforces an application's system call policies at runtime; when an authenticated system call occurs, the kernel receives the normal arguments of the system call, i.e., the system call number and the arguments to the original unmodified call, and the five additional arguments: the policy descriptor (polDes), the block number of the system call (blockID), the set of predecessors stored as an authenticated string (predSet), a pointer (lbPtr) to the lastBlock policy state and last block MAC (lbMAC), and the call MAC (callMAC); furthermore, it can determine the call site based on the return address of the kernel interrupt handler). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Rajagopalan regarding a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call to the method of Monastyrsky in view of Balinsky in order to mitigate attacks such as buffer overflows (Rajagopalan: Pages 217-218).
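For illustration only (this sketch is not part of the record or the cited references), the call-site policy that Rajagopalan describes, in which each system call may be invoked only from authorized memory addresses, can be summarized as a per-call allowlist check. All names and addresses below are hypothetical.

```python
# Hypothetical sketch of a per-system-call allowlist of authorized
# call-site addresses, in the spirit of the cited call-site policies.

AUTHORIZED_SITES = {
    # system call name -> set of addresses it may legitimately be invoked from
    "open":  {0x4005F0, 0x400720},
    "write": {0x400810},
}

def check_syscall_site(syscall: str, call_site: int) -> bool:
    """Return True only if the call originates from an authorized address."""
    allowed = AUTHORIZED_SITES.get(syscall)
    return allowed is not None and call_site in allowed

# A call from an unmapped address violates the constraint:
assert check_syscall_site("open", 0x4005F0) is True
assert check_syscall_site("open", 0xDEADBEEF) is False
assert check_syscall_site("mmap", 0x400000) is False  # no policy -> not authorized
```

A monitor enforcing such a policy would trigger the responsive action whenever `check_syscall_site` returns False.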
With respect to claim 20, Monastyrsky teaches a computer program product comprising a non-transitory computer readable storage medium retaining program instructions, which program instructions, when read by a processor, cause the processor to: (¶0103-0104: aspects of the present disclosure may be a system, a method, and/or a computer program product; the computer program product may include a computer readable storage medium having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure) obtain metadata about a firmware, wherein the metadata comprises one or more constraints on execution of a system call by the firmware (¶0011: the trigger describes conditions (constraints) along with an event associated with a file execution attempt, such that the trigger comprises constraints; ¶0051 further details the multiple conditions (one or more constraints) that a trigger can have); during execution of the firmware, identify a system call event, wherein the system call event comprises an invocation of the system call (Figure 3, step 320 illustrates identifying calls; ¶0093 further details opening a file, whereby an event is discovered; ¶0050 illustrates the event invoking a call that notifies the OS); determine that the system call event violates the one or more constraints on the execution of the system call (¶0011 describes an event responding to conditions during execution; ¶0095 further details Figure 3, step 320, wherein the conditions are satisfied upon execution (determined call event violation) of a file having the vulnerabilities described in previous step 310); and, in response to determining that the system call event violates the one or more constraints, perform a responsive action (Figure 3, step 320 illustrates the step of identifying the event, and step 340 illustrates the step wherein a response is conducted when the conditions are fulfilled; ¶0060 further
details the responsive action when the conditions of the trigger are fulfilled). Monastyrsky does not disclose: wherein the metadata comprises one or more constraints on execution of a system call by the firmware, wherein the one or more constraints comprise a constraint on a memory location which stores the system call being invoked, wherein the constraint on the memory location is a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call. However, Balinsky teaches wherein the metadata comprises one or more constraints comprising a constraint on a memory location which stores the system call being invoked (¶0016: the policy may include a policy identifier, an action associated with system call(s) to be captured (stored), a policy condition (constraint) that the document contents and/or metadata must satisfy for the policy to become applicable, and a policy action that will be implemented if the policy condition is satisfied); wherein the constraint on the memory location is a constraint that the memory location (¶0025: if metadata indicates that the document data is in a publicly known file format, then the document data and metadata are automatically passed, having stored policies; the appropriate policies are then applied to captured data 210 depending on the result of the parsing) is included in a collection of one or more authorized memory locations with respect to the system call (¶0018-0019: as seen in Figures 1 and 2, the local device operating system 111 is also coupled to a policy decision engine 109; the policy decision engine 109 determines whether the application 101 transmitting the system call is a known/authorized application 200 or an unknown/unauthorized application 201; if the application 101 is recognized, then its behavior is known to be safe
) during execution of the firmware (¶0008: the application 101 can be any user application(s) or routine (e.g., software, firmware) that can be run by the operating system 111 (e.g., WINDOWS) or used by the local device 100 over the network 161 while running on another computer (e.g., server)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Balinsky regarding constraints to the method of Monastyrsky in order to provide data leak prevention (Balinsky: ¶0001-0002 & 0006). Monastyrsky in view of Balinsky does not disclose: a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call; wherein the one or more constraints comprise a mapping of one or more memory locations to one or more respective system calls of the firmware, each location having a corresponding mapped memory address; wherein the system calls are allowed to originate only from corresponding mapped addresses of memory locations where the one or more respective system calls are stored. Although Balinsky discloses a constraint with respect to a memory location and execution of firmware, Balinsky does not explicitly disclose a collection of one or more authorized memory locations. However, Rajagopalan teaches a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call (Pages 217-218, 2.1 Policy Expressiveness: in system call monitoring, each system call in a program has an associated system call policy that specifies properties that must be satisfied when the call is executed; the program's overall policy is the collection of its system call policies.
In principle, a system call monitor should be able to enforce any computable policy… The properties expressed by a system call policy can be viewed as constraints on the execution of the system call; a typical policy, for example, may require that a system call be constrained to a specific system call number (name), or must be invoked from a particular memory address in the program (call site), or both); wherein the one or more constraints comprise a mapping of one or more memory locations to one or more respective system calls of the firmware, each location having a corresponding mapped memory address (Pages 217-218, 2.1 Policy Expressiveness: system call policies can also constrain more global behaviors, such as the acceptable order of system call executions; for example, the collection of system call policies for a program might constrain the application's system call trace to be a path in the call graph; in this case, each system call policy could include a list of system calls that are possible predecessors for the given call); wherein the system calls are allowed to originate only from corresponding mapped addresses of memory locations where the one or more respective system calls are stored (Pages 221-222, 3.4 System Call Checking: the kernel enforces an application's system call policies at runtime; when an authenticated system call occurs, the kernel receives the normal arguments of the system call, i.e., the system call number and the arguments to the original unmodified call, and the five additional arguments: the policy descriptor (polDes), the block number of the system call (blockID), the set of predecessors stored as an authenticated string (predSet), a pointer (lbPtr) to the lastBlock policy state and last block MAC (lbMAC), and the call MAC (callMAC).
Furthermore, it can determine the call site based on the return address of the kernel interrupt handler). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Rajagopalan regarding a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call to the method of Monastyrsky in view of Balinsky in order to mitigate attacks such as buffer overflows (Rajagopalan: Pages 217-218). With respect to claim 24, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the collection of one or more authorized memory locations with respect to the system call comprises one or more authorized memory addresses (Rajagopalan, Page 222, 4.1 Implementation Overview: PLTO is an optimization tool and, as a result, it requires relocatable binaries (i.e., binaries in which the locations of addresses are marked), so that addresses can be adjusted as code transformations move data and code locations; our installer currently inherits this requirement, although it should be straightforward to generate policies for binaries without relocation information; one impact of this restriction is that the binaries tested in the next section had to be compiled from source, since binaries shipped with standard Linux and Unix distributions do not contain relocation information; note that the installer outputs non-relocatable, statically linked binaries, since the policies include the absolute locations (authorized memory locations) of all system calls. Further, Figure 3 (seen on page 224) provides the results of generating ASC policies for four programs: the three from above and tar, the Unix archiving program.
The sites column indicates the number of separate system call locations in the program, calls indicates the number of different system calls, and args gives the total number of arguments (not including the system call number) from all the call sites; the auth column lists the number of arguments that could be determined by the static analysis done by the installer and that could be authenticated by the basic approach). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Rajagopalan regarding a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call to the method of Monastyrsky in view of Balinsky in order to mitigate attacks such as buffer overflows (Rajagopalan: Pages 217-218). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Monastyrsky et al. (US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1), Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls", (Year: 2006)), and Yamamura et al. (US PGPub No. 20170344406-A1). With respect to claim 6, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 5 (see rejection of claim 5 above) but does not teach wherein the constraint on the calling function that invoked the system call is a temporal constraint. However, Yamamura teaches wherein the constraint on the calling function that invoked the system call is a temporal constraint (¶0068-0069: when the file access manager manages reading of the file through a system call, such as "read", the reader reads the file from the storing device and transmits the file through the file access manager to the application that called the system call.
More specifically, the reader writes the data from the storing device into a memory area set by the application at the time of the system call (temporal)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Yamamura regarding the constraint on the calling function that invoked the system call being a temporal constraint to the method of Monastyrsky in view of Balinsky and Rajagopalan in order to reduce the load on, or delay of, processing (Yamamura: ¶0023-0026). Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Monastyrsky et al. (US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1), Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls", (Year: 2006)), Guidry et al. (US PGPub No. 20160357958-A1), and Yavuz et al. (US PGPub No. 20200380124-A1). With respect to claim 11, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the metadata is determined using dynamic analysis, wherein said dynamic analysis comprises: (Monastyrsky: ¶0071: the analysis module uses heuristic analysis, and Figure 3, step 340 illustrates log generation).
Monastyrsky in view of Balinsky and Rajagopalan discloses the subject matter as discussed above but does not disclose: obtaining a log generated during a training phase of the firmware in which the firmware is executed, wherein during the training phase of the firmware, states and activities of the firmware are continuously polled and recorded in the log, wherein during the training phase, the firmware is tested with a plurality of system call events; correlating results of the training phase with states and activities recorded in the log to determine normal and abnormal behavior of the firmware during the plurality of system call events; and, based on said correlating, defining the one or more constraints on the execution of the system call. However, Guidry teaches wherein during the training phase [of the firmware,] states and activities of the firmware are continuously polled and recorded in the log (¶0084: the security system analyzes one or more applications, in one embodiment, to develop a list of predefined execution paths for a process such as an application; in one embodiment, the security system analyzes each application during a training phase to log or otherwise develop a list of authorized execution paths); wherein during the training phase, the firmware is tested with a plurality of system call (¶0084: the security system analyzes one or more applications, in one embodiment, to develop a list of predefined execution paths for a process such as an application) events (¶0084-0085: the security system responds to an event notification by analyzing it to determine an execution path associated with the event.
The security system may determine the parent function for each function, beginning with a current function such as the thread); correlating results of the training phase with states and activities recorded in the log to determine normal and abnormal behavior of the firmware during the plurality of system call events (¶0085-0086: the security system compares the identified execution path with the predefined execution paths to determine if the execution is legitimate); and, based on said correlating, defining the one or more constraints on the execution of the system call (¶0086: a process of analyzing an application to generate a list of predefined or legitimate execution paths used by the security system to identify malicious modification of code; in one embodiment, the process of Figure 8 can be used to generate a list of predefined and/or whitelisted paths used in the process of Figure 7; at step 352, the security system accesses the application to be analyzed; at step 354, the security system analyzes the application's viable execution factors (constraints)). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Guidry with regards to the training phase to the method of Monastyrsky in view of Balinsky and Rajagopalan in order to protect the client's system from exploits of memory-corruption vulnerabilities (Guidry: ¶0042-0044).
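For illustration only (not part of the record), the training-phase flow described above, i.e., recording states and activities in a log and correlating them to derive constraints, can be sketched as follows. The record format and function names are hypothetical.

```python
# Hypothetical sketch: derive authorized call-site constraints from a
# training-phase log of (system call, call site, normal/abnormal) records.
from collections import defaultdict

def derive_constraints(training_log):
    """training_log: iterable of (syscall, call_site, is_normal) records."""
    normal = defaultdict(set)
    abnormal = defaultdict(set)
    for syscall, site, is_normal in training_log:
        (normal if is_normal else abnormal)[syscall].add(site)
    # Authorized sites: observed during normal behavior and never flagged abnormal.
    return {sc: sites - abnormal.get(sc, set()) for sc, sites in normal.items()}

log = [("read", 0x1000, True), ("read", 0x1000, True),
       ("read", 0x2000, False), ("write", 0x3000, True)]
constraints = derive_constraints(log)
assert constraints["read"] == {0x1000}
assert constraints["write"] == {0x3000}
```

The resulting mapping would then serve as the metadata consulted at runtime when a system call event is identified.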
Monastyrsky in view of Balinsky, Rajagopalan, and Guidry teaches the subject matter above but does not disclose: during a training phase of the firmware in which the firmware is executed. However, Yavuz teaches during a training phase of the firmware in which the firmware is executed (¶0018: a firmware analysis using symbolic execution in accordance with an embodiment of the present disclosure can, during a training phase, learn a protocol model from known firmware, apply the model to recognize protocol-relevant fields, and automatically detect functionality within unknown firmware). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Yavuz with regards to the firmware to the method of Monastyrsky in view of Balinsky, Rajagopalan, and Guidry in order to protect the client's system from exploits of memory-corruption vulnerabilities (Yavuz: ¶0042-0044). Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Monastyrsky et al. (US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1), Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls", (Year: 2006)), Guidry et al. (US PGPub No. 20160357958-A1), Yavuz et al. (US PGPub No. 20200380124-A1), and Trivellato et al. (US PGPub No. 20200412758-A1). With respect to claim 12, the combination of Monastyrsky in view of Balinsky, Rajagopalan, Guidry, and Yavuz teaches the method of claim 11 (see rejection of claim 11 above) comprising: based on said correlating, mapping one or more abnormal behaviors of the firmware (Monastyrsky: ¶0077 describes an event relating to conditions that try to exploit vulnerabilities, but does not disclose the association with one or more attack types)
[with one or more associated attack types utilized for testing] the firmware during the training phase; and (Yavuz: ¶0089-0091: the firmware analysis system and method of the present disclosure utilizes source files of the firmware to map program variables into protocol fields during the training phase). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Yavuz with regards to the firmware to the method of Monastyrsky in view of Balinsky, Rajagopalan, and Guidry in order to protect the client's system from exploits of memory-corruption vulnerabilities (Yavuz: ¶0042-0044). based on said mapping the one or more abnormal behaviors, [determining a risk score of the system call event.] (Guidry: ¶0034: may analyze an application during a training phase to develop an outline of the application; ¶0085: the security system compares the identified execution path with the predefined execution paths). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Guidry with regards to the training phase to the method of Monastyrsky in view of Balinsky, Rajagopalan, and Yavuz in order to protect the client's system from exploits of memory-corruption vulnerabilities (Guidry: ¶0042-0044). Monastyrsky in view of Balinsky, Rajagopalan, Guidry, and Yavuz does not disclose: with one or more associated attack types utilized for testing the firmware during the training phase; based on said mapping the one or more abnormal behaviors, determining a risk score of the system call event. However, Trivellato teaches with one or more associated attack types utilized for testing the firmware during the training phase (¶0090-0091: a cyber-attack likelihood value and cyber-attack impact value associated with the entity are determined.
The cyber-attack likelihood value can be based on one or more alerts, one or more vulnerabilities, direct connectivity with a public entity or host, or proximity to compromised or vulnerable entities; the alerts that may be used for determining a cyber-attack likelihood can be from multiple categories including, but not limited to, exposures, reconnaissance, exploitation, internal activity/lateral movement, command and control, and execution); based on said mapping the one or more abnormal behaviors, determining a risk score of the system call event (¶0018-0019: the type of risk used in computing the risk score can include cyber-security or cyber-attack risk and operational failure risk; the cyber-attack likelihood factor can include alerts, vulnerabilities, direct connectivity with a public entity, and proximity to infected/vulnerable entities). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Trivellato with regards to attack types and risk score to the method of Monastyrsky in view of Balinsky, Rajagopalan, Guidry, and Yavuz in order to assist in the prioritization of risks that are more dangerous to the client's system (Trivellato: ¶0014-0017). Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Monastyrsky et al. (US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1), Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls", (Year: 2006)), Melski et al. (US PGPub No. 20160026791-A1), and Yavuz et al. (US PGPub No. 20200380124-A1).
With respect to claim 13, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the metadata comprises a second collection of one or more memory locations, the second collection being a collection of one or more respective calling functions that are configured to call the system call, wherein the one or more constraints on execution of the system call [are based on the second collection] (Monastyrsky: ¶0038 illustrates the aggregation of data upon execution of the process); wherein said determining that the system call event violates the one or more constraints on the execution of the system call comprises determining that the system call is called by a function that is not listed in the second collection (Monastyrsky: ¶0038 indicates the security module verdict is dependent on the data). Monastyrsky in view of Balinsky and Rajagopalan discloses the subject matter as discussed above but does not disclose: wherein the metadata comprises a second collection of one or more memory locations, the second collection being a collection of one or more respective calling functions that are configured to call the system call, wherein the one or more constraints on execution of the system call are based on the second collection; and wherein the execution of the system call comprises determining that the system call is called by a function that is not listed in the second collection. However, Melski teaches wherein the metadata comprises a second collection of one or more memory locations, the second collection being a collection of one or more respective calling functions that are configured to call the system call (¶0065-0067: for instructions with a potentially unsafe memory access in the subject application, a customized sequence (second collection) may be used to check if the instruction is safe).
wherein the one or more constraints on execution of the system call are based on the second collection (¶0067: instructions may be identified as dangerous based upon a predetermined list of potentially dangerous instruction types, predetermined parameters (constraints), or parameter values identified as potentially dangerous). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Melski regarding a second collection to the method of Monastyrsky in view of Balinsky and Rajagopalan in order to further protect the client's system from exploitation of the memory, prevent unauthorized programs from violating the client's system confidentiality, and help correctly execute the application program (Melski: ¶0004-0005). Monastyrsky in view of Balinsky, Rajagopalan, and Melski discloses the subject matter as discussed above but does not disclose: the execution of the system call comprising determining that the system call is called by a function that is not listed in the second collection. However, Ruiz teaches the execution of the system call comprising determining that the system call is called by a function that is not listed in the second collection (¶0019-0020: the functionality may be defined as a browser function that includes a function call, or a subroutine, to check an access control list (the second collection); for example, all custom functions contained in a browser correspond to one or more parameters in the access control list). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Ruiz with regards to a function not listed in the second collection to the method of Monastyrsky in view of Balinsky, Rajagopalan, and Melski in order to further protect the client's system from exposure and prevent untrusted entities from accessing the system (Ruiz: ¶0003).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Monastyrsky et al. (US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1), Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls", (Year: 2006)), and Eacmen et al. (US PGPub No. 20190294802-A1). With respect to claim 14, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the responsive action comprises at least one of: terminating the system call event; terminating a process executing the system call; and reporting the system call event (Monastyrsky: ¶0046: during the event of interception (constraints being met), the execution is halted (terminated); ¶0045 further illustrates that upon interception, one of the events is a notification from the OS). Monastyrsky in view of Balinsky and Rajagopalan discloses the subject matter as discussed above but does not disclose: wherein the method comprises determining a risk score of the system call event, wherein said performing the responsive action is performed based on the risk score of the system call event being above a threshold. However, Eacmen teaches wherein the method comprises determining a risk score of the system call event (¶0050 describes determining the risk level based on the vulnerability analysis); wherein said performing the responsive action is performed based on the risk score of the system call event being above a threshold (¶0037: the remediation module has multiple actions which are based on the level of risk of an executable). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Eacmen to the system of Monastyrsky in view of Balinsky and Rajagopalan to gauge the risk of the system in order to detect threats against components of a system (Eacmen: ¶0002).
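For illustration only (not part of the record), the claim 14 logic discussed above, i.e., performing a responsive action only when the risk score of a system call event exceeds a threshold, can be sketched as follows. The threshold value, action names, and function names are hypothetical.

```python
# Hypothetical sketch of threshold-gated responsive action for a
# system call event with an associated risk score.

RISK_THRESHOLD = 0.7  # assumed policy value, for illustration

def respond(event_name: str, risk_score: float) -> str:
    """Select a responsive action based on the event's risk score."""
    if risk_score <= RISK_THRESHOLD:
        return "allow"
    # Any of the claimed actions could apply here (terminate the event,
    # terminate the process, or report); terminating is shown as one option.
    return f"terminate:{event_name}"

assert respond("syscall_write", 0.4) == "allow"
assert respond("syscall_write", 0.95) == "terminate:syscall_write"
```

In practice, the risk score itself could be derived from the mapped abnormal behaviors and attack types discussed in the claim 12 rejection above.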
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Monastyrsky et al. (US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1), Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls", (Year: 2006)), and Atighetchi et al. (US PGPub No. 20200252418-A1). With respect to claim 21, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), but does not disclose wherein the firmware is a low-level software component that can directly access hardware of the embedded system. However, Atighetchi teaches wherein the firmware is a low-level software component that can directly access hardware of the embedded system (¶0073: in some embodiments, the system may be observed during a plurality of discrete temporal system states in an emulation environment; firmware may be emulated in this manner, as it may be more readily scalable for generating data across a large number of different firmware types used in embedded systems; in yet another example, an emulator that performs virtualization (e.g., QEMU) may generate log files about low-level observables being generated by the virtualization, and may add instrumentation to the code base to log firmware state with increased granularity). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Atighetchi regarding a low-level software component to the method of Monastyrsky in view of Balinsky and Rajagopalan in order to reduce the security risk of in-memory attacks while reducing resource intensity (Atighetchi: ¶0004-0006). Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Monastyrsky et al. (US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1), Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls", (Year: 2006)), and Conikee et al.
(US PGPub No. 20190108342-A1). With respect to claim 22, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above) wherein said identifying the system call event and (Monastyrsky: ¶0060-0064: in the event of the activation of a trigger the interceptor the analyzes the stack of the process created upon opening the file and identifies the sequence of function calls preceding event wherein identifies the sequence of function calls). Monastyrsky in view of Balinsky and Rajagopalan disclose the subject as discussed above but does not disclose: and said determining that the system call event violates the one or more constraints are performed by a runtime agent, wherein the runtime agent is configured to identify attacks that are based on memory corruption on a trusted system, wherein the trusted system includes a trusted application binary that is stored on device and trusted shared libraries used by the trusted application binary. However, Conikee and said determining that the system call event violates the one or more constraints are performed by a runtime agent, (¶0019: the runtime agent on execution on the application preferably includes monitoring the execution flow, which comprises of monitoring utilization of application controls through the execution of the application; detecting a security event which comprises identifying a section of the execution flow as a potential threat; and regulating the execution flow to prevent or ameliorate the security threat). 
wherein the runtime agent is configured to identify attacks that are based on memory corruption on a trusted system, (¶0019: securing an application through an application-aware runtime agent of preferred embodiment can include: acquiring a code profile which is generated using a code analysis engine (trusted system) instrumenting the application with a runtime agent according to the code profile enforcing the runtime agent of the execution of the application and optionally responding to the runtime agent. The method functions to apply code analysis to instrumentation of the application). wherein the trusted system includes a trusted application binary that is stored on device (¶0074-0075: The code profile is preferably generated using a code analysis engine. ¶0021: in which the code profile is preferably generated for particular scope i.e., a subject of the application source code. and trusted shared libraries used by the trusted application binary. (¶0023: The code profile (trusted system) can preferably be broken down into a set of application controls that in combination may encapsulate the attack surface of the code, wherein the attack surface describes region of source code that may introduce security vulnerabilities into the application. The code profile with a set of controls is provided for the application (e.g., for commonly used library or web application) (trusted shared libraries)) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize the teachings of Conikee with regards the trusted system and runtime agent to the method of Monastyrsky in view of Balinsky and Rajagopalan in order to leverage code analysis in combination with runtime behavior detection to achieve higher precision analysis result through validating conditions in runtime (Conikee: ¶0015). Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Monastyrsky et al. 
(US PGPub No. 20200210591-A1) in view of Balinsky et al. (US PGPub No. 20130219453-A1), Rajagopalan et al. ("System Call Monitoring Using Authenticated System Calls" (Year: 2006)), and Suwad et al. (US PGPub No. 20210089647-A1).

With respect to claim 23, the combination of Monastyrsky in view of Balinsky and Rajagopalan teaches the method of claim 1 (see rejection of claim 1 above), wherein the collection of one or more authorized memory locations with respect to the system call comprises a first memory location in which the system call is invoked (Rajagopalan, page 222, 4.1 Implementation Overview: PLTO is an optimization tool and, as a result, requires relocatable binaries (i.e., binaries in which the locations of addresses are marked) so that addresses can be adjusted as code transformations move data and code locations. Our installer currently inherits this requirement, although it should be straightforward to generate policies for binaries without relocation information. One impact of this restriction is that the binaries we test in the next section had to be compiled from source, since binaries shipped with standard Linux and Unix distributions do not contain relocation information. Note that our installer outputs nonrelocatable statically linked binaries, since our policies include the absolute locations (authorized memory locations) of all system calls. Figure 3 (page 224) further provides the results of generating ASC policies for four programs: the three from above and tar, the Unix archiving program. The sites column indicates the number of separate system call locations in the program, calls indicates the number of different system calls, and args gives the total number of arguments (not including the system call number) from all the call sites. The auth column lists the number of arguments that could be determined by the static analysis done by the installer and that could be authenticated by the basic approach.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Rajagopalan, regarding a constraint that the memory location is included in a collection of one or more authorized memory locations with respect to the system call, to the method of Monastyrsky in view of Balinsky in order to mitigate attacks such as buffer overflow (Rajagopalan: pages 217-218).

Monastyrsky in view of Balinsky and Rajagopalan does not disclose: wherein the second collection comprises a second memory location of a calling function in which the system call is invoked, the first memory location is located within the calling function, whereby an attack that utilizes the first memory location to invoke the system call without calling the calling function is determined as a violation of the one or more constraints.

However, Suwad teaches wherein the second collection comprises a second memory location of a calling function in which the system call is invoked (¶0293: The collection here will be done on system calls, which will lead to catching every critical operation. The parser then processes collected system calls, extracting needed information such as system call name and file name. The decision maker starts checking user processes that initiate system calls and consulting the reference model (i.e., the critical assets scope of control).); the first memory location is located within the calling function (¶0355: Figure 19 shows installing the hook upon knowing the address space of the NtCreateFile system call. This enables us to create a virtual memory to store the hook structs and call stack of the function call.); whereby an attack that utilizes the first memory location to invoke the system call without calling the calling function (¶0270: This is the phase responsible for virtual machine introspection by collecting system calls generated by the virtual machine without its knowledge, since system calls are intercepted (without calling the calling function) and logged at the hypervisor level.) is determined as a violation of the one or more constraints (¶0294: If a violation is detected, a warning message is sent by the decision maker to the tuner in the security system. The tuner can employ different strategies to harden accessibility to critical assets. Some implementations can make the attack surface dynamic using techniques such as bio-inspired MTD, cloud-based MTD, and dynamic network configuration.).

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Suwad regarding the second collection to the method of Monastyrsky in view of Balinsky and Rajagopalan in order to detect an attack happening to the system (Suwad: ¶0033-0034).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAYLOR P VU, whose telephone number is (703) 756-1218. The examiner can normally be reached MON-FRI, 7:30-5:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alexander Lagor, can be reached at (571) 270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/T.P.V./
Examiner, Art Unit 2437

/ALEXANDER LAGOR/
Supervisory Patent Examiner, Art Unit 2437
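The two constraints at issue in the claim 23 rejection above, an allow-list of authorized call sites per system call (the Rajagopalan-style policy) plus a check that the call site was actually reached through its calling function (the Suwad-mapped limitation), can be sketched roughly as follows. All addresses, ranges, and names are illustrative assumptions, not data from any cited reference.

```python
# Illustrative sketch of the claim 23 constraints. The policy tables below are
# hypothetical; real policies would be generated by static analysis of the binary.

# First collection: authorized call-site addresses for each system call
# (cf. Rajagopalan's authenticated-system-call policies recording the
# absolute locations of all system calls).
AUTHORIZED_SITES = {
    "open": {0x401230, 0x401510},
}

# Second collection: the address range of the calling function that
# legitimately contains each authorized call site.
CALLING_FUNCTIONS = {
    0x401230: (0x401200, 0x401300),  # e.g. a hypothetical read_config()
    0x401510: (0x401500, 0x401600),  # e.g. a hypothetical load_plugin()
}

def violates_constraints(syscall, call_site, return_addr):
    """Return True if the system call event violates either constraint."""
    # Constraint 1: the call must come from an authorized memory location.
    if call_site not in AUTHORIZED_SITES.get(syscall, set()):
        return True
    # Constraint 2: the return address on the stack must point back into the
    # calling function. An attack that jumps straight to the call site without
    # calling the calling function leaves a return address outside this range.
    lo, hi = CALLING_FUNCTIONS[call_site]
    return not (lo <= return_addr < hi)
```

Under this sketch, a call from an unlisted address fails constraint 1 immediately, and a control-flow hijack that branches directly to an authorized call site still fails constraint 2, which is the "without calling the calling function" violation the claim recites.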

Prosecution Timeline

Oct 04, 2021
Application Filed
Oct 20, 2023
Non-Final Rejection — §103, §112
Mar 26, 2024
Response Filed
Apr 10, 2024
Non-Final Rejection — §103, §112
Aug 13, 2024
Response Filed
Oct 08, 2024
Final Rejection — §103, §112
Feb 27, 2025
Request for Continued Examination
Mar 03, 2025
Response after Non-Final Action
Mar 20, 2025
Non-Final Rejection — §103, §112
Jun 26, 2025
Response Filed
Sep 30, 2025
Final Rejection — §103, §112
Jan 02, 2026
Request for Continued Examination
Jan 15, 2026
Response after Non-Final Action
Feb 10, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12506662
SERVICE PROVISION METHOD, DEVICE, AND STORAGE MEDIUM
2y 5m to grant · Granted Dec 23, 2025
Patent 12505223
System & Method for Detecting Vulnerabilities in Cloud-Native Web Applications
2y 5m to grant · Granted Dec 23, 2025
Patent 12491837
ELECTRONIC SIGNAL BASED AUTHENTICATION SYSTEM AND METHOD THEREOF
2y 5m to grant · Granted Dec 09, 2025
Patent 12411931
FUEL DISPENSER AUTHORIZATION AND CONTROL
2y 5m to grant · Granted Sep 09, 2025
Patent 12399979
PROVISIONING A SECURITY COMPONENT FROM A CLOUD HOST TO A GUEST VIRTUAL RESOURCE UNIT
2y 5m to grant · Granted Aug 26, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

6-7
Expected OA Rounds
81%
Grant Probability
94%
With Interview (+12.8%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 26 resolved cases by this examiner. Grant probability derived from career allow rate.
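The headline projection figures follow from simple arithmetic on the career data shown above. A quick check, assuming (as a guess at the tool's method, not something it documents) that the interview lift is added as straight percentage points to the base allow rate:

```python
# Reconstruct the headline probabilities from the examiner's career data:
# 21 granted out of 26 resolved cases, with a +12.8 percentage-point
# interview lift assumed to add directly to the base rate.
granted, resolved = 21, 26
base_rate = granted / resolved               # 21/26 ≈ 0.808
interview_lift = 0.128                       # +12.8 percentage points
with_interview = base_rate + interview_lift  # ≈ 0.936

print(round(base_rate * 100))       # 81  (Grant Probability)
print(round(with_interview * 100))  # 94  (With Interview)
```

Both rounded values match the dashboard's 81% and 94%, which supports the note that grant probability is derived directly from the career allow rate.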
