Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Detailed Action
Claims 1, 2, 4, 5, and 12 are amended.
Claims 3, 6, 13, 15, and 17-22 are cancelled.
Claims 1, 2, 4, 5, 7-12, 14, and 16 are pending.
Priority
This application claims priority under 35 U.S.C. 119(a) to EP patent application 22176333.7, filed on May 31, 2022. All foreign priority documents have been received. Therefore, the effective filing date of this application is May 31, 2022.
Response to Arguments
Applicant’s arguments filed on 12/09/2025 have been fully considered.
With respect to the 35 U.S.C. 103 rejection of claim 1, Applicant’s representative has argued that KRAEMER-FAN fails to teach the limitation of “monitoring or collecting, by the sandbox system, a behavior of the application (i) in the case in which the risk rating of the application is above the risk rating threshold value or (ii) in the case in which the risk rating of the application is unknown, and evaluating the monitored or collected behavior of the application to obtain an analysis result as to whether or not the application is malware; and reporting the analysis result to the real-time monitor”. Examiner is no longer relying on FAN to teach the limitations of the independent claims. Examiner now relies on a new reference, LANGTON (US-20160292419-A1), to better teach these limitations.
Additional arguments are moot in view of new grounds of rejection necessitated by the claim amendments.
Claim Objections
Claims 1 and 4 are objected to because they are duplicates. Claims 1 and 4 are both independent claims that recite a method performing the same features. Examiner suggests either changing the claim limitations or omitting one of the independent claims. Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is:
“a sandbox system to analyze” in claims 1, 4, and 12
Because this claim limitation(s) is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
See specification paras. [0054] and [0058] for functional support.
If applicant does not intend to have this limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 2 recites the limitation “the analysis report”. There is insufficient antecedent basis for this limitation in the claim. For the purpose of examination, the examiner is interpreting this limitation as “the analysis result”. Appropriate correction is required.
Claim limitation “a sandbox system to analyze” invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The specification states in para. 0054 that “A sandbox unit can in one embodiment of the invention be a group of components”. However, there is no description of the sandbox system being implemented using any hardware components. The specification broadly describes figure 5 as consisting of a processor and memory. However, there is no description showing that the processor of figure 5 implements the sandbox system. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 4, 7-12, 14, and 16 are rejected under 35 U.S.C. 101 because they are directed to an abstract idea.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim recites “A method for threat detection in a computer or computer network, the method comprising: determining that an application is starting at the computer; intercepting the application start; identifying a risk rating of the application; based on the identified risk rating of the application, requesting, by a real-time monitor, a sandbox system to analyze the application (i) in a case in which the risk rating of the application is above a risk rating threshold value or (ii) in a case in which the risk rating of the application is unknown; allowing the application to run after the identification of the risk rating of the application; monitoring or collecting, by the sandbox system, a behavior of the application (i) in the case in which the risk rating of the application is above the risk rating threshold value or (ii) in the case in which the risk rating of the application is unknown, and evaluating the monitored or collected behavior of the application to obtain an analysis result as to whether or not the application is malware; and reporting the analysis result to the real-time monitor.”
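For illustration only, and not as part of the claims or the record, the decision flow recited above can be sketched as follows. All names, classes, and the threshold value are hypothetical stand-ins, not structure disclosed in the application:

```python
# Hypothetical sketch of the recited flow: identify a risk rating,
# request sandbox analysis when the rating is above a threshold or is
# unknown, report the analysis result to the real-time monitor, and
# allow the application to run after the risk rating is identified.

RISK_THRESHOLD = 0.7  # assumed value; the claim recites no specific threshold


class Sandbox:
    """Stand-in for the claimed sandbox system (behavior monitoring/evaluation)."""

    def analyze(self, app):
        # Placeholder analysis result; the claim leaves the evaluation open.
        return {"app": app, "malware": False}


class Monitor:
    """Stand-in for the claimed real-time monitor."""

    def __init__(self):
        self.reports = []

    def report(self, result):
        self.reports.append(result)


def on_application_start(app, risk_ratings, sandbox, monitor):
    # "intercepting the application start; identifying a risk rating"
    rating = risk_ratings.get(app)  # None models an unknown rating
    # Request sandbox analysis if the rating is above the threshold or unknown.
    if rating is None or rating > RISK_THRESHOLD:
        result = sandbox.analyze(app)
        monitor.report(result)  # "reporting the analysis result"
    # "allowing the application to run after the identification"
    return "running"
```

The sketch reflects that a known low-risk application bypasses the sandbox entirely, while a high-risk or unknown application is analyzed and the result reported.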
The limitation of determining that an application is starting at the computer and intercepting the application start, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually determine that an application is starting at the computer and intercept the application start.
The limitation of identifying a risk rating of the application, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually identify a risk rating.
The limitation of based on the identified risk rating of the application, requesting, by a real-time monitor, a sandbox system to analyze the application (i) in a case in which the risk rating of the application is above a risk rating threshold value or (ii) in a case in which the risk rating of the application is unknown, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually analyze an application using a sandbox based on a risk rating of an application satisfying a threshold.
The limitation of allowing the application to run after the identification of the risk rating of the application, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually allow an application to run.
The limitation of monitoring or collecting, by the sandbox system, a behavior of the application (i) in the case in which the risk rating of the application is above the risk rating threshold value or (ii) in the case in which the risk rating of the application is unknown, and evaluating the monitored or collected behavior of the application to obtain an analysis result as to whether or not the application is malware, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually monitor a behavior of an application in a sandbox system to obtain an analysis result.
The limitation of reporting the analysis result to the real-time monitor, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can be performed in the mind. A user can manually report analysis results.
This judicial exception is not integrated into a practical application. The claim recites of a limitation of “reporting the analysis result to the real-time monitor”. This limitation is used to generally report analysis results of applications monitored in a sandbox. The limitation does not place any limit on what happens to the application after it is determined to be malware or not. Merely reporting analysis results does not integrate the abstract idea into a practical application. Accordingly, this additional element does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The claim is not patent eligible.
Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites “further comprising identifying that the application is malware in the analysis report by one or more of: (i) monitoring the behavior of the application when the application is running and (ii) based on signatures of the application.” This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine that an application is malware in the analysis report by monitoring the behavior of the application or based on a signature.
Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 4 recites the same limitations as claim 1. Therefore, claim 4 is rejected in a similar manner.
Claim 7 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites “wherein the identifying the risk rating of the application comprises making a query to one or more of: (i) a risk rating database, and (ii) a reputation database at one or more of: (i) the computer and (ii) a backend of a threat detection network.” This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually identify the risk rating of the application by making a query.
Claim 8 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites “wherein the identifying one or more of: (i) the risk rating of the application and (ii) whether the application is malware or not is based on input from users of computers of a threat detection network.” This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine the risk rating, and whether an application is malware, based on input from users of computers.
Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites “wherein the risk rating of the application is at least in part based on a user decision history for one or more of the application and past applications one or more of: (i) received from users of the computer network and (ii) collected by a backend of a threat detection network.” This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine the risk rating of an application based on a user decision history.
Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites “wherein a user decision history received from a user at the computer for the application is reported to a threat detection network.” This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually report a user decision history, received from a user at the computer for the application, to a threat detection network.
Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites “wherein a sensor at the computer is used to intercept one or more of a file, a system configuration value, and network operations called by the application.” This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually intercept one or more of a file, a system configuration value, and network operations called by the application.
Claim 12 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites features similar to those of claim 1. Therefore, claim 12 is rejected in a similar manner as in the rejection of claim 1.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. In particular, the claim recites only one additional element, “at least one computer configured to”. The “at least one computer configured to” is recited at a high level of generality (i.e., as a generic computer implementing the system) such that it amounts to no more than mere instructions to apply the exception using a generic computer. Mere instructions to apply an exception using a generic computer cannot provide an inventive concept. The claim is not patent eligible.
Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites features similar to those of claims 1 and 12. Therefore, claim 14 is rejected in a similar manner as in the rejection of claims 1 and 12.
Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. This claim recites “wherein the computer is a network node or an endpoint.” This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers steps that can also be performed in the mind. A user can manually determine that the computer is a network node or an endpoint.
The dependent claims 2, 7-12 and 16 are directed to abstract ideas and do not include additional elements that are sufficient to amount to significantly more than the judicial exception. This judicial exception is not integrated into a practical application. Therefore, the claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1, 2, 4, 7, 11, 12, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over KRAEMER (US-20210232685-A1) in view of LANGTON (US-20160292419-A1), hereinafter KRAEMER-LANGTON.
Regarding claim 1, KRAEMER teaches “A method for threat detection in a computer or computer network, the method comprising: determining that an application is starting at the computer; ([KRAEMER, para. 0006] “behavior-based threat detection involves monitoring the execution of a computational unit (e.g., a thread, a process, an application, etc.) and identifying suspicious features”) ([KRAEMER, para. 0025] “the computational entity includes … an application”) ([KRAEMER, para. 0026] “based on first events initiated by a computational entity in a computer system, detecting one or more instances of the computational entity engaging in one or more first behaviors”) intercepting the application start; ([KRAEMER, para. 0030] “the potential ransomware category assigned to the computational entity is the first category, the modification operations include one or more file modification operations targeting one or more files, and the protection actions include intercepting the file modification operations and creating backup copies of the files before permitting the file modification operations to modify the files.”) identifying a risk rating of the application; ([KRAEMER, para. 0010] “the security engine may assign a potential ransomware category to the computational entity based on the detected behaviors. … The security engine may then monitor the computational entity for specific behavior corresponding to the assigned ransomware category.”) ([KRAEMER, para. 0062] “A computational entity may be assigned to ransomware Category A if the entity exhibits behaviors associated with encrypting files, including, without limitation, (1) enumerating a storage device or a directory of a file system, and/or (2) modifying private files”) based on the identified risk rating of the application … ([KRAEMER, para. 
0142] “The security engine 100 may notify the data backup and recovery module (which may be embedded within or external to the security engine 100) when to start performing backups of files affected by a potentially malicious application. For example, the security engine 100 can send the notification early, when a sample has been pre-categorized as potentially malicious, but before any data loss occurred. From that point onwards, the data backup and recovery module may monitor file operations performed by the application/process (or process chain) … creating a copy of each file that the monitored application is preparing to modify or delete, and storing the copies in a backup area.”) … allowing the application to run after the identification of the risk rating of the application; and … ([KRAEMER, para. 0010] “If such behavior is detected, the security engine may assign a potential ransomware category to the computational entity based on the detected behaviors … initiate protection actions to protect storage resources of the computer system from potentially malicious behaviors of the computational entity, which may continue to execute. The security engine may then monitor the computational entity”)
However, KRAEMER does not teach “… based on the identified risk rating of the application requesting, by a real-time monitor, a sandbox system to analyze the application (i) in a case in which the risk rating of the application is above a risk rating threshold value or (ii) in a case in which the risk rating of the application is unknown … monitoring or collecting, by the sandbox system, a behavior of the application (i) in the case in which the risk rating of the application is above the risk rating threshold value or (ii) in the case in which the risk rating of the application is unknown, and evaluating the monitored or collected behavior of the application to obtain an analysis result as to whether or not the application is malware; and reporting the analysis result to the real-time monitor.”.
In analogous teaching LANGTON teaches “… based on the identified risk rating of the application requesting, by a real-time monitor, a sandbox system to analyze the application (i) in a case in which the risk rating of the application is above a risk rating threshold value or (ii) in a case in which the risk rating of the application is unknown … monitoring or collecting, by the sandbox system, a behavior of the application (i) in the case in which the risk rating of the application is above the risk rating threshold value or (ii) in the case in which the risk rating of the application is unknown, and evaluating the monitored or collected behavior of the application to obtain an analysis result as to whether or not the application is malware; ([LANGTON para. 0038] “A file may include, for example, an executable file, an application, a program, a document, a driver, a script, or the like.”) ([LANGTON para. 0084] “security device 220 may modify a group of malware scores corresponding to the group of files. For example, security device 220 may associate each file, included in the group of files, with a malware score.”) ([LANGTON para. 0087] “process 800 may include determining whether one or more malware scores, for one or more files, satisfy a threshold (block 820). For example, security device 220 may analyze one or more malware scores, corresponding to one or more files, to determine whether the one or more malware scores satisfy a threshold. For example, security device 220 may compare malware scores, for analyzed files, to a threshold value.”) ([LANGTON para. 0088] “if one or more malware scores, for one or more files, satisfy a threshold (block 820—YES), process 800 may include analyzing the file(s) for malware (block 830). For example, if security device 220 determines that a malware score, for a file, satisfies the threshold, then security device 220 may analyze the file (e.g., individually) for malware. 
In this way, security device 220 may individually analyze a file for malware when the file has a higher probability of being malware (e.g., as indicated by the malware score)”) ([LANGTON para. 0118] “As used herein, satisfying a threshold may refer to a value being greater than the threshold, more than the threshold”) ([LANGTON para. 0097] “implementations, security device 220 may analyze the additional group of files in a testing environment, such as a sandbox environment. Security device 220 may analyze the additional group of files for malware by executing the additional group of files in the testing environment, and by monitoring the testing environment for behavior indicative of malware. For example, security device 220 may execute each file, in the additional group of files, sequentially or in parallel.”) and reporting the analysis result to the real-time monitor ([LANGTON para. 0112] “As shown by reference number 960, assume that security device 220 analyzes FileH in a sandbox environment, and determines that FileH is malware. Based on this determination, and as shown by reference number 965, security device 220 may perform an action to counteract FileH, determined to be malware. For example, security device 220 may indicate that FileH is malware, may prevent client device(s) 210 from accessing FileH, may notify a device associated with an administrator that FileH is malware, or the like.”) ([LANGTON para. 0073] “For example, security device 220 may provide an indication (e.g., to client device 210, to a device associated with a network administrator, etc.) that the file includes malware. Additionally, or alternatively, security device 220 may prevent one or more client devices 220 from accessing the file”)
Thus, given the teaching of LANGTON, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of sandboxing applications by LANGTON with the teaching of a method of threat detection in a computer by KRAEMER. One of ordinary skill in the art would have been motivated to do so because LANGTON recognizes the benefit of efficiently detecting malware ([LANGTON, para. 0066] “security device 220 may determine whether the segments include malware in a shorter amount of time as compared to using a single testing environment during different time periods, thereby improving a user experience by making non-malware files available to a user earlier in time.”).
Regarding claim 2, KRAEMER-LANGTON teaches all limitations of claim 1. LANGTON further teaches “further comprising identifying that the application is malware in the analysis report by one or more of: monitoring the behavior of the application when the application is running and based on signatures of the application.” ([LANGTON para. 0112] “As shown by reference number 960, assume that security device 220 analyzes FileH in a sandbox environment, and determines that FileH is malware. Based on this determination, and as shown by reference number 965, security device 220 may perform an action to counteract FileH, determined to be malware. For example, security device 220 may indicate that FileH is malware, may prevent client device(s) 210 from accessing FileH, may notify a device associated with an administrator that FileH is malware, or the like.”) ([LANGTON para. 0073] “For example, security device 220 may provide an indication (e.g., to client device 210, to a device associated with a network administrator, etc.) that the file includes malware. Additionally, or alternatively, security device 220 may prevent one or more client devices 220 from accessing the file”)
The same motivation to modify KRAEMER with LANGTON as in the rejection of claim 1 applies.
Regarding claim 4, this claim is a method claim that recites features similar to those of method claim 1. Therefore, claim 4 is rejected in a similar manner as in the rejection of claim 1.
Regarding claim 7, KRAEMER-LANGTON teaches all limitations of claim 1. KRAEMER further teaches “wherein the identifying the risk rating of the application comprises making a query to one or more of: a risk rating database, and reputation database at one or more of: the computer and a backend of a threat detection network.” ([KRAEMER, para. 0004] “signature-based malware detection involves obtaining a copy of a file that is known to contain threatware, analyzing the static features of the file (e.g., the sequence of bytes contained in the file) to extract a static signature that is characteristic of the threatware, and adding the threatware's static signature to a database (often referred to as a “blacklist”) of known cybersecurity threats. When a user attempts to access (e.g., download, open, or execute) a file, the cybersecurity engine scans the file and extracts the file's static signature. If the file's static signature matches a signature on the blacklist, the cybersecurity engine detects the presence of a threat”)
Regarding claim 11, KRAEMER-LANGTON teaches all limitations of claim 1. KRAEMER further teaches “wherein a sensor at the computer is used to intercept one or more of a file, a system configuration value and network operations called by the application.” ([KRAEMER, para. 0030] “the modification operations include one or more file modification operations targeting one or more backup files, and the protection actions include intercepting the file modification operations and creating backup copies of the backup files before permitting the file modification operations to modify the backup files.”) ([KRAEMER, para. 0052] “The tracking module 110 may monitor events occurring in the computer system and track the behavior of computational entities in the computer system based on the monitored events. A computational entity may be, for example, a thread, a process, an application, or a related set of two or more threads, processes, and/or applications”).
Regarding claim 12, this claim recites an arrangement comprising a computer configured to perform the steps of method claim 1. Therefore, claim 12 is rejected in a similar manner as in the rejection of claim 1.
Regarding claim 14, this claim recites a computer readable medium storing instructions which, when executed, perform the steps of method claim 1. Therefore, claim 14 is rejected in a similar manner as in the rejection of claim 1.
Regarding claim 16, KRAEMER-LANGTON teaches all limitations of claim 1. KRAEMER further teaches “wherein the computer is a network node or an endpoint.” ([KRAEMER, para. 0050] “The behavioral security engine 100 may be a component of a cybersecurity engine, and its modules may cooperate to detect potential threats from threatware, including file-based threats and/or fileless (e.g., streaming) threatware threats, by monitoring and analyzing events (e.g., stream of events) on a computer system (e.g., an endpoint computer system).”) ([KRAEMER, para. 0136] “The techniques described in this Section may be used to enhance a behavioral security engine 100 for endpoint devices (e.g., desktops, laptops, terminals, servers, embedded systems)”)
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over KRAEMER-LANGTON in view of FAN (US-7784098-B1).
Regarding claim 5, KRAEMER-LANGTON teaches all limitations of claim 1. KRAEMER further teaches “…. terminating malware processes” ([KRAEMER, para. 0169] “If the CE's threat score exceeds the threshold, the process 250 proceeds to step 392. At step 392, if a prevention policy is enabled, the behavioral security engine 100 terminates the CE (step 395) and restores the files modified by the CE (step 398).”).
LANGTON further teaches “further comprising, in the case that the analysis result indicated that the application is malware, removing the malware …” ([LANGTON para. 0112] “As shown by reference number 960, assume that security device 220 analyzes FileH in a sandbox environment, and determines that FileH is malware. Based on this determination, and as shown by reference number 965, security device 220 may perform an action to counteract FileH, determined to be malware. For example, security device 220 may indicate that FileH is malware, may prevent client device(s) 210 from accessing FileH, may notify a device associated with an administrator that FileH is malware, or the like.”) ([LANGTON para. 0073] “For example, security device 220 may provide an indication (e.g., to client device 210, to a device associated with a network administrator, etc.) that the file includes malware. Additionally, or alternatively, security device 220 may prevent one or more client devices 220 from accessing the file”)
The same motivation to modify KRAEMER with LANGTON as in the rejection of claim 1 applies.
However, KRAEMER-LANGTON does not teach “… and deleting registry values pointing to malware components and files.”
In analogous teaching FAN teaches “… and deleting registry values pointing to malware components and files.” ([FAN, col. 12 lines 19-25] “Further, new or modified registry entries from the restore point log are compared to the malware report to see if that particular malware also creates or modifies registry entries in the same fashion. In one embodiment, all of the restore point logs are compared to a single malware report first and then the logs are compared to the next malware report”) ([FAN, col. 7 lines 44-46] “Suitable snapshot/restore applications are known in the art and will typically restore such information as the registry, profiles, caches, files with certain extensions, etc.”).
Thus, given the teaching of FAN, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of deleting registry values by FAN into the teaching of a method of threat detection in a computer by KRAEMER-LANGTON. One of ordinary skill in the art would have been motivated to do so because FAN recognizes the need to efficiently recover a computer system ([FAN, col. 2 lines 21-24] “it is desirable to have a system and technique that would address the above deficiencies in the prior art and would allow a computer system to recover properly and with minimal effort after being infected by malicious software.”) ([FAN, col. 2 lines 31-34] “a technique is disclosed that provides a prediction of the point in time when malicious software begins to infect a computer system and recommends the best point from which to restore the computer system.”)
Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over KRAEMER-LANGTON in view of SIFFORD (US-20180255073-A1).
Regarding claim 8, KRAEMER-LANGTON teaches all limitations of claim 1. However, KRAEMER-LANGTON does not teach “wherein the identifying one or more of the risk rating of the application and whether the application is malware or not is based on input from the users of the computers of a threat detection network.”.
In analogous teaching SIFFORD teaches “wherein the identifying one or more of the risk rating of the application and whether the application is malware or not is based on input from the users of the computers of a threat detection network.” ([SIFFORD, para. 0051] “the system may be configured to receive an indication from the user computing device indicating that the electronic file is malware. In response, the system may initiate an intrusion detection protocol configured to deny the electronic file access to the third network device based on at least receiving the indication that the electronic file is malware. In addition, the system may initiate a control signal configured to cause the third hash value to be added to the database”) ([SIFFORD, para. 0030] “The user device 104 may refer to any computerized apparatus that can be configured to perform any one or more of the functions of the user device 104 described and/or contemplated herein. For example, the user may use the user device 104 to transmit and/or receive information or commands to and from the detection system 108”).
Thus, given the teaching of SIFFORD, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of user input for threat detection by SIFFORD into the teaching of a method of threat detection in a computer by KRAEMER-LANGTON. One of ordinary skill in the art would have been motivated to do so because SIFFORD recognizes the need to improve detection of threats ([SIFFORD, para. 0001] “There is a need for enhanced detection of such polymorphic malicious content as it propagates through the network of an entity.”).
Regarding claim 9, KRAEMER-LANGTON teaches all limitations of claim 1. However, KRAEMER-LANGTON does not teach “wherein the risk rating of the application is at least in part based on a user decision history, for one or more of the application and past applications one or more of received from users of the computer network and collected by a backend of a threat detection network.”.
In analogous teaching SIFFORD teaches “wherein the risk rating of the application is at least in part based on a user decision history, for one or more of the application and past applications one or more of received from users of the computer network and collected by a backend of a threat detection network.” ([SIFFORD, para. 0051] “the system may be configured to receive an indication from the user computing device indicating that the electronic file is malware. In response, the system may initiate an intrusion detection protocol configured to deny the electronic file access to the third network device based on at least receiving the indication that the electronic file is malware. In addition, the system may initiate a control signal configured to cause the third hash value to be added to the database”) ([SIFFORD, para. 0050] “the system may be configured to determine that the electronic file is malware based on at least determining a match between the third hash value and at least one of the one or more hash value states in the database”) ([SIFFORD, para. 0030] “The user device 104 may refer to any computerized apparatus that can be configured to perform any one or more of the functions of the user device 104 described and/or contemplated herein. For example, the user may use the user device 104 to transmit and/or receive information or commands to and from the detection system 108”).
The same motivation to modify KRAEMER-LANGTON with SIFFORD as in the rejection of claim 8 applies.
Regarding claim 10, KRAEMER-LANGTON teaches all limitations of claim 1. However, KRAEMER-LANGTON does not teach “wherein a user decision history received from a user at the computer for the application is reported to a threat detection network.”.
In analogous teaching SIFFORD teaches “wherein a user decision history received from a user at the computer for the application is reported to a threat detection network.” ([SIFFORD, para. 0050] “the process flow includes initiating a control signal configured to store the one or more hash value states in a database associated with the third network device.”) ([SIFFORD, para. 0051] “the system may be configured to receive an indication from the user computing device indicating that the electronic file is malware. In response, the system may initiate an intrusion detection protocol configured to deny the electronic file access to the third network device based on at least receiving the indication that the electronic file is malware. In addition, the system may initiate a control signal configured to cause the third hash value to be added to the database”) ([SIFFORD, para. 0028] “the detection system 108 is operatively coupled, via a network 101 to the user device 104 … the detection system 108 can send information to and receive information from the user device 104”) ([SIFFORD, para. 0038] “the detection system 108 comprises … includes data storage 138 for storing data related to the system environment 100, but not limited to data created and/or used by the application 142”).
The same motivation to modify KRAEMER-LANGTON with SIFFORD as in the rejection of claim 8 applies.
Pertinent Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
SCHWARTZ (US-20230177144-A1): This prior art teaches a system for inoculating a computer network against malware. Specifically, environmental indicators used by anti-analysis and target filtering mechanisms of a malware program may be determined based on analysis within a virtual or physical sandbox environment. The environmental indicators may be sent to computing devices associated with the computing network. The malware program, based on the environmental indicators, may be spoofed to assume that a computing device is associated with an anti-malware system, and/or is a device that is not to be infected. Based on this assumption, the malware program may not execute within the computing device.
HUTTON (US-9330264-B1): This prior art teaches a system and method for calculating a risk assessment for an electronic file. A database of checks, organized into categories, can be used to scan electronic files. The categories of checks can include weights assigned to them. An analyzer analyzes electronic files using the checks. Issues identified by the analyzer can be weighted using the weights to determine a risk assessment for the electronic file.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AFAQ ALI whose telephone number is (571)272-1571. The examiner can normally be reached Mon - Fri 7:30am - 5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALI SHAYANFAR, can be reached at (571) 270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.A./
03/05/2026
/AFAQ ALI/Examiner, Art Unit 2434
/NOURA ZOUBAIR/Primary Examiner, Art Unit 2434