DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on December 17, 2025 has been entered.
Response to Amendment
3. Claims 1, 11, 16 and 17 have been amended. Claims 15 and 24 are canceled. Claims 25 and 26 are new claims. Claims 1, 4-11, 14, 16-23, 25 and 26 are presented for examination.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
7. Claims 1, 4, 5, 7-11, 14, 17-21, 23 and 26 are rejected under 35 U.S.C. § 103 as being unpatentable over Kopp et al. (US 2018/0375884 A1), hereafter Kopp, in view of Thomas et al. (US 2023/0308460 A1), hereafter Thomas.
Note that the discussion below indicates what the cited art does not teach.
Regarding claim 1, Kopp teaches a method comprising steps of: {Kopp [Para. 0009] “A method for detecting certain user behavior activities of interest may be performed by a server in a network or outside of a network.”}
monitoring user traffic in a cloud-based system; {Kopp [Para. 0009] “The server monitors network traffic relating to user behavior activities in the network.” [Para. 0010] “Moreover, user behavior activities may include actions performed by software running in a virtualized user space in a data center/cloud computing environment.”}
performing an inspection of the user traffic to determine if the user traffic includes malicious content; {Kopp [Para. 0024] “The detection server retrieves historical network traffic stored in firewall 160 or obtains network traffic in real-time from firewall 160. At 206, the detection server is configured to identify a subset of the network traffic in the transactions as traffic suspected of relating to certain user behavior activities of interest. For example, the certain user behavior activities may include malware activities including network traffic related to IP address checks, a destination with low popularity, TOR web browser usage,…software updating, downloading of graphical images, etc.”}
assigning a label to content of the user traffic, the label identifying the content as having any of a full match, a partial match, or no match to malicious content based on the inspection; {Kopp [Para. 0012] “Each suspicious user behavior activity can be regarded as a weak IoC (Indicators of Compromise), representing different events in the traffic, but alone is not sufficient to trigger a security incident.” [Para. 0026] “At 210, the detection server is configured to assign a subset of the network traffic in the transaction into one or more groups based on one or more types of certain user behavior activities of interest. For example, in the case of malware user behavior activities, the subset of the network traffic in the transaction may be assigned to one or more malware groups of click-fraud, ad-injector, information stealer, banking Trojan, exfiltration, or any other known or later-developed malware types.” [Para. 0037] “Rules extracted according to the techniques disclosed herein identify and classify malware, and provide description of malware behavior.”} Kopp detects and categorizes malicious user activities by malware group.
and performing any of blocking the content, allowing the content, {Kopp [Para. 0030] “After the system detects certain user behavior activities of interest in the network traffic, at 218, the detection server is configured to take security measures in response to the detection. For example, in the case of malignant user behavior activities, the detection server may configure a firewall to block the network traffic it deems malicious.”}
and storing a context entry of the content based on the label assigned to the content, wherein the context entry is maintained for a period of time and, {Kopp [Para. 0009] “The server stores data representing network traffic within a plurality of time periods. Each of the time periods serves as a transaction such that data for each of a plurality of transactions comprising one or more user behavior activities is stored over time. Subsets of the network traffic in the transactions as traffic suspected of relating to the certain user behavior activities are identified. The server assigns the subsets of the network traffic in the transactions into one or more groups based on one or more types of certain user behavior activities. The server determines one or more detection rules for each of the one or more groups based on identifying, for each of the groups, a number of user behavior activities common to each of the subsets of the network traffic.” [Para. 0017] “The memory 126 may store malware intelligence data, such as policies or rules for network security and/or identifying malicious user behavior activities.”} Also see para. 0024.
during a subsequent session of a user, is used in combination with inspected content to determine that a combination indicates malicious content.{Kopp [Para. 0030] “At 216, the detection server is configured to use the one or more detection rules determined at 214 to monitor future network traffic in the network to detect occurrence of certain user behavior activities of interest in the network. Continuing with the banking Trojan example above, when monitoring future network traffic, the detection server can determine that an intrusion of a banking Trojan has happened if the detection server 120 detects that user behavior activities A, B, C, D are included in the network traffic from and to network 110.”}
However, Kopp does not teach assigning a label to content of the user traffic, the label identifying the content as having any of a full match, a partial match, or no match to malicious content based on the inspection.
However, Thomas teaches assigning a label to content of the user traffic, the label identifying the content as having any of a full match, a partial match, or no match to malicious content based on the inspection; {Thomas [Para. 0065] “The security management facility 122 may include functionality to scan applications, files, and data for malicious code. Scanning may use any of a variety of techniques, including without limitation signatures, identities, classifiers, and other suitable scanning techniques. The scanning may include scanning some or all files on a periodic basis, scanning data transmitted to or from a device.” [Para. 0122] “Classifications for documents may be specified. Classifications may include any suitable classification for a document that will enable decisions based on the classification.” [Para. 0127] “For example, signatures and rules may be used to determine document matches.” [Para. 0128] “The recognition model may be used to classify documents, for example, in the case of a feature vector model, by determining a feature vector for a given document or portion of a document, and matching it (e.g., an exact match or within a threshold distance) to one of the feature vectors. The classification of the exact match or sufficiently similar document may be assigned to the unclassified document.” [Para. 0129] “The classification for a document may be stored, for example, in the asset repository 805.”} Thomas classifies documents using a recognition model that utilizes partial matching logic.
Thomas is analogous art because each of Kopp and Thomas pertains to monitoring network traffic to detect malware. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp to include Thomas’ teaching of the limitations of claim 1, listed above. Doing so would “provide protection from a variety of threats to a variety of compute instances in a variety of locations and network configurations” (Thomas, para. 0054).
Claim 4:
Regarding claim 4, Kopp and Thomas teach the elements of claim 1 as outlined above.
However, Kopp does not teach wherein the context entry includes any of a tenant identity, a user identity, a remote session identifier, and the label associated with the content.
However, Thomas teaches wherein the context entry includes any of a tenant identity, a user identity, a remote session identifier, and the label associated with the content. {Thomas [Para. 0185] “It will also be appreciated that events 1406 and/or event vectors 1410 may usefully be labelled in a variety of ways. While labeling with process identifiers is described above, this may also or instead include an identification of an entity associated with the event 1406 or event vector 1410. In this context, the entity may be any physical, logical, or conceptual entity useful for monitoring activity of compute instances 1402 as described herein. For example, the entity may include a user, a physical device, a virtualized machine,…} Thomas labels an event and the label may include an entity identification (e.g., user identification).
Thomas is analogous art because each of Kopp and Thomas pertains to monitoring network traffic to detect malware. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp to include Thomas’ teaching of the limitations of claim 4, listed above. Doing so would “provide protection from a variety of threats to a variety of compute instances in a variety of locations and network configurations” (Thomas, para. 0054).
Claim 5:
Regarding claim 5, Kopp and Thomas teach the elements of claim 1 as outlined above.
Kopp further teaches maintaining the context entry for a period of time. {Kopp [Para. 0009] “The server stores data representing network traffic within a plurality of time periods. Each of the time periods serves as a transaction such that data for each of a plurality of transactions comprising one or more user behavior activities is stored over time. Subsets of the network traffic in the transactions as traffic suspected of relating to the certain user behavior activities are identified. The server assigns the subsets of the network traffic in the transactions into one or more groups based on one or more types of certain user behavior activities. The server determines one or more detection rules for each of the one or more groups based on identifying, for each of the groups, a number of user behavior activities common to each of the subsets of the network traffic.” [Para. 0017] “The memory 126 may store malware intelligence data, such as policies or rules for network security and/or identifying malicious user behavior activities.”}
However, Kopp does not teach storing a context entry of the content in response to a partial match label being assigned to the content.
However, Thomas teaches wherein the steps further comprise: storing a context entry of the content in response to a partial match label being assigned to the content; {Thomas [Para. 0127] “For example, signatures and rules may be used to determine document matches.” [Para. 0128] “The recognition model may be used to classify documents, for example, in the case of a feature vector model, by determining a feature vector for a given document or portion of a document, and matching it (e.g., an exact match or within a threshold distance) to one of the feature vectors. The classification of the exact match or sufficiently similar document may be assigned to the unclassified document.” [Para. 0129] “The classification for a document may be stored, for example, in the asset repository 805.”} Thomas classifies documents using a recognition model that utilizes partial matching logic.
Thomas is analogous art because each of Kopp and Thomas pertains to monitoring network traffic to detect malware. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp to include Thomas’ teaching of storing a context entry of content in response to a partial match label being assigned to the content. Doing so would “provide protection from a variety of threats to a variety of compute instances in a variety of locations and network configurations” (Thomas, para. 0054).
Claim 7:
Regarding claim 7, Kopp and Thomas teach the elements of claim 5 as outlined above.
Kopp further teaches wherein the context entry is a first context entry associated with first content, {Kopp [Para. 0009] “Subsets of the network traffic in the transactions as traffic suspected of relating to the certain user behavior activities are identified. The server assigns the subsets of the network traffic in the transactions into one or more groups based on one or more types of certain user behavior activities. The server determines one or more detection rules for each of the one or more groups based on identifying, for each of the groups, a number of user behavior activities common to each of the subsets of the network traffic. The one or more detection rules are used to monitor future network traffic in the network to detect occurrence of the certain user behavior activities.”}
and the steps further comprise: monitoring and inspecting user traffic; {Kopp [Para. 0024] “The detection server retrieves historical network traffic stored in firewall 160 or obtains network traffic in real-time from firewall 160. At 206, the detection server is configured to identify a subset of the network traffic in the transactions as traffic suspected of relating to certain user behavior activities of interest. For example, the certain user behavior activities may include malware activities.”}
labeling second content as a partial match to malicious content; {Kopp [Para. 0026] “At 210, the detection server is configured to assign a subset of the network traffic in the transaction into one or more groups based on one or more types of certain user behavior activities of interest. For example, in the case of malware user behavior activities, the subset of the network traffic in the transaction may be assigned to one or more malware groups of click-fraud, ad-injector, information stealer, banking Trojan, exfiltration, or any other known or later-developed malware types.” [Para. 0037] “Rules extracted according to the techniques disclosed herein identify and classify malware, and provide description of malware behavior.”} Kopp detects and categorizes malicious user activities by malware group.
identifying a combination of the first context entry and the second content as malicious. {Kopp [Para. 0030] “At 216, the detection server is configured to use the one or more detection rules determined at 214 to monitor future network traffic in the network to detect occurrence of certain user behavior activities of interest in the network. Continuing with the banking Trojan example above, when monitoring future network traffic, the detection server can determine that an intrusion of a banking Trojan has happened if the detection server 120 detects that user behavior activities A, B, C, D are included in the network traffic from and to network 110.”}
However, Kopp does not teach labeling second content as a partial match to malicious content.
However, Thomas teaches labeling second content as a partial match to malicious content. {Thomas [Para. 0122] “Classifications for documents may be specified.” [Para. 0127] “For example, signatures and rules may be used to determine document matches.” [Para. 0128] “The recognition model may be used to classify documents, for example, in the case of a feature vector model, by determining a feature vector for a given document or portion of a document, and matching it (e.g., an exact match or within a threshold distance) to one of the feature vectors. The classification of the exact match or sufficiently similar document may be assigned to the unclassified document.”} Thomas classifies documents using a recognition model that utilizes partial matching logic.
Thomas is analogous art because each of Kopp and Thomas pertains to monitoring network traffic to detect malware. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp to include Thomas’ teaching of labeling second content as a partial match to malicious content. Doing so would “provide protection from a variety of threats to a variety of compute instances in a variety of locations and network configurations” (Thomas, para. 0054).
Claim 8:
Regarding claim 8, Kopp and Thomas teach the elements of claim 7 as outlined above.
Kopp further teaches wherein the steps further comprise: blocking the first content and the second content; {Kopp [Para. 0030] “After the system detects certain user behavior activities of interest in the network traffic, at 218, the detection server is configured to take security measures in response to the detection. For example, in the case of malignant user behavior activities, the detection server may configure a firewall to block the network traffic it deems malicious.”} Kopp blocks malicious network traffic.
However, Kopp does not teach sending an alert notifying detection of malware.
However, Thomas teaches sending an alert notifying detection of malware. {Thomas [Para. 0090] “When a threat or other policy violation is detected by the security management facility 122, the remedial action facility 128 may be used to remediate the threat. Remedial action may take a variety of forms, non-limiting examples including… sending a warning to a user or administrator,… or the like to remediate the threat.”} The remedial action facility 128 sends a warning to a user or an administrator when a threat is detected by the security management facility 122.
Thomas is analogous art because each of Kopp and Thomas pertains to monitoring network traffic to detect malware. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp to include Thomas’ teaching of sending an alert notifying detection of malware. Doing so would “provide protection from a variety of threats to a variety of compute instances in a variety of locations and network configurations” (Thomas, para. 0054).
Claim 9:
Regarding claim 9, Kopp and Thomas teach the elements of claim 7 as outlined above.
Kopp further teaches wherein the second content is monitored during a different session than the first content. {Kopp [Para. 0009] “The server monitors network traffic relating to user behavior activities in the network. The server stores data representing network traffic within a plurality of time periods. Each of the time periods serves as a transaction such that data for each of a plurality of transactions comprising one or more user behavior activities is stored over time.” [Para. 0024] “At 204, the detection server stores the network traffic monitored within a period of time as a transaction. For example, the period of time can be 12 hours, one day, two days, or any other time period.”} Kopp monitors user behavior activities occurring during different time periods.
Claim 10:
Regarding claim 10, Kopp and Thomas teach the elements of claim 7 as outlined above.
Kopp further teaches wherein any number of context entries are combined to identify malicious content. {Kopp [Para. 0011] “Although some activities may be suspicious, they are still not conclusive evidence of malware when presented alone. Therefore, the techniques presented herein determine how to identify and combine such weak Indicators of Compromise (IoCs) to produce conclusive evidence of malware activity in a network.”} Also see paras. 0030 and 0009.
Claim 21:
Regarding claim 21, Kopp and Thomas teach the elements of claim 1 as outlined above.
Kopp further teaches wherein the steps further comprise: comparing the context entry associated with the content to one or more previously stored context entries; {Kopp [Para. 0009] “The server stores data representing network traffic within a plurality of time periods. Subsets of the network traffic in the transactions as traffic suspected of relating to the certain user behavior activities are identified. The server assigns the subsets of the network traffic in the transactions into one or more groups based on one or more types of certain user behavior activities. The server determines one or more detection rules for each of the one or more groups based on identifying, for each of the groups, a number of user behavior activities common to each of the subsets of the network traffic. The one or more detection rules are used to monitor future network traffic in the network to detect occurrence of the certain user behavior activities.”}
identifying the content as malicious based on a combination of the context entry and at least one of the previously stored context entries. {Kopp [Para. 0011] “Although some activities may be suspicious, they are still not conclusive evidence of malware when presented alone. Therefore, the techniques presented herein determine how to identify and combine such weak Indicators of Compromise (IoCs) to produce conclusive evidence of malware activity in a network.”}
Claim 26:
Regarding claim 26, Kopp and Thomas teach the limitations of claim 1 as stated.
Kopp further teaches wherein the context entry is stored in a context database after a session terminates and, {Kopp [Para. 0009] “The server stores data representing network traffic within a plurality of time periods. Each of the time periods serves as a transaction such that data for each of a plurality of transactions comprising one or more user behavior activities is stored over time. Subsets of the network traffic in the transactions as traffic suspected of relating to the certain user behavior activities are identified. The server assigns the subsets of the network traffic in the transactions into one or more groups based on one or more types of certain user behavior activities. The server determines one or more detection rules for each of the one or more groups based on identifying, for each of the groups, a number of user behavior activities common to each of the subsets of the network traffic.” [Para. 0017] “The memory 126 may store malware intelligence data, such as policies or rules for network security and/or identifying malicious user behavior activities.”}
during a subsequent session, is matched with a second content associated with the user to identify a full match to a malicious signature across distinct sessions. {Kopp [Para. 0030] “At 216, the detection server is configured to use the one or more detection rules determined at 214 to monitor future network traffic in the network to detect occurrence of certain user behavior activities of interest in the network. Continuing with the banking Trojan example above, when monitoring future network traffic, the detection server can determine that an intrusion of a banking Trojan has happened if the detection server 120 detects that user behavior activities A, B, C, D are included in the network traffic from and to network 110.”} Kopp uses behavior signatures to identify malware based on its actions.
Claims 11, 14, 17-20 and 23:
Regarding claims 11, 14, 17-20 and 23, the claims are directed to a non-transitory computer-readable medium comprising instructions that, when executed, cause a processor to perform the limitations recited by claims 1, 4, 7-10 and 21. Therefore, the rejection applied to claims 1, 4, 7-10 and 21 also applies to claims 11, 14, 17-20 and 23. Claims 11, 14, 17-20 and 23 are rejected under the same rationale as claims 1, 4, 7-10 and 21.
Claim 11 further recites a non-transitory computer-readable medium comprising instructions that, when executed, cause one or more processors to perform steps of claim 1.
{Kopp [Para. 0040] “A non-transitory computer-readable storage media encoded with software comprising computer executable instructions which, when executed by a processor, cause the processor to: monitor network traffic relating to user behavior activities in the network.”}
8. Claims 6 and 16 are rejected under 35 U.S.C. § 103 as being unpatentable over Kopp and Thomas as applied to claims 1, 5 and 11, and in further view of Kolluru (US 2023/0224314 A1), hereafter Kolluru.
Regarding claim 6, Kopp and Thomas teach the elements of claim 5 as outlined above.
However, Kopp does not teach wherein the context entry is associated with a specific user, and wherein the context entry is maintained after a logout of the user.
However, Thomas teaches wherein the context entry is associated with a specific user. {Thomas [0185] “It will also be appreciated that events 1406 and/or event vectors 1410 may usefully be labelled in a variety of ways. While labeling with process identifiers is described above, this may also or instead include an identification of an entity associated with the event 1406 or event vector 1410. For example, the entity may include a user, a physical device,…” [Para. 0186] “In one aspect, the event vectors 1410 may be organized around entities. Thus for example, a request for access to a network resource may be an event 1406. When such a request is initiated by a user, an event vector 1410 for that user may be created and reported along with other temporally adjacent or otherwise related events 1406 associated with that user.”}
However, Kopp and Thomas do not explicitly teach wherein the context entry is maintained after a logout of the user.
However, Kolluru teaches wherein the context entry is maintained after a logout of the user. {Kolluru [Para. 0003] “The method begins with intercepting API traffic between a client and a server, wherein the API traffic is associated with multiple user sessions. The system then identifies a first user session identifier associated with one of the multiple user sessions, wherein the first user session is associated with a subset of the intercepted API traffic. Correlations are detected between a subset of the API traffic associated with the first user session, and correlation data based on the detected correlation is stored. The system then compares the correlation data to subsequently intercepted API traffic associated with a second user session, and determines whether the intercepted API traffic includes an anomaly based on the comparison with the correlation data.”}
Kolluru is analogous art because each of Kopp, Thomas and Kolluru pertains to monitoring network traffic to detect anomalies. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp and Thomas to include Kolluru’s teaching of maintaining a context entry after a logout of the user. Doing so would provide “an improved system for detecting security lapses in API system” (Kolluru, para. 0001).
Claim 16:
Regarding claim 16, the claim is directed to a non-transitory computer-readable medium comprising instructions that, when executed, cause a processor to perform the limitations recited by claim 6. Therefore, the rejection applied to claim 6 also applies to claim 16. Claim 16 is rejected under the same rationale as claim 6.
9. Claim 22 is rejected under 35 U.S.C. § 103 as being unpatentable over Kopp and Thomas as applied to claim 1, and in further view of Craig et al. (US 2019/0228151 A1), hereafter Craig.
Regarding claim 22, Kopp and Thomas teach the elements of claim 1 as outlined above.
However, Kopp does not teach in response to a partial match, updating the context entry to include a signature identifier and a signature offset associated with the matched content.
However, Thomas teaches in response to a partial match, updating the context entry to include a signature identifier and a signature offset associated with the matched content. {Thomas [Para. 0122] “Classifications for documents may be specified.” [Para. 0128] “The recognition model may be used to classify documents, for example, in the case of a feature vector model, by determining a feature vector for a given document or portion of a document, and matching it (e.g., an exact match or within a threshold distance) to one of the feature vectors. The classification of the exact match or sufficiently similar document may be assigned to the unclassified document.” [Para. 0129] “The classification for a document may be stored, for example, in the asset repository 805.”} Thomas classifies documents using a recognition model that utilizes partial matching logic.
Thomas is analogous art because each of Kopp and Thomas pertains to monitoring network traffic to detect malware. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp to include Thomas’ teaching of the limitations of claim 22, listed above. Doing so would “provide protection from a variety of threats to a variety of compute instances in a variety of locations and network configurations” (Thomas, para. 0054).
However, Kopp and Thomas do not explicitly teach updating the context entry to include a signature identifier and a signature offset associated with the matched content.
However, Craig teaches updating the context entry to include a signature identifier and a signature offset associated with the matched content. {Craig [Para. 0028] “If the contiguous string block is similar but not identical to an existing non-clean entry, the contiguous string block and the existing non-clean entry may be merged using wildcarding techniques. One or more characters may be replaced with a special wildcard character not otherwise found in strings, indicating that this character in the string matches any character when comparing strings.” [Para. 0029] “The wildcarded entry in the contiguous string block database may then be used to create a signature for a malware family. The wildcarded contiguous string block may then be added to the contiguous string block database, labelled by a block identifier.” [Para. 0030] “The block identifier, wildcarded entry, and a block order together form a signature for a malware family. The block order refers to the sequence position of a contiguous string block in a signature made up of multiple contiguous string blocks.”}
Craig is analogous art because each of Kopp, Thomas and Craig pertains to malware detection. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp and Thomas to include Craig’s teaching of the limitations of claim 22, listed above. Doing so would “allow malware detection to be proactive in detecting new variants of malware…and not being prohibitively false positive prone” (Craig, para. 0022).
9. Claim 25 is rejected under 35 U.S.C. § 103 as being unpatentable over Kopp and Thomas as applied to claim 1, and further in view of Diao et al. (US 8,935,788 B1), hereafter Diao.
Regarding claim 25, Kopp and Thomas teach the limitations of claim 1 as stated.
Kopp further teaches wherein a subsequent inspection step comprises retrieving the context entry and determining that a combination of a previously stored content offset and a current inspection result satisfies a condition indicating malicious content. {Kopp [Para. 0030] “At 216, the detection server is configured to use the one or more detection rules determined at 214 to monitor future network traffic in the network to detect occurrence of certain user behavior activities of interest in the network. Continuing with the banking Trojan example above, when monitoring future network traffic, the detection server can determine that an intrusion of a banking Trojan has happened if the detection server 120 detects that user behavior activities A, B, C, D are included in the network traffic from and to network 110.”}
However, Kopp does not teach wherein the context entry comprises a record of at least one partial match including a content offset and a timestamp, and wherein a subsequent inspection step comprises retrieving the context entry and determining that a combination of a previously stored content offset and a current inspection result satisfies a condition indicating malicious content.
However, Thomas teaches wherein the context entry comprises a record of at least one partial match including a content offset and a timestamp. {Thomas [Para. 0127] “For example, signatures and rules may be used to determine document matches.” [Para. 0128] “The recognition model may be used to classify documents, for example, in the case of a feature vector model, by determining a feature vector for a given document or portion of a document, and matching it (e.g., an exact match or within a threshold distance) to one of the feature vectors. The classification of the exact match or sufficiently similar document may be assigned to the unclassified document.” [Para. 0129] “The classification for a document may be stored, for example, in the asset repository 805.” [Para. 0187] “The event vectors 1410 may be received by the threat management facility 1412 and stored as an event stream 1414 in a data repository 1416, which may be any data store, memory, file or the like suitable for storing the event vectors 1410. The event vectors 1410 may be time stamped or otherwise labeled by the threat management facility 1412 to record chronology.”} Thomas classifies documents using a recognition model that utilizes partial matching logic.
Thomas is analogous art because each of Kopp and Thomas pertains to monitoring network traffic to detect malware. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp to include Thomas’ teaching of the limitations of claim 25, listed above. Doing so would “provide protection from a variety of threats to a variety of compute instances in a variety of locations and network configurations” (Thomas, para. 0054).
However, Thomas also does not teach wherein the context entry comprises a record of at least one partial match including a content offset and a timestamp, and wherein a subsequent inspection step comprises retrieving the context entry and determining that a combination of a previously stored content offset and a current inspection result satisfies a condition indicating malicious content.
However, Diao teaches wherein the context entry comprises a record of at least one partial match including a content offset and a timestamp. {Diao [Col. 4, lines 19-38] “Virus detection system 200 includes server application 210 and client applications 250 and 260. Server application 210 is coupled to virus database 201. Generally, virus database 201 will store virus patterns extracted from all known virus samples. Virus patterns generally include one or more virus signatures. Each virus signature includes an offset parameter describing where the virus signature is found within a file, a size parameter describing the length of the virus signature in a number of bytes, and a virus signature parameter describing the string of digital bits that represents a certain part of the virus. When all known virus patterns are gathered together, they are stored within a virus pattern file ("VPF").”}
Diao is analogous art because each of Kopp, Thomas and Diao pertains to malware detection. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kopp and Thomas to include Diao’s teaching of the limitations of claim 25, listed above. Doing so “improves the performance of a virus detection application by minimizing the number of slow pattern matching operations performed when scanning an unknown file for viruses” (Diao, Col. 4, lines 43-46).
Response to Arguments
10. Applicant's arguments, filed on December 17, 2025, with respect to the rejection(s) of independent claims 1 and 11 have been fully considered, but are moot in view of the new grounds of rejection. The amended claims do not overcome the new ground of rejection made in view of newly found prior art references.
11. Applicant’s arguments, filed December 17, 2025, with respect to the rejection(s) of dependent claims 6 and 16 have been considered but are moot in view of the new grounds of rejection. The claims do not overcome the new ground of rejection made in view of newly found prior art references.
12. Applicant’s arguments, filed December 17, 2025, with respect to the rejection(s) of dependent claims 7-10, 17-21 and 23 have been considered but are moot in view of the new grounds of rejection. The claims do not overcome the new ground of rejection made in view of newly found prior art references.
13. Applicant’s arguments, filed December 17, 2025, with respect to the rejection(s) of dependent claim 22 have been considered but are moot in view of the new grounds of rejection. The claim does not overcome the new ground of rejection made in view of newly found prior art references.
Conclusion
14. Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIN QING ZHENG whose telephone number is (703) 756-1535. The examiner can normally be reached on M-F, 10:00 am - 6:00 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Philip J. Chea can be reached on 571-272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BIN QING ZHENG/
Examiner, Art Unit 2499
/PHILIP J CHEA/Supervisory Patent Examiner, Art Unit 2499