DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 07 January 2026 has been entered.
Claims 1-4, 6-16, and 18-21 are pending.
This Action is Non-Final.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 7, 12, 13, and 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Nucci et al. (US 8418249) in view of Chesla et al. (US 20040250124) and further in view of Rahman et al. (US 20250088521).
As per claims 1, 6, 7, and 16, Nucci et al. discloses a computer-implemented method comprising:
receiving, by a computing system, a set of security signatures for analysis (see column 14 lines 57-62 the signature library);
identifying, by the computing system, computing traffic data within which the set of security signatures have appeared; extracting, by the computing system and from the computing traffic data, a set of features describing the computing traffic data (see column 14 line 63 through column 15 line 3, where the system uses the signature library and traffic to create a behavior model, i.e. a set of features);
generating, by the computing system, a new security signature based at least in part on a correlation between the set of features and the set of security signatures (see column 15 lines 19-31).
Nucci et al. further discloses a system comprising: a processor; and a memory having stored thereon instructions that are executable by the processor to cause the system to perform operations (see column 23 lines 28-55) comprising: intercepting network traffic; generating a plurality of signatures from the network traffic (see column 14 lines 57-62 where traffic must be intercepted, i.e. analyzed, and used to generate the signature library for it to be used);
recording features of the network traffic associated with each of the plurality of signatures; and generating a new signature based on the recorded features of the network traffic (see column 14 line 63 through column 15 line 31);
detecting, by the computing system, a security threat underlying the new security signature; and blocking, by the computing system, computing traffic data associated with the security threat (see column 7 lines 23-39 where the rules are used in firewalls and IDS systems which will detect and block traffic from a threat).
Nucci et al. fails to explicitly disclose, but Chesla et al. teaches grouping similar security signatures of the set of security signatures and classifying the set of features with one or more groups of similar security signatures, wherein grouping similar security signatures comprises: identifying candidate signatures for grouping based on the candidate signatures sharing a signature type; and determining a similarity threshold for grouping the candidate signatures based on the signature type, wherein determining the similarity threshold for grouping the candidate signatures based on the signature type comprises determining the similarity threshold for grouping the candidate signatures based at least in part on a complexity of an attack associated with the signature type, wherein an increased complexity correlates with a higher similarity threshold where the signatures are used to filter/block attack traffic (see paragraphs [0264]-[0290] where the signatures are grouped by their types based on similarities and complexity of attack).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to include the signature grouping of Chesla et al. in the Nucci et al. system.
Motivation, as recognized by one of ordinary skill in the art, to do so would have been to allow the system to only test against particular groups thereby saving resources.
While the modified Nucci et al. and Chesla et al. system discloses grouping of signatures, there lacks an explicit teaching of classifying, using at least one of a statistical or machine learning analysis, the set of features to one or more groups of similar security signatures.
However, Rahman et al. teaches receiving, by a computing system, a set of security signatures for analysis (see paragraph [0024] numeral 304); identifying, by the computing system, computing traffic data within which the set of security signatures have appeared (see paragraphs [0023]-[0024] where the generated signatures are used to detect subsequent incidents); extracting, by the computing system and from the computing traffic data, a set of features describing the computing traffic data (see paragraph [0025]); grouping, by the computing system, similar security signatures within the set of security signatures (see paragraphs [0021]-[0024] the signature clusters); classifying, by the computing system using at least one of a statistical or machine learning analysis, the set of features to one or more groups of similar security signatures; and generating, by the computing system, a new security signature based at least in part on the one or more groups of similar security signatures (see paragraph [0025]).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to include the signature grouping and classifying of Rahman et al. in the modified Nucci et al. and Chesla et al. system.
Motivation, as recognized by one of ordinary skill in the art, to do so would have been to refine the signatures to enable detection of morphing attacks.
As per claim 2, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses the new security signature applies to a set of computing traffic scenarios that do not all cause a security system to produce any one of the set of security signatures (see Nucci et al. column 15 lines 4-18).
As per claim 3, the modified Nucci et al., Chesla et al., and Rahman et al. system as applied fails to explicitly disclose the new security signature, when applied by a security system, causes the security system to detect a security threat underlying at least two of the security signatures within the set of security signatures.
However, Chesla et al. and Rahman et al. further teach that the signatures cause the security system to detect a security threat underlying at least two of the security signatures within the set of security signatures (see Chesla et al. paragraph [0281] and Rahman et al. paragraph [0025]).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to detect a security threat underlying at least two of the security signatures within the set of security signatures.
Motivation, as recognized by one of ordinary skill in the art, to do so would have been to allow for detection of more complex attacks.
As per claim 4, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses adding, by the computing system, the new security signature to a computing security system configured to detect computing threats (see Nucci et al. column 20 line 58 through column 21 line 60).
As per claim 12, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses receiving the set of security signatures for analysis comprises identifying the set of security signatures from a security system analyzing the computing traffic data (see Nucci et al. column 14 lines 57-62).
As per claim 13, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses receiving the set of security signatures for analysis comprises generating the set of security signatures from at least one malicious program sample (see Nucci et al. column 10 lines 10-37).
As per claim 15, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses the computing traffic data comprises: data from observed computing traffic; and data from simulated computing traffic (see Nucci et al. column 14 line 63 through column 15 line 3).
As per claim 17, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses deploying the new signature to a security system for comparison against future network traffic (see Nucci et al. column 15 lines 19-31 where adding the signature to the signature library deploys the new signature).
As per claim 18, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses the new signature, when applied by a security system, causes the security system to detect a type of attack from which signatures within at least one of the one or more groupings of the plurality of signatures were derived (see Nucci et al. column 14 line 63 through column 15 line 31 and Chesla et al. paragraphs [0264]-[0290]).
As per claim 19, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses the security system detects the type of attack using the new signature when the type of attack does not include any of the plurality of signatures (see Nucci et al. column 14 line 63 through column 15 line 31).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over the modified Nucci et al., Chesla et al., and Rahman et al. system as applied to claim 1 above, in view of Baracaldo-Angel et al. (US 20200019821).
As per claim 8, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses analyzing the computing traffic data for an attack; and excluding from correlation portions of the computing traffic data (see Nucci et al. column 15 lines 4-18), but fails to explicitly disclose the detection of a false-flag attack.
However, Baracaldo-Angel et al. teaches detecting and protecting against false-flag attacks (see paragraphs [0118]-[0120] where an attack to invoke a false positive is considered a false-flag attack).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to detect and protect against false-flag attacks in the modified Nucci et al., Chesla et al., and Rahman et al. system.
Motivation, as recognized by one of ordinary skill in the art, to do so would have been to strengthen the system to protect against additional attack types.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over the modified Nucci et al., Chesla et al., Rahman et al., and Baracaldo-Angel et al. system as applied to claim 8 above, and further in view of Johnson et al. (US 20180332060).
As per claim 9, the modified Nucci et al., Chesla et al., Rahman et al., and Baracaldo-Angel et al. system discloses detecting false-flag attacks, but fails to explicitly disclose determining that an operation within the computing traffic data comprises at least one command-and-control callback to an untrusted target and that a related operation comprises at least one command-and-control callback to a trusted target.
However, Johnson et al. teaches determining that an operation within the computing traffic data comprises at least one command-and-control callback to an untrusted target and that a related operation comprises at least one command-and-control callback to a trusted target (see paragraphs [0026]-[0030] where the system determines the addresses of the command-and-control callback to determine maliciousness).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to check command-and-control callbacks in the modified Nucci et al., Chesla et al., Rahman et al., and Baracaldo-Angel et al. system.
Motivation, as recognized by one of ordinary skill in the art, to do so would have been to strengthen the system to protect against additional attack types.
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over the modified Nucci et al., Chesla et al., and Rahman et al. system as applied to claim 1 above, in view of Chen Kaidi (US 20210377288).
As per claims 10 and 11, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses extracting features, but fails to explicitly disclose extracting a first subset of the set of features; identifying a data source correlating the first subset of the set of features with a second subset of the set of features; and extracting the second subset of the set of features from the data source, wherein identifying the data source correlating the first subset of the set of features with the second subset of the set of features is in response to determining that the first subset of the set of features fails to reach a predetermined threshold for correlating with the set of security signatures to generate the new security signature.
However, Chen Kaidi teaches extracting a first subset of the set of features; identifying a data source correlating the first subset of the set of features with a second subset of the set of features; and extracting the second subset of the set of features from the data source, wherein identifying the data source correlating the first subset of the set of features with the second subset of the set of features is in response to determining that the first subset of the set of features fails to reach a predetermined threshold for correlating with the set of security signatures to generate the new security signature (see paragraphs [0058]-[0069]).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to include the feature extraction and correlation of Chen Kaidi in the modified Nucci et al., Chesla et al., and Rahman et al. system.
Motivation to do so would have been to identify malicious network traffic from other sources, such as other sources that utilize the same or similar computing attack (see Chen Kaidi paragraph [0069]).
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over the modified Nucci et al., Chesla et al., and Rahman et al. system as applied to claim 13 above, in view of Chen et al. (US 20230138013).
As per claim 14, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses malicious program samples, but fails to explicitly disclose that the sample is customized to attack a predefined target.
However, Chen et al. teaches a malware sample is customized to attack a predefined target (see paragraph [0029]).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to include customized malware samples in the modified Nucci et al., Chesla et al., and Rahman et al. system.
Motivation to do so would have been to allow for dynamic behavior based malware detection (see Chen et al. paragraph [0029]).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over the modified Nucci et al., Chesla et al., and Rahman et al. system as applied to claim 15 above, in view of Veteikis et al. (US 20130347103).
As per claim 21, the modified Nucci et al., Chesla et al., and Rahman et al. system discloses classifying features, but fails to disclose generating the simulated computing traffic data using a fuzzing process to produce variants of the computing traffic data.
However, Veteikis et al. teaches generating the simulated computing traffic data using a fuzzing process to produce variants of the computing traffic data (see paragraph [0281]).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to include the fuzzing of traffic for simulating attacks in the modified Nucci et al., Chesla et al., and Rahman et al. system.
Motivation to do so would have been to test ranges with the system (see Veteikis et al. paragraph [0281]).
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Nucci et al. in view of Ball, SR. (US 20240086539) and further in view of Tagore (US 9614773).
As per claim 20, Nucci et al. discloses a computer-implemented method comprising: scanning, by a computing system, system activity for any of a plurality of malicious signatures; detecting, by the computing system, at least one malicious signature of the plurality of malicious signatures within the system activity; analyzing, by the computing system, the system activity for one or more properties that appear in conjunction with the at least one malicious signature but that do not appear as often in absence of the at least one malicious signature; and generating, by the computing system, a new malicious signature based on the one or more properties that appear in conjunction with the at least one malicious signature (see column 14 line 57 through column 15 line 31 as applied above).
Nucci et al. discloses deploying the signature, but fails to explicitly disclose allowing, by the computing system, additional system activity originating from a source of the at least one malicious signature and testing the new malicious signature to determine a detection rate of malicious activity within the additional system activity by the new malicious signature; and deploying, by the computing system, the new malicious signature for use by a security system based at least in part on determining that the detection rate by the new malicious signature exceeds a predetermined threshold.
However, Ball, SR. teaches allowing, by the computing system, additional system activity originating from a source of the at least one malicious signature and testing the new malicious signature to determine a detection rate of malicious activity within the additional system activity by the new malicious signature; and deploying, by the computing system, the new malicious signature for use by a security system based at least in part on determining that the detection rate by the new malicious signature exceeds a predetermined threshold (see paragraph [0100]).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to test the new signatures of Nucci et al. prior to deployment.
Motivation, as recognized by one of ordinary skill in the art, to do so would have been to ensure it properly detects the attack.
The modified Nucci et al. and Ball, SR. system discloses generally tracking system activity, but fails to explicitly disclose that the system activity is of a plurality of sources, where the detection and analysis is of a source of the plurality.
However, Tagore teaches scanning, by a computing system, system activity of a plurality of sources for any of a plurality of malicious signatures; detecting, by the computing system, at least one malicious signature of the plurality of malicious signatures within the system activity of a source of the plurality of sources; analyzing, by the computing system, the system activity of the source for one or more properties that appear in conjunction with the at least one malicious signature (see column 6 lines 33-61 and column 8 lines 4-32).
At a time before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to include the teachings of Tagore in the modified Nucci et al. and Ball, SR. system.
Motivation to do so would have been to detect anomalies/attacks from specific applications (see Tagore column 6 lines 33-61 and column 8 lines 4-32).
Response to Arguments
Applicant's arguments filed 24 September 2025 have been fully considered but they are not persuasive. Applicant argues that the Examiner failed to address each argument presented and that Nucci in view of Chesla fails to disclose the claims as amended.
With respect to Applicant’s arguments regarding the alleged failure to address each argument, the Examiner respectfully disagrees and maintains that the Final Rejection addressed this argument. More specifically, the Examiner's response explains why the combination is proper and why the grouping of signatures is known and obvious: the Chesla reference was relied upon only for the teaching of the known concept of grouping of signatures and does not require the details of the various layers as Applicant contends. Furthermore, nothing in Nucci prevents the use of the claimed grouping, as Nucci already uses behavioral grouping and is therefore open to modification to include other types of groupings. Accordingly, the Examiner has addressed each and every argument, and the Action was complete and proper.
Applicant’s remaining arguments are moot in view of the new grounds of rejection as put forth above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: the remaining references put forth on the PTO-892 form are directed towards malware detection in traffic.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J PYZOCHA whose telephone number is (571)272-3875. The examiner can normally be reached Monday-Thursday 7:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hadi Armouche can be reached at (571) 270-3618. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Michael Pyzocha/ Primary Examiner, Art Unit 2409