Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This is in reply to the application filed on 03/14/2024, with a preliminary amendment filed on 03/14/2024, in which claims 1-10, 14-17, 19-23, and 27 are pending. Claims 1, 14, and 27 are independent.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/14/2024 has been reviewed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings filed on 03/14/2024 are accepted by the Examiner.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because it contains legal phraseology (i.e., “A cyber security system, the cyber security system comprising a processing circuitry configured to: obtain: (a) an attack-vector scenario, the attack-vector scenario comprising a sequence of cyber tactics…”). Correction is required. See MPEP § 608.01(b).
Appropriate correction is required.
Claim Objections
Claims 1-2, 7, 10, 14-15, 20, 23 and 27 are objected to because of the following informalities:
Claim 1 (Lines 5-6), claim 14 (Lines 4-5) and claim 27 (Lines 6-7) recite “each of the cyber tactics being associated with one or more respective cyber techniques which are possible manifestations of the corresponding cyber tactic in the context of the attack-vector scenario”. The term “possible” creates ambiguity. The term “context” lacks antecedent basis. Please clarify.
Claim 1 (Line 7), claim 14 (Line 6) and claim 27 (Line 8) recite “at least one sequence of machine language instructions that can execute on one or more processors”. With the term “can”, it is not clear whether the one or more processors actually execute the instructions. Please clarify.
Claims 2 and 15 recite “if any” at the end of the limitation. The metes and bounds of the limitation “if any” are not clear. Please clarify.
Claims 7 and 20 recite “a plurality of event types that can occur on the one or more entities”. With the term “can”, it is not clear whether the events actually occur. Please clarify.
Claims 10 and 23 recite “wherein events associated with the next step cyber tactic can occur”. With the term “can”, it is not clear whether the events actually occur. Please clarify.
Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper time-wise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-10, 14-17, 19-23 and 27 are provisionally rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-19 of copending Application No. 18/691,925.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Copending Application (18/691,925):

Claim 1. A cyber security system, the cyber security system comprising a processing circuitry configured to:
obtain: (a) an attack-vector scenario, the attack-vector scenario comprising a sequence of cyber tactics, each of the cyber tactics being associated with one or more respective cyber techniques which are possible manifestations of the corresponding cyber tactic in the context of the attack-vector scenario, each cyber technique is associated with a corresponding event type of a plurality of event types that can occur on one or more entities of an organizational network, wherein occurrence of an actual event of the respective event type indicates implementation of the respective cyber technique, and (b) information about actual events that occurred on the one or more entities of the organizational network, wherein each of the actual events is associated with a respective actual event type;
identify, based on the information, the cyber techniques that occurred on the organizational network by matching the actual event types with the event types associated with the cyber techniques, giving rise to implemented cyber techniques; and
alert a user of the cyber security system of a potential cyber-attack upon determining that alert requirements being met, the alert requirements including that: (a) each of the cyber tactics forming the attack vector scenario, is associated with at least one of the implemented cyber techniques, and (b) each pair of implemented cyber techniques associated with a pair of subsequent cyber tactics is associated with a respective common property.

Current Application (18/691,920):

Claim 1. A cyber security system, the cyber security system comprising a processing circuitry configured to:
obtain: (a) an attack-vector scenario, the attack-vector scenario comprising a sequence of cyber tactics, each of the cyber tactics being associated with one or more respective cyber techniques which are possible manifestations of the corresponding cyber tactic in the context of the attack-vector scenario, at least one cyber technique is associated with at least one sequence of machine language instructions that can execute on one or more processors of at least one entity of a plurality of entities of an organizational network, wherein execution of the at least one sequence of machine language instructions indicates implementation of the respective cyber technique, and (b) information about actual sequences of machine language instructions that executed on the one or more processors;
identify, based on the information, the cyber techniques that occurred on the organizational network by matching the actual sequences of machine language instructions with the at least one sequence of machine language instructions associated with the cyber techniques, giving rise to implemented cyber techniques; and
alert a user of the cyber security system of a potential cyber-attack upon determining that each of the cyber tactics forming the attack vector scenario, is associated with at least one of the implemented cyber techniques.

Claim 7. The cyber security system of claim 1, wherein: (a) at least one cyber technique is associated with a corresponding event type of a plurality of event types that can occur on the one or more entities of the organizational network, wherein occurrence of an actual event of the respective event type indicates implementation of the respective cyber technique, (b) the information further includes actual events that occurred on the one or more entities of the organizational network, wherein each of the actual events is associated with a respective actual event type, and (c) the identification of the implemented cyber techniques is further based on matching the actual event types with the event types associated with the cyber techniques, giving rise to implemented cyber techniques.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claims 1-10, 14-17, 19-23 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Ben Ezra et al. (US 2018/0069876 A1) in view of Sebastien Meriot (US 2020/0007575 A1).
Regarding Claims 1, 14, and 27, Ben Ezra discloses
obtain: (a) an attack-vector scenario, the attack-vector scenario comprising a sequence of cyber tactics, each of the cyber tactics being associated with one or more respective cyber techniques which are possible manifestations of the corresponding cyber tactic in the context of the attack-vector scenario ([0009], “a DDoS burst attack is a sequence of high traffic volumes communicated in bursts. A sequence of actions would include intermittent bursts of attack traffic and then pauses. As another example, a sequence of actions can begin with information gathering, continue with lateral movement, and end in data exfiltration”), at least one cyber technique is associated with at least one sequence of events that can execute on one or more processors of at least one entity of a plurality of entities of an organizational network, wherein execution of the at least one sequence of events indicates implementation of the respective cyber technique, and (b) information about actual sequences of events that executed on the one or more processors ([0053], “at S320, every pair of event sequences in the list of sequences are compared to each other to identify patterns having similar behavior. In an embodiment, S320 includes listing, for each sequence, its fixed and step features (Ffixed and Fstep); comparing each fixed feature (Ffixed) of one sequence to Fstep of another sequence; identifying patterns of similar steps (based on the Ffixed and Fstep)”);
identify, based on the information, the cyber techniques that occurred on the organizational network by matching the actual sequences of events with the at least one sequence of events associated with the cyber techniques, giving rise to implemented cyber techniques ([0059-0061], “At S520, the new event is matched to any event sequence created during the learning process in order to update any such sequence. In an embodiment, S520 can be performed using the sequencing process described in FIG. 4”, “At S530, any updated sequences of events, new sequences of events, or a combination thereof, created in response to the matching performed at S520 is compared to the identified attack patterns”); and
alert a user of the cyber security system of a potential cyber-attack upon determining that each of the cyber tactics forming the attack vector scenario, is associated with at least one of the implemented cyber techniques ([0062], “At S550, it is checked if the risk score is above a predefined threshold. If so, execution continues with S560, at which an alert is generated”),
Ben Ezra does not explicitly teach, but Meriot teaches,
sequences of events are sequences of machine language instructions ([0028], “locating a predetermined machine language instruction sequence in the malware”);
Ben Ezra and Meriot are analogous art as they are in the same field of endeavor of information security. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Meriot with the disclosure of Ben Ezra. The motivation/suggestion would have been for defending an infrastructure against a distributed denial of service (DDoS) attack (Meriot, [0030]).
Regarding Claims 2 and 15, the combined teaching of Ben Ezra and Meriot teaches
wherein alerting the user of the potential cyber-attack is upon further determining that a causality connection exists between one or more of the implemented cyber techniques associated with each given cyber tactic of the cyber tactics and one or more of the implemented cyber techniques associated with a subsequent cyber tactic subsequent to the given cyber tactic, if any (Ben Ezra, [0010], “analyze connections of events across different devices”, [0030], “an attack prediction system 150 is also communicatively connected to the network 120 and configured to perform the various disclosed embodiments for predictive cyber-attack detection”).
Regarding Claims 3 and 16, the combined teaching of Ben Ezra and Meriot teaches
wherein the information about the actual sequences of machine language instructions is obtained by a trapping mechanism, capable of retrieving machine language instructions from an instruction unit, storing fetched machine language instructions for at least one processor of the processors (Meriot, [0098], “the C&C data collector 130 locates a predetermined machine language instruction sequence in the malware. The predetermined machine language instruction sequence may comprise one or more instructions, at least one of these instructions being a rarely used instruction”).
Regarding Claims 4 and 17, the combined teaching of Ben Ezra and Meriot teaches
wherein: (a) at least one machine language instruction of the sequence of machine language instructions comprises one or more first opcodes, and (b) the identification of the implemented cyber techniques is based on matching the opcodes comprised in the actual sequences of machine language instructions with second opcodes comprised in the at least one sequence of machine language instructions (Meriot, [0092], “the ciphering key is automatically recuperated by searching for a predetermined operation code sequence that is indicative of a ciphering routine used in the malware. This opcode sequence may contain other operation codes besides PUSH and MOV. As a non-limiting example, FIG. 7 is an illustration of a routine used by the malware MIRAI to encrypt character chains. On FIG. 7, a routine 500 defines a value 510 labelled “table_key”. The table_key 510 is placed in variables k1, k2, k3 and k4 using SHIFT operation codes. A SHIFT 24 operation code 520 actually shifts the table_key 510 by 24 bits into variable k4. While SHIFT operations by 8 or 16 bits are not uncommon, the SHIFT 24 operation code 520 is an infrequently (or rarely) used operation code and provides a clue to the reverse engineering process of the location of the ciphering key. Otherwise stated, the SHIFT 24 operation code 520 is part of a signature of the malware MIRAI. Previous experience acquired from reverse engineering applied to other malwares may be put to use to identify specific operation codes as potential markers for corresponding malwares”).
Regarding Claim 5, the combined teaching of Ben Ezra and Meriot teaches
wherein the at least one sequence of machine language instructions is at least a partial translation of code realizing the corresponding cyber technique into machine language instructions (Meriot, [0092], “This opcode sequence may contain other operation codes besides PUSH and MOV. As a non-limiting example, FIG. 7 is an illustration of a routine used by the malware MIRAI to encrypt character chains. On FIG. 7, a routine 500 defines a value 510 labelled “table_key”. The table_key 510 is placed in variables k1, k2, k3 and k4 using SHIFT operation codes. A SHIFT 24 operation code 520 actually shifts the table_key 510 by 24 bits into variable k4. While SHIFT operations by 8 or 16 bits are not uncommon, the SHIFT 24 operation code 520 is an infrequently (or rarely) used operation code and provides a clue to the reverse engineering process of the location of the ciphering key. Otherwise stated, the SHIFT 24 operation code 520 is part of a signature of the malware MIRAI. Previous experience acquired from reverse engineering applied to other malwares may be put to use to identify specific operation codes as potential markers for corresponding malwares”).
Regarding Claims 6 and 19, the combined teaching of Ben Ezra and Meriot teaches
wherein the at least one sequence of machine language instructions includes one or more suspicious machine language instructions, being machine language instructions that are a translation of suspicious code (Meriot, [0098], “the SHIFT 24 operation code 520 is part of a signature of the malware MIRAI. Previous experience acquired from reverse engineering applied to other malwares may be put to use to identify specific operation codes as potential markers for corresponding malwares”).
Regarding Claims 7 and 20, the combined teaching of Ben Ezra and Meriot teaches
wherein: (a) at least one cyber technique is associated with a corresponding event type of a plurality of event types that can occur on the one or more entities of the organizational network, wherein occurrence of an actual event of the respective event type indicates implementation of the respective cyber technique, (b) the information further includes actual events that occurred on the one or more entities of the organizational network, wherein each of the actual events is associated with a respective actual event type, and (c) the identification of the implemented cyber techniques is further based on matching the actual event types with the event types associated with the cyber techniques, giving rise to implemented cyber techniques (Ben Ezra, [0009], “a DDoS burst attack is a sequence of high traffic volumes communicated in bursts. A sequence of actions would include intermittent bursts of attack traffic and then pauses. As another example, a sequence of actions can begin with information gathering, continue with lateral movement, and end in data exfiltration”, [0053], “at S320, every pair of event sequences in the list of sequences are compared to each other to identify patterns having similar behavior. In an embodiment, S320 includes listing, for each sequence, its fixed and step features (Ffixed and Fstep); comparing each fixed feature (Ffixed) of one sequence to Fstep of another sequence; identifying patterns of similar steps (based on the Ffixed and Fstep)”).
Regarding Claims 8 and 21, the combined teaching of Ben Ezra and Meriot teaches
wherein the information is obtained periodically or continuously and wherein the identify and the alert are performed periodically or continuously while maintaining previously identified implemented cyber techniques (Ben Ezra, [0015], “periodically receiving new security events”, [0062], “At S550, it is checked if the risk score is above a predefined threshold. If so, execution continues with S560, at which an alert is generated” periodically or continuously).
Regarding Claims 9 and 22, the combined teaching of Ben Ezra and Meriot teaches
predict, based on: (a) the attack-vector scenario, (b) the implemented cyber techniques, and (c) the previously identified implemented cyber techniques, a next step cyber tactic of the cyber tactics; and perform a prevention action to prevent the next step cyber tactic (Ben Ezra, [0030], “an attack prediction system 150 is also communicatively connected to the network 120 and configured to perform the various disclosed embodiments for predictive cyber-attack detection. Specifically, the attack prediction system 150 is configured to analyze events to generate sequences of events”, [0062]).
Regarding Claims 10 and 23, the combined teaching of Ben Ezra and Meriot teaches
wherein the prevention action is one or more of: (a) report the next step cyber tactic to the user of the cyber security system, (b) simulate the next step cyber tactic, or (c) implement one or more honeypots within one or more entities of the organizational network wherein events associated with the next step cyber tactic can occur (Ben Ezra, [0062], “At S550, it is checked if the risk score is above a predefined threshold. If so, execution continues with S560, at which an alert is generated; otherwise, execution terminates. Alternatively, or collectively, S560 may include activating one more mitigation actions”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHENG-FENG HUANG whose telephone number is (571)272-6186. The examiner can normally be reached Monday-Friday: 9 am - 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A Shiferaw can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHENG-FENG HUANG/Primary Examiner, Art Unit 2497