DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending.
This Action is Non-Final.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 as follows:
The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).
The disclosure of at least the prior-filed application, Application No. 17/389,863, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. The prior-filed application fails to provide adequate support for at least the claimed “wherein the real attacks comprise past or current red team assessments”. The prior-filed application generically discloses “red teams” but lacks the requisite detail to support the claim as presented. Furthermore, the prior-filed application fails to describe any “hypothetical” controls as required by the claims. As such, claims 1-20 are not entitled to the benefit of the prior application, and the earliest effective filing date for the claims as presented is the filing date of the present application: 08 December 2021.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 08 December 2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7-14, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gula et al. (US 20140007241), in view of Kennedy et al. (US 9325728), in view of DiGiambattista et al. (US 20170237778) and further in view of Powell et al. (US 20140245449).
As per claims 1 and 11, Gula et al. discloses a method and system for cybersecurity defensive strategy analysis and recommendations, comprising: a recommendation engine comprising a first plurality of programming instructions stored in a memory of, and operating on a processor of, a computing device, wherein the first plurality of programming instructions, when operating on the processor, cause the computing device (see paragraph [0078]) to:
generate a simulated model of the network by requesting packet capture data from networked devices, wherein a logical layout of the network is determined from the captured packets (see paragraph [0030] where the network topology is the logical layout of the network);
perform attack tests on the simulated model of the network by: determining software exploits from software residing in the network; analyzing the simulated model and exploitable software to determine vectors of attack by using a plurality of data; and carrying out one or more vectors of attack in a simulation (see paragraphs [0047]-[0049] and [0060] where the system identifies exploitable client software and creates and uses an exploit attack chain, i.e. vector of attack, to simulate the attack/exploit) and generally determining a potential cybersecurity improvement recommendation (see paragraph [0077]).
While Gula et al. discloses various types of data that generally include synthetic, real, and attack data, there lacks an explicit disclosure that the vectors of attack use real attack data comprising past or current red team assessments.
However, Kennedy et al. teaches the use of red team assessment data as part of simulating attacks on a network (see column 6 lines 9-31 and column 7 lines 6-13).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the data of Kennedy et al. in the Gula et al. system.
The motivation to do so, as would have been recognized by one of ordinary skill in the art, would have been to ensure that the most relevant data is being used in the attack simulation.
The modified Gula et al. and Kennedy et al. system fails to explicitly disclose to compare the effectiveness of specific network controls based on the response to the attack tests; determine a potential cybersecurity improvement recommendation for the network based on ineffective network controls by: determine a successful cybersecurity improvement recommendation from the next simulation iteration; automatically implement the successful cybersecurity improvement recommendation on the network by sending configuration data to the affected devices; and produce data to describe a set of metrics from the simulations.
However, DiGiambattista et al. teaches a system for simulating attacks on a network to compare the effectiveness of specific network controls based on the response to the attack tests; determine a potential cybersecurity improvement recommendation for the network based on ineffective network controls by: determine a successful cybersecurity improvement recommendation from the next simulation iteration; automatically implement the successful cybersecurity improvement recommendation on the network by sending configuration data to the affected devices; and produce data to describe a set of metrics from the simulations (see paragraphs [0117]-[0119] where the system automatically implements recommendations for improvement).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the automatic implementation of cybersecurity improvements in the modified Gula et al. and Kennedy et al. system.
The motivation to do so, as would have been recognized by one of ordinary skill in the art, would have been to ensure the improvements are implemented as quickly as possible, thereby limiting any potential damage.
The modified Gula et al., Kennedy et al., and DiGiambattista et al. system generally discloses the use of hypotheticals in evaluating a network’s security, but fails to explicitly disclose analyzing new hypothetical controls by taking a known sensor or analytic and testing a known bad data set; analyzing new hypothetical controls based on cost and benefit factors; and testing and analyzing new hypothetical controls in a next simulation iteration; determine a successful cybersecurity improvement recommendation from the next simulation iteration; and produce data to describe a set of metrics from the simulations.
However, Powell et al. teaches analyzing new hypothetical controls by taking a known sensor or analytic and testing a known bad data set; analyzing new hypothetical controls based on cost and benefit factors; and testing and analyzing new hypothetical controls in a next simulation iteration; determine a successful cybersecurity improvement recommendation from the next simulation iteration; and produce data to describe a set of metrics from the simulations (see paragraphs [0054]-[0059]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the hypothetical analysis of Powell et al. in the modified Gula et al., Kennedy et al., and DiGiambattista et al. system.
Motivation to do so would have been to determine the severity of a potential attack (see Powell et al. paragraph [0059]).
As per claims 2 and 12, the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system discloses endpoint agents and network packet capturing devices (see Gula et al. paragraphs [0030]-[0033]).
As per claims 3 and 13, the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system discloses the recommendation engine explicitly requests packet capture data from the endpoint agents and the network packet capturing devices, and wherein the captured data packets comprise contextual direction information (see Gula et al. paragraphs [0030]-[0033] where all packets have directional information, i.e. source and destination addresses).
As per claims 4 and 14, the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system discloses the captured packet data is transferred or stored on a local or cloud-based storage medium and void of processing information (see Gula et al. paragraphs [0030]-[0033]).
As per claims 7 and 17, the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system discloses the recommendations are delivered as a service to one or more entities (see DiGiambattista et al. paragraphs [0117]-[0122]).
As per claims 8 and 18, the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system discloses the recommendations are determined internally within an organization (see DiGiambattista et al. paragraph [0120]).
As per claims 9 and 19, the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system discloses the set of metrics comprises the following properties: observability, detectability, control effectiveness, compliance effectiveness, and response/mitigation ability (see DiGiambattista et al. paragraphs [0117]-[0122] and Powell et al. paragraphs [0054]-[0059]).
As per claims 10 and 20, the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system discloses the set of metrics are also used to determine the vectors of attack (see Gula et al. paragraphs [0047]-[0049] and [0060]; DiGiambattista et al. paragraphs [0117]-[0122]; and Powell et al. paragraphs [0054]-[0059]).
Claims 5, 6, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system as applied to claims 1 and 11 above, and further in view of Shakarian et al. (US 20220078203).
As per claims 5, 6, 15, and 16, the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system generally determines exploitable software, but fails to explicitly disclose generating software exploitability scores that are determined at least by one or more of the following properties: address space layout randomization, data execution prevention, stack hardening, compilation options, or any combination thereof, wherein the exploitability scores are compared to deep web, dark web, and internet data obtained via public data collection and scans.
However, Shakarian et al. teaches generating software exploitability scores that are determined at least by one or more of the following properties: address space layout randomization, data execution prevention, stack hardening, compilation options, or any combination thereof, wherein the exploitability scores are compared to deep web, dark web, and internet data obtained via public data collection and scans (see paragraphs [0028]-[0032] and [0082]-[0083]).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to include the exploitability scores of Shakarian et al. in the modified Gula et al., Kennedy et al., DiGiambattista et al., and Powell et al. system.
The motivation to do so, as would have been recognized by one of ordinary skill in the art, would have been to ensure that as many different sources as possible are used for determining the exploitability of software, thereby making the system more robust.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: the remaining references put forth on the PTO-892 form are directed to simulating attacks on a network.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J PYZOCHA whose telephone number is (571)272-3875. The examiner can normally be reached Monday-Thursday 7:30am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hadi Armouche can be reached at (571) 270-3618. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Michael Pyzocha/ Primary Examiner, Art Unit 2409