DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the instant Application 18/798,386, filed on 8/8/2024. Claims 1-17 are pending. This Office Action is Non-Final.
Information Disclosure Statement
The information disclosure statements (IDSs), submitted on 8/8/2024 and 5/19/2025, are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 9 and 10 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6 and 7 of U.S. Patent No. 12,095,806. Although the claims at issue are not identical, they are not patentably distinct from each other because:
Instant Application 18/798,386:
1. A method for validating cybersecurity issues utilizing runtime data, comprising: inspecting a workload deployed in a computing environment for a cybersecurity object, using static analysis; deploying a sensor on the workload, the sensor configured to collect runtime data from the workload; detecting in the collected runtime data an indicator of the cybersecurity object; and initiating in the computing environment a first mitigation action with a first priority in response to detecting the indicator of the cybersecurity object.
9. A non-transitory computer-readable medium storing a set of instructions for validating cybersecurity issues utilizing runtime data, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: inspect a workload deployed in a computing environment for a cybersecurity object, using static analysis; deploy a sensor on the workload, the sensor configured to collect runtime data from the workload; detect in the collected runtime data an indicator of the cybersecurity object; and initiate in the computing environment a first mitigation action with a first priority in response to detecting the indicator of the cybersecurity object.
10. A system for validating cybersecurity issues utilizing runtime data comprising: a processing circuitry; a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: inspect a workload deployed in a computing environment for a cybersecurity object, using static analysis; deploy a sensor on the workload, the sensor configured to collect runtime data from the workload; detect in the collected runtime data an indicator of the cybersecurity object; and initiate in the computing environment a first mitigation action with a first priority in response to detecting the indicator of the cybersecurity object.
U.S. Patent No. 12,095,806:
1. A method for validating cybersecurity issues utilizing runtime data, comprising: inspecting a workload deployed in a computing environment for a cybersecurity issue using at least a static analysis technique; deploying a sensor on the workload, the sensor configured to collect runtime data from the workload; determining reachability properties of the workload; generating a network path between an external network and the workload; initiating active inspection of the network path to determine if the workload is a reachable workload; initiating a first mitigation action with a first priority in the computing environment in response to validating the cybersecurity issue from the collected runtime data; and initiating a second mitigation action with a second priority, which is lower than the first priority, in response to failing to validate the cybersecurity issue from the collected runtime data and determining that the workload is not a reachable workload.
6. A non-transitory computer-readable medium storing a set of instructions for validating cybersecurity issues utilizing runtime data, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: inspect a workload deployed in a computing environment for a cybersecurity issue using at least a static analysis technique; deploy a sensor on the workload, the sensor configured to collect runtime data from the workload; determine reachability properties of the workload; generate a network path between an external network and the workload; initiate active inspection of the network path to determine if the workload is a reachable workload; initiate a first mitigation action with a first priority in the computing environment in response to validating the cybersecurity issue from the collected runtime data; and initiate a second mitigation action with a second priority, which is lower than the first priority, in response to failing to validate the cybersecurity issue from the collected runtime data and determining that the workload is not a reachable workload.
7. A system for validating cybersecurity issues utilizing runtime data comprising: a processing circuitry; a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: inspect a workload deployed in a computing environment for a cybersecurity issue using at least a static analysis technique; deploy a sensor on the workload, the sensor configured to collect runtime data from the workload; determine reachability properties of the workload; generate a network path between an external network and the workload; initiate active inspection of the network path to determine if the workload is a reachable workload; initiate a first mitigation action with a first priority in the computing environment in response to validating the cybersecurity issue from the collected runtime data; and initiate a second mitigation action with a second priority, which is lower than the first priority, in response to failing to validate the cybersecurity issue from the collected runtime data and determining that the workload is not a reachable workload.
All the limitations of claims 1, 9 and 10 are anticipated by claims 1, 6 and 7 of U.S. Patent No. 12,095,806, as shown by the above table.
Regarding claims 2-8 and 11-17, these claims are also rejected on the ground of nonstatutory double patenting for similar reasons. They depend from claims 1, 9 and 10, respectively, and therefore inherit the rejections of the independent claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 7-13, 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (US 12,483,576) in view of Griffin et al. (US 2018/0276377).
As per claim 1, Guo teaches a method for validating cybersecurity issues utilizing runtime data, comprising: inspecting a workload deployed in a computing environment for a cybersecurity object, using static analysis; deploying a sensor on the workload, the sensor configured to collect runtime data from the workload; detecting in the collected runtime data an indicator of the cybersecurity object (Guo, Col. 75 Lines 9-19 recites “As shown, an attack path determination module 510 within data platform 12 may be configured to access static workload data 506 and/or runtime workload data 508 and identify one or more attack paths within compute environment 502 based on an analysis of static workload data 506 and/or runtime workload data 508. Attack path determination module 510 may interface with user interface resources 22, which may be used to generate one or more user interface views that may be presented, for example, by way of a display associated with computing device 24. Example user interface views are described herein.”).
Guo, however, fails to teach initiating in the computing environment a first mitigation action with a first priority in response to detecting the indicator of the cybersecurity object.
However, in an analogous art Griffin teaches initiating in the computing environment a first mitigation action with a first priority in response to detecting the indicator of the cybersecurity object (Griffin, Paragraph 0041 recites “For example, block 303 shows that the selected security mitigation workflow includes a first action to start quickly and a second action to start after business hours. The workflow may be selected due to the fact that the user is not in the middle of high priority task that may not be interrupted and that more invasive security mitigation actions may prevent the user from carrying out regular business due to the lack of a substitute device. The first security mitigation action is associated with a medium likelihood of success, the second security mitigation action is associated with a high likelihood of success. However, the second security mitigation action is also associated with greater device limitations, and it is selected as a second action to be performed if the first action fails and to be executed at a later time associated with less user inconvenience.”).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Griffin's security mitigation action selection based on device usage with Guo's Compute Resource Risk Mitigation By A Data Platform, because doing so would advantageously allow mitigation actions to be processed according to the priority needs of the system.
As per claim 2, Guo in combination with Griffin teaches the method of claim 1. Griffin further teaches initiating a second mitigation action with a second priority, which is lower than the first priority, in response to failing to validate the cybersecurity object from the collected runtime data (Griffin, Paragraph 0041 recites “For example, block 303 shows that the selected security mitigation workflow includes a first action to start quickly and a second action to start after business hours. The workflow may be selected due to the fact that the user is not in the middle of high priority task that may not be interrupted and that more invasive security mitigation actions may prevent the user from carrying out regular business due to the lack of a substitute device. The first security mitigation action is associated with a medium likelihood of success, the second security mitigation action is associated with a high likelihood of success. However, the second security mitigation action is also associated with greater device limitations, and it is selected as a second action to be performed if the first action fails and to be executed at a later time associated with less user inconvenience.”).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Griffin's security mitigation action selection based on device usage with Guo's Compute Resource Risk Mitigation By A Data Platform, because doing so would advantageously allow mitigation actions to be processed according to the priority needs of the system.
As per claim 3, Guo in combination with Griffin teaches the method of claim 1. Guo further teaches generating an inspectable disk based on a disk of the workload; and inspecting the inspectable disk for the cybersecurity object, wherein the cybersecurity object indicates a cybersecurity issue (Guo, Col. 81 Lines 17-33 recites “Method 800 may further include, at operation 806, determining, based on one or more characteristics of the set of one or more attack paths, a risk score specific to the particular risk artifact. The risk score may indicate a level of risk that the particular risk artifact could be exploited to access the one or more datasets and may be represented by any suitable metric, such as a discrete value (e.g., a percentage, a level, a range, a probability value, etc.). To illustrate, a risk score having a higher value within a given range, e.g. between 1 to 10 (e.g., greater than about 5, greater than about 7, and/or greater than about 9), may indicate a higher potential risk associated with the exploitation of the one or more datasets. Alternatively, a risk score having a lower value (e.g., less than about 4, less than about 2, and/or less than about 1) may indicate a lower potential risk associated with the exploitation of the one or more datasets.”).
As per claim 4, Guo in combination with Griffin teaches the method of claim 3. Guo further teaches initiating the first mitigation action based on the indicated cybersecurity issue (Guo, Col. 82 Lines 38-50 recites “Method 800 may further include, at operation 808, performing, based on the risk score specific to the particular risk artifact, a risk mitigation operation associated with the particular risk artifact. The risk mitigation operation may include any operation that mitigates and/or facilitates mitigation of risk that the particular risk artifact may be used by an attacker to access the one or more datasets. For example, the risk mitigation operation may include causing a display device (e.g., associated with computing device 24) to display the risk score specific to the particular risk artifact. This may indicate one or more levels of risk associated with the particular risk score to a user (e.g., for remediating the particular risk artifact)”).
As per claim 7, Guo in combination with Griffin teaches the method of claim 1. Guo further teaches configuring the sensor to collect: an artifact, an event, a datalink layer communication, a permission, a list of applications loaded in memory, a list of libraries loaded in memory, and a combination thereof (Guo, Col. 81 Lines 17-33 recites “Method 800 may further include, at operation 806, determining, based on one or more characteristics of the set of one or more attack paths, a risk score specific to the particular risk artifact. The risk score may indicate a level of risk that the particular risk artifact could be exploited to access the one or more datasets and may be represented by any suitable metric, such as a discrete value (e.g., a percentage, a level, a range, a probability value, etc.). To illustrate, a risk score having a higher value within a given range, e.g. between 1 to 10 (e.g., greater than about 5, greater than about 7, and/or greater than about 9), may indicate a higher potential risk associated with the exploitation of the one or more datasets. Alternatively, a risk score having a lower value (e.g., less than about 4, less than about 2, and/or less than about 1) may indicate a lower potential risk associated with the exploitation of the one or more datasets.”).
As per claim 8, Guo in combination with Griffin teaches the method of claim 1. Guo further teaches initiating the first mitigation action including any one of: generating an alert, revoking a permission, revoking access to a workload, revoking access from a workload, sandboxing a workload, generating an alert, installing a software patch, uninstalling a software application, updating a priority of an alert, and any combination thereof (Guo, Col. 25 Lines 49-57 recites “Alert generator 158 is a microservice that may be responsible for generating alerts. Alert generator 158 may examine observations (e.g., produced by GBM 154) in aggregate, deduplicate them, and score them. Alerts may be generated for observations with a score exceeding a threshold. Alert generator 158 may also compute (or retrieve, as applicable) data that a customer (e.g., user A or user B) might need when reviewing the alert. Examples of events that can be detected by data platform 12 (and alerted on by alert generator 158) include,”).
Regarding claims 9 and 10, claims 9 and 10 are directed to a non-transitory computer-readable medium and a system associated with the method of claim 1. Claims 9 and 10 are similar in scope to claim 1 and are therefore rejected under a similar rationale.
Regarding claim 11, claim 11 is directed to a system associated with the method of claim 2. Claim 11 is similar in scope to claim 2 and is therefore rejected under a similar rationale.
Regarding claim 12, claim 12 is directed to a system associated with the method of claim 3. Claim 12 is similar in scope to claim 3 and is therefore rejected under a similar rationale.
Regarding claim 13, claim 13 is directed to a system associated with the method of claim 4. Claim 13 is similar in scope to claim 4 and is therefore rejected under a similar rationale.
Regarding claim 16, claim 16 is directed to a system associated with the method of claim 7. Claim 16 is similar in scope to claim 7 and is therefore rejected under a similar rationale.
Regarding claim 17, claim 17 is directed to a system associated with the method of claim 8. Claim 17 is similar in scope to claim 8 and is therefore rejected under a similar rationale.
Claims 5, 6, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (US 12,483,576) in view of Griffin et al. (US 2018/0276377), and in further view of Meyer et al. (US 2018/0343316).
As per claim 5, Guo in combination with Griffin teaches the method of claim 1, but fails to teach determining reachability properties of the workload; generating a network path between an external network and the workload; and initiating active inspection of the network path to determine if the workload is a reachable workload.
However, in an analogous art Meyer teaches determining reachability properties of the workload; generating a network path between an external network and the workload; and initiating active inspection of the network path to determine if the workload is a reachable workload (Meyer, Paragraph 0031 recites “The improved implementation 300 also includes a service proxy gateway 302. The service proxy gateway 302 provides each of the workloads 130 access to the target services 112 via the private network 110 irrespective of the IP protocol reachability of the network to which the workloads 130 are connected. Returning to the implementation 200 shown in FIG. 2, the workload.sub.1-2 130B is shown configured with public network access to the target service(s) 112, and the workload.sub.2-1 130C is shown as having no external access.”).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Meyer's Cloud Workload Proxy As Link-Local Service with Guo's Compute Resource Risk Mitigation By A Data Platform, because doing so would advantageously provide knowledge of the extent to which a device is connected within a network in the event of a vulnerability.
As per claim 6, Guo in combination with Griffin and Meyer teaches the method of claim 5. Griffin further teaches initiating the first mitigation action with a third priority, higher than the first priority, in response to determining that the workload is a reachable workload (Griffin, Paragraph 0041 recites “For example, block 303 shows that the selected security mitigation workflow includes a first action to start quickly and a second action to start after business hours. The workflow may be selected due to the fact that the user is not in the middle of high priority task that may not be interrupted and that more invasive security mitigation actions may prevent the user from carrying out regular business due to the lack of a substitute device. The first security mitigation action is associated with a medium likelihood of success, the second security mitigation action is associated with a high likelihood of success. However, the second security mitigation action is also associated with greater device limitations, and it is selected as a second action to be performed if the first action fails and to be executed at a later time associated with less user inconvenience.”).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to combine Griffin's security mitigation action selection based on device usage with Guo's Compute Resource Risk Mitigation By A Data Platform, because doing so would advantageously allow mitigation actions to be processed according to the priority needs of the system.
Regarding claim 14, claim 14 is directed to a system associated with the method of claim 5. Claim 14 is similar in scope to claim 5 and is therefore rejected under a similar rationale.
Regarding claim 15, claim 15 is directed to a system associated with the method of claim 6. Claim 15 is similar in scope to claim 6 and is therefore rejected under a similar rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RODERICK TOLENTINO whose telephone number is (571) 272-2661. The examiner can normally be reached Monday through Friday, 8am-4pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham can be reached on 571-270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
RODERICK TOLENTINO
Examiner
Art Unit 2439
/RODERICK TOLENTINO/Primary Examiner, Art Unit 2439