DETAILED ACTION
This communication is responsive to the applicant’s arguments for Application No. 18/778,533, filed on 01/20/2026. Claims 1, 9, 10, 18, and 19 are currently amended. Claims 1-20 are pending and under examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Rejections under 35 U.S.C. 103
Claims 1, 10, and 19:
Applicant’s arguments with respect to claims 1, 10, and 19 have been considered but are moot because the new ground of rejection does not rely on the reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The deficiencies in Boia alleged by the applicant have been cured by the introduction of a new reference, Kaplan et al. (US 20200145450 A1).
Applicant argues that Boia fails to disclose "determining, by a SAVER assignment optimization model and from a set of SAVERs, a first SAVER to execute the first software application evaluation task based at least on the first set of evaluation task requirements and SAVER attribute data for the set of SAVERS including a quantity of previous findings of the first SAVER, a quality of the previous findings, and a severity level associated with the previous findings”.
Kaplan discloses a management/control system that selects a particular researcher from a pool of researchers for a particular vulnerability testing project using project-specific requirements and stored researcher attributes. Kaplan further discloses a distributed plurality of researchers, assessing the researchers’ reputation and skills, accepting a subset having a positive reputation and sufficient skills, and then assigning a particular computer vulnerability research project to a particular researcher from among the subset. Accordingly, Boia in combination with Kaplan teaches the limitations of claims 1, 10, and 19, which are therefore rejected.
Applicant also argues that no "model" is disclosed in Boia.
Kaplan discloses control logic 224 that may be implemented using one or more computer programs, other software elements, or other digital logic. That logic performs the assignment function by assessing reputation and skills, accepting a subset of the researchers who have a positive reputation and sufficient skills, and assigning a particular computer vulnerability research project to a particular researcher from among that subset. The reference need not use the word “model” where it discloses software logic that performs the claimed determination; here, the disclosed control logic corresponds to the claimed SAVER assignment optimization model.
Claims 3, 12, and 20:
Applicant argues that there is no mention made of generating a SAVER profile to include an "output" and the only mention of a "profile" in Boia does not pertain to a testing environment at all.
Examiner disagrees. Boia records an affinity data structure for one or more different types of tests, uses that affinity to decide which test environment receives which future task, and allows a test environment to have multiple affinities and to be reconfigured based on health monitoring, overloaded/underloaded state, and availability. Kaplan provides further support for this profile limitation: it discloses creating database records that identify researchers, including representations of particular researcher skill sets, and creating/storing accomplishment levels tied to identifying a particular number of vulnerabilities, along with a submission quality score and a CVSS score. These are all SAVER-profile contents, including attribute, skill, performance, and historical vulnerability-result data. Hence, applicant’s arguments are not persuasive, and the amendments have necessitated the new grounds of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7, 10-14, 16, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Boia et al. (US 20180007077 A1), hereinafter referred to as Boia, in view of Kaplan et al. (US 20200145450 A1), hereinafter referred to as Kaplan.
As per claim 1, Boia discloses a method for providing software application vulnerability evaluation resource (SAVER) optimization, the method comprising:
receiving, by communications hardware, a software application; (The testing service 230 can receive inputs indicating targets to be tested from one or more target discovery services 240, Boia, para [0032])
determining, by SAVER management circuitry, a first software application evaluation task for execution with respect to the software application, wherein the first software application evaluation task is associated with a first set of evaluation task requirements; (The task triage component 330 can insert the triaged testing tasks 332 in priority/affinity queues 334. For example, in one implementation, the priority/affinity queues 334 may include a high priority queue, a low priority queue, and a very high priority queue. This prioritizing can include applying priority rules to the inputs 320, Boia, para [0042] - [0043]).
However, Boia does not explicitly disclose the limitations:
determining, by a SAVER assignment optimization model and from a set of SAVERs, a first SAVER to execute the first software application evaluation task based at least on the first set of evaluation task requirements and SAVER attribute data for the set of SAVERS including a quantity of previous findings of the first SAVER, a quality of the previous findings, and a severity level associated with the previous findings; and
providing, by the communications hardware, an indication of the first SAVER to a computing device
Kaplan discloses:
determining, by a SAVER assignment optimization model and from a set of SAVERs, a first SAVER to execute the first software application evaluation task based at least on the first set of evaluation task requirements and SAVER attribute data for the set of SAVERS including a quantity of previous findings of the first SAVER, a quality of the previous findings, and a severity level associated with the previous findings; and (Distributed plurality of researchers to participate in one or more computer vulnerability research projects directed to identifying computer vulnerabilities of one or more networks and/or computers that are owned or operated by a third party; assessing reputation and skills of one or more of the researchers, and accepting a subset of the researchers who have a positive reputation and sufficient skills to perform the investigations of the computer vulnerabilities, Kaplan, para [0022]. Here, the distributed plurality of research computers is the set of SAVERS and the particular research computer is the first SAVER. The assessment and assignment engine that assesses researchers, stores data, and assigns a particular project is the assignment optimization model. The number of vulnerabilities is the quantity of previous findings, and the quality score/verification of submission quality is interpreted as the quality of previous findings. The CVSS score is the severity level associated with previous findings).
providing, by the communications hardware, an indication of the first SAVER to a computing device (Block 106 may comprise providing a summary of a record of a particular computer vulnerability research project among those that were defined at block 101, and an access location that is associated with the service provider, Kaplan, para [0034]. Here, an assignment may include providing a summary of the project record and an access location and includes providing a network/domain address and credentials to the researcher. This indicates that the system communicates an indication of the selected evaluator and enables the evaluator’s computing device to participate in the assigned task).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks with Kaplan’s distributed discovery of vulnerabilities in applications. It would have been obvious to such a person to combine Boia and Kaplan in order to effectively perform network penetration testing, attack testing, and identification of security vulnerabilities (see Kaplan, para [0002]).
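For illustration only, and not as part of the record or of any cited reference, the following minimal Python sketch shows one way an assignment function of the kind mapped above could weigh a quantity of previous findings, a submission-quality score, and a CVSS-based severity level when selecting an evaluator from a pool; all names, weights, and values below are hypothetical.

```python
# Illustrative sketch only; not taken from Boia or Kaplan. All names and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class SaverProfile:
    name: str
    quantity_of_findings: int   # number of previously reported vulnerabilities
    quality_score: float        # e.g., average submission-quality score, 0.0-1.0
    avg_cvss: float             # average CVSS severity of previous findings, 0.0-10.0
    skills: set                 # skill tags such as "web" or "network"

def select_first_saver(savers, required_skills):
    """Pick the evaluator whose attribute data best fits the task requirements."""
    eligible = [s for s in savers if required_skills <= s.skills]
    # Weighted score over the three attribute categories named in the claim limitation.
    def score(s):
        return 0.4 * s.quantity_of_findings + 0.4 * (s.quality_score * 10) + 0.2 * s.avg_cvss
    return max(eligible, key=score)

pool = [
    SaverProfile("saver-1", 42, 0.9, 7.5, {"web", "network"}),
    SaverProfile("saver-2", 10, 0.7, 5.0, {"web"}),
]
print(select_first_saver(pool, {"web"}).name)  # -> "saver-1"
```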
As per claim 2, Boia and Kaplan disclose the method of claim 1, further comprising:
Furthermore, Boia discloses:
determining, by the SAVER assignment optimization model, a first set of evaluation task constraints associated with the first software application evaluation task (The work scheduler 340 may maintain affinities 342 for one or more test environments 350, which can be data indicating that particular test environments 350 are configured to advantageously conduct particular types of tests, Boia, para [0043]).
As per claim 3, Boia and Kaplan disclose the method of claim 2, further comprising:
Furthermore, Boia discloses:
generating, by a SAVER analysis model, a SAVER profile associated with the first SAVER, wherein the SAVER profile comprises one or more of attribute data, strength data, weakness data, skillset data, ability data, productivity metric data, performance evaluation data, colleague feedback data, training record data, skill assessment result data, historical vulnerability detection results generated based on previously assigned software application evaluation tasks, identification data, workload data, availability data, or evaluation toolset data associated with the first SAVER (Certain testing environments/clients may have an affinity for certain types of tests recorded in the system, which can affect which environments/clients are assigned to conduct which tests, Boia, para [0013]. This indicates how a device has performed previously, which corresponds to historical vulnerability detection results).
As per claim 4, Boia and Kaplan disclose the method of claim 3, further comprising:
Furthermore, Boia discloses:
determining, by the SAVER assignment optimization model, the first SAVER to execute the first software application evaluation task based on the SAVER profile associated with the first SAVER and the first set of evaluation task constraints associated with the first software application evaluation task (Each task 332 may also include data specifying the type of task 332, such as the types of tests to be run (which can be defined in test definitions 338, which can be accessed by the work scheduler 340 and/or the test environments 350), the nature of the endpoint being tested (such as whether the endpoint is an online endpoint that is publicly available, an online endpoint that is not publicly available such as an endpoint on a private network, an application that is configured to be run within a specified framework (such as on a specified operating system), etc.). Such data indicating the type of task 332 may be used to allow a work scheduler 340 to assign the task to an appropriate test environment 350 with an affinity for conducting a type of test requested by the task, Boia, para [0043]).
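For illustration only, the following minimal sketch shows affinity-based assignment of the kind described in the mapping above, i.e., routing a task to a test environment whose recorded affinities match the task type; all environment names, test types, and load figures are hypothetical and not taken from Boia.

```python
# Illustrative sketch only; not taken from Boia. All names and values are hypothetical.
affinities = {
    "env-1": {"web-fuzzing", "tls-scan"},   # test environments and the test types they favor
    "env-2": {"port-scan"},
}

def assign_environment(task_type, load):
    """Assign the task to the least-loaded environment holding a matching affinity."""
    candidates = [env for env, kinds in affinities.items() if task_type in kinds]
    if not candidates:
        return None  # no environment has an affinity for this test type
    return min(candidates, key=lambda env: load.get(env, 0))

print(assign_environment("tls-scan", {"env-1": 3, "env-2": 0}))  # -> "env-1"
```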
As per claim 5, Boia and Kaplan disclose the method of claim 1, further comprising:
Furthermore, Kaplan discloses:
receiving, by the communications hardware, first vulnerability detection results generated in response to execution of the first software application evaluation task; (Validating a report of the candidate security vulnerability of the particular network under test that is received from the particular researcher; determining and providing an award to the particular researcher in response to successfully validating the report of the candidate security vulnerability of the particular network under test that is received from the particular researcher, Kaplan, Abstract. Here, the system validates a report of the candidate security vulnerability that is received from the particular researcher and receives vulnerability related reports tied to the project)
storing, by the SAVER management circuitry, the first vulnerability detection results in a storage device; (The reports 205 may be received at vulnerability database 250 to provide baseline vulnerability data or to assist in defining the computer vulnerability projects that may be offered to researchers, Kaplan, para [0049]. Here, database 250 stores metadata or data useful to the system, receives reports from the automatic scanning system and stores records of previously reported vulnerabilities)
updating, by a SAVER analysis model, a SAVER profile associated with the first SAVER based on the first vulnerability detection results; and (Data representing accomplishment levels for particular researcher computers 202 based upon total points that are earned by or awarded to the researchers, Kaplan, para [0097]. Here, creating and storing data represents accomplishment levels for particular researchers based on total points earned, where achievements may be associated with identifying a particular number of vulnerabilities, segmenting researchers and updating researcher records with tags)
updating, by the SAVER management circuitry, the SAVER assignment optimization model based on the first vulnerability detection results (Control logic 224 may implement a feedback loop in relation to automatic scanning system 204 by which the control logic provides updates to configuration data or other input to the automatic scanning system, based upon validated vulnerability reports from researchers, to improve the ability of the automatic scanning system to detect other vulnerabilities in the future in relation to the networks under test 208, 228 or the computers under test 226, 230, Kaplan, para [0049])
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks with Kaplan’s distributed discovery of vulnerabilities in applications. It would have been obvious to such a person to combine Boia and Kaplan in order to effectively perform network penetration testing, attack testing, and identification of security vulnerabilities (see Kaplan, para [0002]).
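For illustration only, the following minimal sketch shows a feedback-style update of the kind mapped in claim 5 above, folding a newly validated finding into a stored evaluator profile as running averages; all field names and values are hypothetical and not taken from Kaplan.

```python
# Illustrative sketch only; not taken from Kaplan. All field names and values are hypothetical.
def update_profile(profile, finding_cvss, quality):
    """Fold a newly validated finding into a stored evaluator profile (running averages)."""
    n = profile["quantity_of_findings"]
    profile["avg_cvss"] = (profile["avg_cvss"] * n + finding_cvss) / (n + 1)
    profile["quality_score"] = (profile["quality_score"] * n + quality) / (n + 1)
    profile["quantity_of_findings"] = n + 1
    return profile

p = {"quantity_of_findings": 2, "avg_cvss": 6.0, "quality_score": 0.8}
print(update_profile(p, finding_cvss=9.0, quality=1.0))
# -> {'quantity_of_findings': 3, 'avg_cvss': 7.0, 'quality_score': 0.866...}
```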
As per claim 7, Boia and Kaplan disclose the method of claim 1, further comprising:
Furthermore, Boia discloses:
determining, by the SAVER assignment optimization model and based on the first set of evaluation task requirements, a second SAVER to execute the first software application evaluation task in conjunction with the first SAVER (The test environments 350 can operate in parallel so that different test environments 350 can be conducting different tests at the same time. Indeed, a single test environment 350 may conduct multiple tests for multiple different tasks at the same time, Boia, para [0045]).
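For illustration only, the following minimal sketch shows two evaluators executing the same evaluation task in conjunction (in parallel), analogous to the parallel test environments cited above; all names are hypothetical and not taken from Boia.

```python
# Illustrative sketch only; not taken from Boia. All names are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def run_evaluation(saver, task):
    # Stand-in for an evaluator working a software application evaluation task.
    return f"{saver} completed {task}"

# Two evaluators execute the same task in conjunction, in parallel.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(run_evaluation, ["saver-1", "saver-2"], ["task-A", "task-A"]))
print(results)  # -> ['saver-1 completed task-A', 'saver-2 completed task-A']
```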
As per claim 10, Boia and Kaplan disclose an apparatus for providing software application vulnerability evaluation resource (SAVER) optimization, wherein the apparatus comprises:
communications hardware configured to receive a software application; (The testing service 230 can receive inputs indicating targets to be tested from one or more target discovery services 240, Boia, para [0032])
SAVER management circuitry configured to determine a first software application evaluation task for execution with respect to the software application, wherein the first software application evaluation task is associated with a first set of evaluation task requirements; and (The task triage component 330 can insert the triaged testing tasks 332 in priority/affinity queues 334. For example, in one implementation, the priority/affinity queues 334 may include a high priority queue, a low priority queue, and a very high priority queue. This prioritizing can include applying priority rules to the inputs 320, Boia, para [0042] - [0043]).
However, Boia does not explicitly disclose the following limitations, which are disclosed by Kaplan:
a SAVER assignment optimization model configured to determine, from a set of SAVERs, a first SAVER to execute the first software application evaluation task based at least on the first set of evaluation task requirements and SAVER attribute data for the set of SAVERS including a quantity of previous findings of the first SAVER, a quality of the previous findings, and a severity level associated with the previous findings, (Distributed plurality of researchers to participate in one or more computer vulnerability research projects directed to identifying computer vulnerabilities of one or more networks and/or computers that are owned or operated by a third party; assessing reputation and skills of one or more of the researchers, and accepting a subset of the researchers who have a positive reputation and sufficient skills to perform the investigations of the computer vulnerabilities, Kaplan, para [0022]. Here, the distributed plurality of research computers is the set of SAVERS and the particular research computer is the first SAVER. The assessment and assignment engine that assesses researchers, stores data, and assigns a particular project is the assignment optimization model. The number of vulnerabilities is the quantity of previous findings, and the quality score/verification of submission quality is interpreted as the quality of previous findings. The CVSS score is the severity level associated with previous findings).
wherein the communications hardware is configured to provide an indication of the first SAVER to a computing device (Block 106 may comprise providing a summary of a record of a particular computer vulnerability research project among those that were defined at block 101, and an access location that is associated with the service provider, Kaplan, para [0034]. Here, an assignment may include providing a summary of the project record and an access location and includes providing a network/domain address and credentials to the researcher. This indicates that the system communicates an indication of the selected evaluator and enables the evaluator’s computing device to participate in the assigned task).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks with Kaplan’s distributed discovery of vulnerabilities in applications. It would have been obvious to such a person to combine Boia and Kaplan in order to effectively perform network penetration testing, attack testing, and identification of security vulnerabilities (see Kaplan, para [0002]).
As per claim 11, Boia and Kaplan disclose the apparatus of claim 10, wherein
Furthermore, Boia discloses:
the SAVER assignment optimization model is configured to determine a first set of evaluation task constraints associated with the first software application evaluation task (The work scheduler 340 may maintain affinities 342 for one or more test environments 350, which can be data indicating that particular test environments 350 are configured to advantageously conduct particular types of tests, Boia, para [0043]).
As per claim 12, Boia and Kaplan disclose the apparatus of claim 11, wherein the apparatus further comprises:
Furthermore, Boia discloses:
a SAVER analysis model configured to generate a SAVER profile associated with the first SAVER, wherein the SAVER profile comprises one or more of attribute data, strength data, weakness data, skillset data, ability data, productivity metric data, performance evaluation data, colleague feedback data, training record data, skill assessment result data, historical vulnerability detection results generated based on previously assigned software application evaluation tasks, identification data, workload data, availability data, or evaluation toolset data associated with the first SAVER (Certain testing environments/clients may have an affinity for certain types of tests recorded in the system, which can affect which environments/clients are assigned to conduct which tests, Boia, para [0013]. This indicates how a device has performed previously, which corresponds to historical vulnerability detection results).
As per claim 13, Boia and Kaplan disclose the apparatus of claim 12, wherein
Furthermore, Boia discloses:
the SAVER assignment optimization model is configured to determine the first SAVER to execute the first software application evaluation task based on the SAVER profile associated with the first SAVER and the first set of evaluation task constraints associated with the first software application evaluation task (Each task 332 may also include data specifying the type of task 332, such as the types of tests to be run (which can be defined in test definitions 338, which can be accessed by the work scheduler 340 and/or the test environments 350), the nature of the endpoint being tested (such as whether the endpoint is an online endpoint that is publicly available, an online endpoint that is not publicly available such as an endpoint on a private network, an application that is configured to be run within a specified framework (such as on a specified operating system), etc.). Such data indicating the type of task 332 may be used to allow a work scheduler 340 to assign the task to an appropriate test environment 350 with an affinity for conducting a type of test requested by the task, Boia, para [0043]).
As per claim 14, Boia and Kaplan disclose the apparatus of claim 10, wherein:
Furthermore, Kaplan discloses:
the communications hardware is configured to receive first vulnerability detection results generated in response to execution of the first software application evaluation task, and the SAVER management circuitry is configured to store the first vulnerability detection results in a storage device; (The reports 205 may be received at vulnerability database 250 to provide baseline vulnerability data or to assist in defining the computer vulnerability projects that may be offered to researchers, Kaplan, para [0049]. Here, database 250 stores metadata or data useful to the system, receives reports from the automatic scanning system and stores records of previously reported vulnerabilities)
wherein the apparatus further comprises a SAVER analysis model configured to update a SAVER profile associated with the first SAVER based on the first vulnerability detection results; and (Data representing accomplishment levels for particular researcher computers 202 based upon total points that are earned by or awarded to the researchers, Kaplan, para [0097]. Here, creating and storing data represents accomplishment levels for particular researchers based on total points earned, where achievements may be associated with identifying a particular number of vulnerabilities, segmenting researchers and updating researcher records with tags)
the SAVER management circuitry is configured to update the SAVER assignment optimization model based on the first vulnerability detection results (Control logic 224 may implement a feedback loop in relation to automatic scanning system 204 by which the control logic provides updates to configuration data or other input to the automatic scanning system, based upon validated vulnerability reports from researchers, to improve the ability of the automatic scanning system to detect other vulnerabilities in the future in relation to the networks under test 208, 228 or the computers under test 226, 230, Kaplan, para [0049]).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks with Kaplan’s distributed discovery of vulnerabilities in applications. It would have been obvious to such a person to combine Boia and Kaplan in order to effectively perform network penetration testing, attack testing, and identification of security vulnerabilities (see Kaplan, para [0002]).
As per claim 16, Boia and Kaplan disclose the apparatus of claim 10, wherein
Furthermore, Boia discloses:
the SAVER assignment optimization model is configured to determine, based on the first set of evaluation task requirements, a second SAVER to execute the first software application evaluation task in conjunction with the first SAVER (The test environments 350 can operate in parallel so that different test environments 350 can be conducting different tests at the same time. Indeed, a single test environment 350 may conduct multiple tests for multiple different tasks at the same time, Boia, para [0045]).
As per claim 19, Boia discloses a computer program product for providing software application vulnerability evaluation resource (SAVER) optimization, the computer program product comprising at least one non-transitory computer-readable storage medium storing software instructions that, when executed, cause an apparatus to:
receive, by communications hardware, a software application; (The testing service 230 can receive inputs indicating targets to be tested from one or more target discovery services 240, Boia, para [0032]).
determine, by SAVER management circuitry, a first software application evaluation task for execution with respect to the software application, wherein the first software application evaluation task is associated with a first set of evaluation task requirements; (The task triage component 330 can insert the triaged testing tasks 332 in priority/affinity queues 334. For example, in one implementation, the priority/affinity queues 334 may include a high priority queue, a low priority queue, and a very high priority queue. This prioritizing can include applying priority rules to the inputs 320, Boia, para [0042] - [0043]).
However, Boia does not explicitly disclose the limitations:
determine, by a SAVER assignment optimization model and from a set of SAVERs, a first SAVER to execute the first software application evaluation task based at least on the first set of evaluation task requirements and SAVER attribute data for the set of SAVERS including a quantity of previous findings of the first SAVER, a quality of the previous findings, and a severity level associated with the previous findings; and
provide, by the communications hardware, an indication of the first SAVER to a computing device
Kaplan discloses:
determine, by a SAVER assignment optimization model and from a set of SAVERs, a first SAVER to execute the first software application evaluation task based at least on the first set of evaluation task requirements and SAVER attribute data for the set of SAVERS including a quantity of previous findings of the first SAVER, a quality of the previous findings, and a severity level associated with the previous findings; and (Distributed plurality of researchers to participate in one or more computer vulnerability research projects directed to identifying computer vulnerabilities of one or more networks and/or computers that are owned or operated by a third party; assessing reputation and skills of one or more of the researchers, and accepting a subset of the researchers who have a positive reputation and sufficient skills to perform the investigations of the computer vulnerabilities, Kaplan, para [0022]. Here, the distributed plurality of research computers is the set of SAVERS and the particular research computer is the first SAVER. The assessment and assignment engine that assesses researchers, stores data, and assigns a particular project is the assignment optimization model. The number of vulnerabilities is the quantity of previous findings, and the quality score/verification of submission quality is interpreted as the quality of previous findings. The CVSS score is the severity level associated with previous findings).
provide, by the communications hardware, an indication of the first SAVER to a computing device (Block 106 may comprise providing a summary of a record of a particular computer vulnerability research project among those that were defined at block 101, and an access location that is associated with the service provider, Kaplan, para [0034]. Here, an assignment may include providing a summary of the project record and an access location and includes providing a network/domain address and credentials to the researcher. This indicates that the system communicates an indication of the selected evaluator and enables the evaluator’s computing device to participate in the assigned task).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks with Kaplan’s distributed discovery of vulnerabilities in applications. It would have been obvious to such a person to combine Boia and Kaplan in order to effectively perform network penetration testing, attack testing, and identification of security vulnerabilities (see Kaplan, para [0002]).
As per claim 20, Boia and Kaplan disclose the computer program product of claim 19, wherein the software instructions cause the apparatus to:
Furthermore, Boia discloses:
determine, by the SAVER assignment optimization model, a first set of evaluation task constraints associated with the first software application evaluation task; (The work scheduler 340 may maintain affinities 342 for one or more test environments 350, which can be data indicating that particular test environments 350 are configured to advantageously conduct particular types of tests, Boia, para [0043])
generate, by a SAVER analysis model, a SAVER profile associated with the first SAVER, wherein the SAVER profile comprises one or more of attribute data, strength data, weakness data, skillset data, ability data, productivity metric data, performance evaluation data, colleague feedback data, training record data, skill assessment result data, historical vulnerability detection results generated based on previously assigned software application evaluation tasks, identification data, workload data, availability data, or evaluation toolset data associated with the first SAVER; and (Certain testing environments/clients may have an affinity for certain types of tests recorded in the system, which can affect which environments/clients are assigned to conduct which tests, Boia, para [0013]. This indicates how a device has performed previously, which corresponds to historical vulnerability detection results).
determine, by the SAVER assignment optimization model, the first SAVER to execute the first software application evaluation task based on the SAVER profile associated with the first SAVER and the first set of evaluation task constraints associated with the first software application evaluation task (Each task 332 may also include data specifying the type of task 332, such as the types of tests to be run (which can be defined in test definitions 338, which can be accessed by the work scheduler 340 and/or the test environments 350), the nature of the endpoint being tested (such as whether the endpoint is an online endpoint that is publicly available, an online endpoint that is not publicly available such as an endpoint on a private network, an application that is configured to be run within a specified framework (such as on a specified operating system), etc.). Such data indicating the type of task 332 may be used to allow a work scheduler 340 to assign the task to an appropriate test environment 350 with an affinity for conducting a type of test requested by the task, Boia, para [0043]).
Claims 6, 8, 9, 15, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Boia et al. (US 20180007077 A1), hereinafter referred to as Boia, in view of Kaplan et al. (US 20200145450 A1), hereinafter referred to as Kaplan, in further view of Ewaida et al. (US 11736507 B2), hereinafter referred to as Ewaida.
As per claim 6, Boia and Kaplan disclose the method of claim 5, further comprising:
However, Boia in view of Kaplan does not disclose:
determining, by the SAVER management circuitry and based on the first vulnerability detection results, a second software application evaluation task associated with the software application; and
determining, by the SAVER assignment optimization model, a second SAVER to execute the second software application evaluation task
Ewaida discloses:
determining, by the SAVER management circuitry and based on the first vulnerability detection results, a second software application evaluation task associated with the software application; and (For each of the ports identified as open by the port scanning task, a combination of the address and the open port are used to generate a vulnerability scanning task which gets pushed onto vulnerability scanning queue 250 for processing, Ewaida, col 8, lines 49-60. This indicates that after a first scanning result, the system generates/pushes a vulnerability scanning task. The outcome of the first task triggers a follow-on task).
determining, by the SAVER assignment optimization model, a second SAVER to execute the second software application evaluation task (As vulnerability scanning dispatcher 260 receives each request for a vulnerability scanning task, vulnerability scanning dispatcher 260 pops a vulnerability scanning task off of vulnerability scanning queue 250 and sends the vulnerability scanning task to the assigned one of the one or more vulnerability scanning services 160 for completion, Ewaida, col 10, lines 45-53. Here, the dispatcher assigns the newly generated vulnerability scanning task to one of the vulnerability scanning services, which corresponds to selecting a resource to perform that follow-on task).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks and Kaplan’s distributed discovery of vulnerabilities in applications with Ewaida’s techniques for analyzing network vulnerabilities. It would have been obvious to such a person to combine Boia and Kaplan with Ewaida in order to help safeguard against port-scanning types of attacks (see Ewaida, Abstract).
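For illustration only, the following minimal sketch shows a follow-on task pipeline of the kind mapped in claim 6 above: each open port found by a first scan generates a vulnerability scanning task that is queued and then dispatched to a scanning service; all names and values are hypothetical and not taken from Ewaida.

```python
# Illustrative sketch only; not taken from Ewaida. All names and values are hypothetical.
from collections import deque

vulnerability_scanning_queue = deque()

def on_port_scan_result(address, open_ports):
    """For each open port, enqueue a follow-on vulnerability scanning task."""
    for port in open_ports:
        vulnerability_scanning_queue.append((address, port))

def dispatch(scanners):
    """Pop queued tasks and assign each one to a scanning service, round-robin."""
    assignments = []
    i = 0
    while vulnerability_scanning_queue:
        address, port = vulnerability_scanning_queue.popleft()
        assignments.append((scanners[i % len(scanners)], address, port))
        i += 1
    return assignments

on_port_scan_result("10.0.0.5", [22, 443])
print(dispatch(["scanner-a", "scanner-b"]))
# -> [('scanner-a', '10.0.0.5', 22), ('scanner-b', '10.0.0.5', 443)]
```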
As per claim 8, Boia and Kaplan disclose the method of claim 1, further comprising:
However, Boia in view of Kaplan does not explicitly disclose:
determining, by the SAVER assignment optimization model, a first evaluation toolset with which to execute the first software application evaluation task
Ewaida discloses:
determining, by the SAVER assignment optimization model, a first evaluation toolset with which to execute the first software application evaluation task (Under the supervision of supervisor 210, port scanning dispatcher 240 manages the assignment of port scanning tasks to the one or more port scanning services 150, Ewaida, col 10, lines 17-20).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks and Kaplan’s distributed discovery of vulnerabilities in applications with Ewaida’s techniques for analyzing network vulnerabilities. It would have been obvious to such a person to combine Boia and Kaplan with Ewaida in order to help safeguard against port-scanning types of attacks (see Ewaida, Abstract).
As per claim 9, Boia and Kaplan disclose the method of claim 8, wherein determining the first evaluation toolset further comprises:
However, Boia in view of Kaplan does not explicitly disclose the limitation:
parsing, by a SAVER analysis model, evaluation toolset usage data associated with the set of SAVERs to detect a set of available evaluation toolsets; and determining, by the SAVER assignment optimization model and based on the evaluation toolset usage data, whether one or more evaluation toolsets of the set of available evaluation toolsets satisfies one or more evaluation task requirements of the first set of evaluation task requirements of the first software application evaluation task
Ewaida discloses:
parsing, by a SAVER analysis model, evaluation toolset usage data associated with the set of SAVERs to detect a set of available evaluation toolsets; and determining, by the SAVER assignment optimization model and based on the evaluation toolset usage data, whether one or more evaluation toolsets of the set of available evaluation toolsets satisfies one or more evaluation task requirements of the first set of evaluation task requirements of the first software application evaluation task (When the assigned port scanning service 150 returns a report on the port scanning task, port scanning dispatcher 240 may provide the assigned port scanning service 150 with a target scanning duration; the multiple may be determined based on a record of previous port scan durations for the target device and/or address associated with the port scanning task, Ewaida, col 8, lines 13-20 and col 8, lines 49-50. This relates to the port scanning dispatcher assigning tasks to services based on parsed usage data and ensuring that the selected toolset satisfies the evaluation task's requirements).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks and Kaplan’s distributed discovery of vulnerabilities in applications with Ewaida’s techniques for analyzing network vulnerabilities. It would have been obvious to such a person to combine Boia and Kaplan with Ewaida in order to help safeguard against port-scanning types of attacks (see Ewaida, Abstract).
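For illustration only, the following minimal sketch shows toolset selection of the kind mapped in claim 9 above, checking whether an available toolset's recorded capabilities and usage data satisfy the evaluation task requirements; all names and figures are hypothetical and not taken from Ewaida.

```python
# Illustrative sketch only; not taken from Ewaida. All names and figures are hypothetical.
available_toolsets = {
    "port-scan-profile": {"capabilities": {"port-scan"}, "avg_duration_s": 120},
    "web-dast-profile": {"capabilities": {"web-fuzzing", "auth-testing"}, "avg_duration_s": 900},
}

def pick_toolset(task_requirements, max_duration_s):
    """Return a toolset whose capabilities cover the requirements within the time budget."""
    for name, data in available_toolsets.items():
        if task_requirements <= data["capabilities"] and data["avg_duration_s"] <= max_duration_s:
            return name
    return None  # no available toolset satisfies the task requirements

print(pick_toolset({"port-scan"}, max_duration_s=300))  # -> "port-scan-profile"
```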
As per claim 15, Boia and Kaplan disclose the apparatus of claim 14, wherein the SAVER management circuitry is configured to
However, Boia in view of Kaplan does not explicitly disclose:
determine, based on the first vulnerability detection results, a second software application evaluation task associated with the software application, and
wherein the SAVER assignment optimization model is configured to determine a second SAVER to execute the second software application evaluation task
Ewaida discloses:
determine, based on the first vulnerability detection results, a second software application evaluation task associated with the software application, and (For each of the ports identified as open by the port scanning task, a combination of the address and the open port are used to generate a vulnerability scanning task which gets pushed onto vulnerability scanning queue 250 for processing, Ewaida, col 8, lines 49-60. This indicates that after a first scanning result, the system generates/pushes a vulnerability scanning task. The outcome of the first task triggers a follow-on task).
wherein the SAVER assignment optimization model is configured to determine a second SAVER to execute the second software application evaluation task (As vulnerability scanning dispatcher 260 receives each request for a vulnerability scanning task, vulnerability scanning dispatcher 260 pops a vulnerability scanning task off of vulnerability scanning queue 250 and sends the vulnerability scanning task to the assigned one of the one or more vulnerability scanning services 160 for completion, Ewaida, col 10, lines 45-53. Here, the dispatcher assigns the newly generated vulnerability scanning task to one of the vulnerability scanning services, which corresponds to selecting a resource to perform that follow-on task).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks and Kaplan’s distributed discovery of vulnerabilities in applications with Ewaida’s techniques for analyzing network vulnerabilities. It would have been obvious to such a person to combine Boia and Kaplan with Ewaida in order to help safeguard against port-scanning types of attacks (see Ewaida, Abstract).
As per claim 17, Boia and Kaplan disclose the apparatus of claim 10, wherein
However, Boia in view of Kaplan does not explicitly disclose:
the SAVER assignment optimization model is configured to determine a first evaluation toolset with which to execute the first software application evaluation task
Ewaida discloses:
the SAVER assignment optimization model is configured to determine a first evaluation toolset with which to execute the first software application evaluation task (Under the supervision of supervisor 210, port scanning dispatcher 240 manages the assignment of port scanning tasks to the one or more port scanning services 150, Ewaida, col 10, lines 17-20)
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks and Kaplan’s distributed discovery of vulnerabilities in applications with Ewaida’s techniques for analyzing network vulnerabilities. It would have been obvious to such a person to combine Boia and Kaplan with Ewaida in order to help safeguard against port-scanning types of attacks (see Ewaida, Abstract).
As per claim 18, Boia in view of Kaplan discloses the apparatus of claim 17, wherein the apparatus further comprises:
However, Boia in view of Kaplan does not explicitly disclose the limitation:
a SAVER analysis model configured to parse evaluation toolset usage data associated with the set of SAVERs to detect a set of available evaluation toolsets, wherein the SAVER assignment optimization model is configured to determine, based on the evaluation toolset usage data, whether one or more evaluation toolsets of the set of available evaluation toolsets satisfies one or more evaluation task requirements of the first set of evaluation task requirements of the first software application evaluation task
Ewaida discloses:
a SAVER analysis model configured to parse evaluation toolset usage data associated with the set of SAVERs to detect a set of available evaluation toolsets, wherein the SAVER assignment optimization model is configured to determine, based on the evaluation toolset usage data, whether one or more evaluation toolsets of the set of available evaluation toolsets satisfies one or more evaluation task requirements of the first set of evaluation task requirements of the first software application evaluation task (When the assigned port scanning service 150 returns a report on the port scanning task, port scanning dispatcher 240 may provide the assigned port scanning service 150 with a target scanning duration; the multiple may be determined based on a record of previous port scan durations for the target device and/or address associated with the port scanning task, Ewaida, col 8, lines 13-20 and col 8, lines 49-50. This relates to the port scanning dispatcher assigning tasks to services based on parsed usage data and ensuring that the selected toolset satisfies the evaluation task's requirements).
A person of ordinary skill in the art before the effective filing date of the claimed invention would have combined Boia’s receiving and distributing of vulnerability testing tasks and Kaplan’s distributed discovery of vulnerabilities in applications with Ewaida’s techniques for analyzing network vulnerabilities. It would have been obvious to such a person to combine Boia and Kaplan with Ewaida in order to help safeguard against port-scanning types of attacks (see Ewaida, Abstract).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAGHAVENDER CHOLLETI, whose telephone number is (703) 756-1065. The examiner can normally be reached M-Th, 7:30 AM - 4:30 PM EST, and on variable Fridays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RUPAL DHARIA can be reached on (571) 272-3880. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patentcenter for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Respectfully Submitted,
/RAGHAVENDER NMN CHOLLETI/Examiner, Art Unit 2492
/RUPAL DHARIA/Supervisory Patent Examiner, Art Unit 2492