DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claim 1 recites:
A vulnerability evaluation device comprising a model generation unit and a model evaluation unit,
wherein the model generation unit, comprising one or more processors, is configured to
acquire each of vulnerability data that has been disclosed from a database and an attack code that has been disclosed, and
create a calculation model for obtaining an exploit probability indicating a probability that a vulnerability is exploited according to an elapsed time from a disclosure time point of each of the vulnerability data that has been acquired, as a distribution of the elapsed time from the disclosure time point of each of the vulnerability data that has been acquired to a disclosure time point of the attack code for exploiting the vulnerability, and
the model evaluation unit, comprising one or more processors, is configured to,
in response to an input of the elapsed time from the disclosure time point of the vulnerability data to be evaluated, obtain the exploit probability corresponding to the elapsed time that has been input based on the calculation model created by the model generation unit.
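As a non-limiting illustration (hypothetical code, not part of the claims or the record), the recited calculation model can be read as an empirical distribution of elapsed times, with the model evaluation unit returning the cumulative frequency at the input elapsed time:

```python
from bisect import bisect_right

def build_calculation_model(elapsed_days):
    """Model generation sketch: the 'calculation model' is simply the
    sorted sample of elapsed times (days from disclosure of the
    vulnerability data to disclosure of the attack code)."""
    return sorted(elapsed_days)

def exploit_probability(model, t):
    """Model evaluation sketch: the exploit probability for an input
    elapsed time t is the empirical cumulative frequency, i.e., the
    fraction of past vulnerabilities whose attack code appeared
    within t days of disclosure."""
    return bisect_right(model, t) / len(model)

# Hypothetical elapsed times (days) observed for past vulnerabilities.
model = build_calculation_model([3, 10, 10, 30, 90])
print(exploit_probability(model, 10))  # fraction exploited within 10 days
```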
The limitations “create a calculation model for obtaining an exploit probability indicating a probability that a vulnerability is exploited according to an elapsed time from a disclosure time point of each of the vulnerability data that has been acquired, as a distribution of the elapsed time from the disclosure time point of each of the vulnerability data that has been acquired to a disclosure time point of the attack code for exploiting the vulnerability” and “obtain the exploit probability corresponding to the elapsed time that has been input based on the calculation model created by the model generation unit,” as drafted, are a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components. That is, other than reciting “the model generation unit, comprising one or more processors, is configured to” and “the model evaluation unit, comprising one or more processors, is configured to,” nothing in the claim elements precludes the steps from practically being performed in the mind. For example, but for the “comprising one or more processors, is configured to” language, “create a calculation model … obtain the exploit probability …” in the context of this claim encompasses a user, with the aid of pen and paper, manually generating a calculation model and obtaining an exploit probability that corresponds to an input based on the calculation model. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
This judicial exception is not integrated into a practical application. In particular, the additional limitations “wherein the model generation unit, comprising one or more processors, is configured to acquire each of vulnerability data that has been disclosed from a database and an attack code that has been disclosed … in response to an input of the elapsed time from the disclosure time point of the vulnerability data to be evaluated” are merely insignificant extra-solution activities of gathering the vulnerability data, the attack code, and an elapsed time. Moreover, the additional elements of using a model generation unit comprising one or more processors and a model evaluation unit comprising one or more processors amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “wherein the model generation unit, comprising one or more processors, is configured to acquire each of vulnerability data that has been disclosed from a database and an attack code that has been disclosed … in response to an input of the elapsed time from the disclosure time point of the vulnerability data to be evaluated” are merely insignificant extra-solution activities, which are well-understood, routine, conventional activities previously known to the industry. See MPEP 2106.05(d).II.i (“Receiving or transmitting data over a network, e.g., using the Internet to gather data,” OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015)). Moreover, the additional elements of using a model generation unit comprising one or more processors and a model evaluation unit comprising one or more processors amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Dependent claims 2-4 merely recite functional elements to calculate additional information and to obtain an exploit probability based on the additional information. Hence, these claims only recite further elements of the mental process recited in claim 1. The reasons set forth for claim 1 are applicable to claims 2-4, and these claims are ineligible.
Claim 5 is a method claim that recites steps corresponding to the functional elements recited in claim 1. The reasons set forth for claim 1 are applicable to claim 5, and claim 5 is not patent eligible.
Claim 6 is a computer readable medium claim that recites computer executable instructions for performing the functional elements of claim 1. The recitation of a generic computing device and executable instructions to perform an otherwise ineligible abstract idea is insufficient to limit the claim to patent eligible subject matter. Therefore, claim 6 is not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Li et al. US20220171861 (hereinafter Li ‘861) in view of Abbasi et al. US20170083703 (hereinafter Abbasi ‘703).
As per claim 1, Li ‘861 discloses a vulnerability evaluation device comprising a model generation unit and a model evaluation unit (para 0042-0045 “Predicting the Probability of Exploit”; predictive machine learning models), wherein the model generation unit, comprising one or more processors (para 0056), is configured to
acquire each of vulnerability data that has been disclosed from a database and an [attack code data]1 that has been disclosed (para 0042, “First, a set of vulnerability features is identified. Then a set of predictive machine learning models is trained. In particular, considering a certain vulnerability management cycle of n days (typically 30 days which means monthly vulnerability assessment and mitigation), one model is trained for each day to predict the probability of exploit by that day (the first day, the second day, etc.). To train the model for the i.sup.th day, the training dataset is relabeled based on whether each vulnerability has exploit code by the i.sup.th day or not. To predict for a new vulnerability, the vulnerability's corresponding features are fed into all the models which will output the exploit probability by each day.”; para 0043, “The 4,325 vulnerabilities whose exploits appear after they are published was used as the dataset for training the prediction model. To keep the dataset's balance, the present invention randomly sampled 4,325 out of the 46,380 vulnerabilities that do not have known exploits and add them to the dataset as well.”), and
create a calculation model for obtaining an exploit probability indicating a probability that a vulnerability is exploited according to an elapsed time from a disclosure time point of each of the vulnerability data that has been acquired, as a distribution of the elapsed time from the disclosure time point of each of the vulnerability data that has been acquired to a disclosure time point of the attack code for exploiting the vulnerability (para 0045, “The present invention predicts the probability of exploit for vulnerabilities by each day. To do this, the present invention builds one neural network model for each day. When building the model for predicting the probability of exploit by day n after vulnerability release, the present invention generates the training data for this model by relabeling the vulnerability dataset. If a vulnerability’s exploit code becomes available within n or less days, it is labeled as 1; otherwise, it is labeled as 0. Then this relabeled dataset is used to train the neural network model and this model will be used to predict the probability of exploit for day n.”), and
the model evaluation unit, comprising one or more processors, is configured to,
in response to an input of the elapsed time from the disclosure time point of the vulnerability data to be evaluated, obtain the exploit probability corresponding to the elapsed time that has been input based on the calculation model created by the model generation unit (para 0045, “Then this relabeled dataset is used to train the neural network model and this model will be used to predict the probability of exploit for day n. If vulnerability patch scheduling is done monthly (as many organizations do), we need to predict the exploit probability and build one model for each day in the following month. Therefore, about 30 neural network models need to be built in each scheduling cycle. Then to predict for a new vulnerability, the vulnerability's corresponding features can be fed into the 30 trained neural models which will output the exploit probability by each day of the 30 days.”).
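The per-day relabeling quoted from Li ‘861 para 0045 can be paraphrased in code as follows (a sketch with hypothetical data structures; the reference itself trains one neural network per day on each relabeled dataset):

```python
def relabel_for_day(vulns, n):
    """Label a vulnerability 1 if its exploit code became available
    within n or fewer days of release, otherwise 0 (Li '861 para 0045)."""
    return [
        (v["features"], 1 if v["days_to_exploit"] is not None and v["days_to_exploit"] <= n else 0)
        for v in vulns
    ]

# Hypothetical training records: feature vector plus observed days from
# vulnerability release to exploit-code availability (None = no known exploit).
vulns = [
    {"features": [0.2, 0.7], "days_to_exploit": 5},
    {"features": [0.9, 0.1], "days_to_exploit": 40},
    {"features": [0.4, 0.4], "days_to_exploit": None},
]

# One relabeled dataset per day of a 30-day scheduling cycle; Li '861 would
# train a separate model on each and query all 30 for a new vulnerability.
datasets = {n: relabel_for_day(vulns, n) for n in range(1, 31)}
print([label for _, label in datasets[30]])  # labels for the day-30 model
```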
Li ‘861 does not disclose, but Abbasi ‘703 discloses acquire … an attack code that has been disclosed (para 0071, “Referring now to FIG. 5, an exemplary embodiment of the operability of the threat detection system for generating malware training datasets is shown. Herein, malware samples from a malware repository 500 (e.g., malware samples generated with the electronic device 100 of FIG. 1, malware samples from other sources, etc.) undergo a dynamic analysis (items 510 and 520). The dynamic analysis may conduct virtual processing of the malware samples within one or more virtual machines that include monitoring logic for gathering information associated with those behaviors of the samples being monitored. Alternatively, the dynamic analysis may feature actual processing of the samples with a security agent capturing selected behaviors. Both of these analyses produce corresponding event summaries (item 530). Each of the event summaries includes a chronological sequence of detected behaviors presented in a prescribed format.”; para 0073, “Thereafter, the malware sample is classified based on an analysis of the rule aggregation sequence to a sequence of rules that are associated with known malware”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Li ‘861 such that the model evaluation unit is further configured to acquire an attack code that has been disclosed. One would have been motivated to do so to classify malware as part of a larger group, and thereby be able to improve analysis and mitigation of polymorphic malware.
As per claim 4, Li ‘861 in view of Abbasi ‘703 disclose the vulnerability evaluation device according to claim 1 (supra). Li ‘861 discloses the device further comprising a compromise evaluation unit comprising one or more processors, wherein the compromise evaluation unit is configured to calculate the exploit probability of each of the vulnerability included in a network model by applying the calculation model for obtaining the exploit probability created by the model generation unit to the network model including a plurality of dependency relationships of the vulnerability, and calculate a compromise probability that is a probability that an input final goal of an attacker is achieved from a result of the calculation (para 0019, “In another embodiment, the present invention comprises the step of formulating the baseline scheduling problem where for all the vulnerabilities being considered at the current scheduling cycle, the optimization goal is to minimize the total dynamic risk of the vulnerabilities, where the total risk is defined as the sum of all the vulnerabilities' dynamic risk and each vulnerability's dynamic risk depends on when the vulnerability is scheduled to be patched, under four conditions—each vulnerability needs a certain amount of time to patch, each vulnerability is assigned once to exactly one security operator, one security operator can only patch one vulnerability at a time, and if a patch i depends on another patch j, it cannot be installed until patch j is installed.”; para 0042, “To predict for a new vulnerability, the vulnerability's corresponding features are fed into all the models which will output the exploit probability by each day.”; para 0044, “Feature selection. To train the neural network model, the important features to represent each vulnerability need to be selected. 
Since CVSS is used to describe the primary characteristic of vulnerabilities, CVSS metrics are used as part of the features which include: attack vector (how the vulnerability is exploited, e.g. through network or local access), attack complexity …”; para 0045, “The present invention builds neural network models to predict the probability of exploit.”; para 0051, “The basic idea is to divide software into groups based on their functions and other relevant factors. Then the scheduling problem can be solved into two phases. In the first phase (group-level scheduling), the vulnerabilities in each software group are considered as one vulnerability to determine the order of groups to be patched. In the second phase (intra-group scheduling), the present invention considers each group separately and schedules the vulnerabilities in each group to determine their order of patching.”).
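As a rough sketch of the compromise calculation recited in claim 4 (hypothetical code; neither reference provides this), per-vulnerability exploit probabilities can be chained along a dependency path to estimate the probability that the attacker's final goal is reached, assuming independent exploitation events:

```python
def compromise_probability(exploit_probs, attack_path):
    """Probability that the attacker's final goal is achieved along a
    path of dependent vulnerabilities, assuming each vulnerability on
    the path must be exploited and exploitation events are independent."""
    p = 1.0
    for vuln in attack_path:
        p *= exploit_probs[vuln]
    return p

# Hypothetical per-vulnerability exploit probabilities from the calculation model.
probs = {"cve_a": 0.6, "cve_b": 0.5}
print(compromise_probability(probs, ["cve_a", "cve_b"]))  # ≈ 0.3
```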
Claims 5 and 6 are method and computer readable medium claims that correspond to claim 1. Therefore, claims 5 and 6 are rejected over Li ‘861 in view of Abbasi ‘703 for the same reasons as claim 1.
Claims 2 and 3 are rejected under 35 U.S.C. 103 as being unpatentable over Li ‘861 in view of Abbasi ‘703, and further in view of Pfleger de Aguiar et al. US 20180136921 (hereinafter Pfleger de Aguiar ‘921).
As per claim 2, Li ‘861 in view of Abbasi ‘703 disclose the vulnerability evaluation device according to claim 1 (supra). Li ‘861 in view of Abbasi ‘703 do not disclose, but Pfleger de Aguiar ‘921 discloses wherein the model generation unit is configured to calculate, as the calculation model for obtaining the exploit probability, a future exploit probability that is a probability that the vulnerability to be evaluated is to be exploited in the future based on a ratio of the number of samples of all pieces of the vulnerability data and the number of samples of the vulnerability data that can be exploited by the attack code, in addition to the distribution of the elapsed time, and the model evaluation unit is configured to obtain the exploit probability indicating the probability that the vulnerability is exploited by integrating a value of a result of calculation from the elapsed time that has been input and a distribution followed by the elapsed time and a value of the future exploit probability (para 0033, “In embodiment, a patch installation rate for the asset and/or industrial control system is acquired. The operator of a given industrial control system may tend to delay patching or may be vigilant about patching. The rate (e.g., time from patch availability to patching) for all assets of the industrial control system, by type of asset, or by asset is acquired.”; para 0068-69, “In one embodiment, the information output includes information from the patching history of an operator of the industrial control system. Where the average time to patch is used in the modeling, then an additional instantaneous metric may be the probability that the patch has been or will be applied by a given time. Probabilities for risk may be output. 
For cumulative metrics, the mean accumulated time the asset spent or is predicted to spend with risk above a level until a given time, mean accumulated time with risk above a level until reaching a given state, the probability that a patch has not been applied before reaching a given state, and/or a mean time spent in a given risk state before a patch is applied may be transmitted. The predicted or actual time spent in a given risk (e.g., highest risk) before a patch is applied may help the operator better understand the risk being taken by failure to patch. The model information may allow a more informed choice between patching and manufacturing downtime.” It is noted that the patching history constitutes vulnerability data of all assets with vulnerabilities and time to patch; seen another way, this information inherently conveys the times assets with vulnerabilities were not patched [i.e. number of samples of the vulnerability data that can be exploited by the attack code]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Li ‘861 such that the exploit probability integrates a ratio of the number of samples of all pieces of the vulnerability data and the number of samples of the vulnerability data that can be exploited by the attack code. One would have been motivated to do so to incorporate a patch installation rate as an additional metric to fine tune the risk scores, and thereby establish an improved patch maintenance system as taught by Pfleger de Aguiar ‘921.
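The claim 2 combination can be illustrated as follows (a hypothetical sketch; taking the product of the two values is one plausible reading of “integrating” them):

```python
def future_exploit_probability(n_total, n_exploited):
    """Ratio of the number of vulnerability samples with a disclosed
    attack code to the number of all vulnerability samples."""
    return n_exploited / n_total

def combined_probability(future_p, elapsed_cdf_value):
    """Integrate the value from the elapsed-time distribution with the
    future exploit probability (a product is one plausible reading)."""
    return future_p * elapsed_cdf_value

p_future = future_exploit_probability(50000, 5000)  # 0.1
print(combined_probability(p_future, 0.6))          # ≈ 0.06
```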
As per claim 3, Li ‘861 in view of Abbasi ‘703 disclose the vulnerability evaluation device according to claim 1 (supra). Li ‘861 in view of Abbasi ‘703 do not disclose, but Pfleger de Aguiar ‘921 discloses wherein the model generation unit is configured to generate a calculation model in which the distribution of the elapsed time is approximated by a Weibull distribution, and the model evaluation unit is configured to obtain the exploit probability corresponding to the elapsed time that has been input based on the calculation model approximated by the Weibull distribution instead of the distribution of the elapsed time (para 0032, “Other statistical vulnerability-related information may be acquired for populating the models that determine the state transitions. For example, an average time from disclosure to weaponization of a vulnerability is acquired. Studies or specific information based on release dates of versions of software indicate the time from when a vulnerability is created to when the vulnerability is discovered. An average time from disclosure for all vulnerabilities, vulnerability by type, vulnerability by asset type, or other categorization may be determined. As another example, a time history or exploitation of vulnerabilities is acquired from studies or specific information. The average, median, other probabilistic distribution (e.g., Weibull, exponential, log normal, or combination) or other time history of exploitation of vulnerabilities in general or by categories of vulnerabilities is determined.”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Li ‘861 by using a Weibull distribution for the calculation model as claimed. 
One would have been motivated to do so because the Weibull distribution is an effective and efficient statistical distribution for modeling time to failure, as known to one of ordinary skill in the art, or, as applied to the device of Li ‘861, time to exploit.
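A minimal sketch of such a Weibull approximation (the shape and scale values below are hypothetical stand-ins for parameters fitted to observed elapsed times):

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull cumulative distribution function: probability that the
    attack code is disclosed within t days of the vulnerability
    disclosure, under the fitted distribution."""
    return 1.0 - math.exp(-((t / scale) ** shape))

# Hypothetical shape/scale values standing in for parameters fitted to
# the observed elapsed-time samples.
print(weibull_cdf(30.0, shape=0.8, scale=30.0))  # probability of exploit within 30 days
```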
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yoshiaki et al. JP 2019-212143 (provided by Applicant via IDS filed on 5/28/24) discloses a prediction method for predicting damage caused by a cyberattack. Yoshiaki ‘143 discloses:
[0091] The probability distribution 1005 with respect to the number of days elapsed from the date of occurrence of the vulnerability information to the time when the Exploit code is released can be calculated by generating a normalized cumulative histogram, over the number of days elapsed since the occurrence of the vulnerability information, of the vulnerability information for which the Exploit code has been released. For example, the normalized histogram information is stored in a table or CSV file (elapsed days, normalized cumulative frequency).
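The computation described in the quoted paragraph can be sketched along these lines (hypothetical elapsed-day samples):

```python
from collections import Counter

def normalized_cumulative_histogram(elapsed_days):
    """Rows of (elapsed days, normalized cumulative frequency) for the
    days elapsed from vulnerability disclosure to exploit-code release."""
    counts = Counter(elapsed_days)
    total = len(elapsed_days)
    rows, cumulative = [], 0
    for day in sorted(counts):
        cumulative += counts[day]
        rows.append((day, cumulative / total))
    return rows

# Hypothetical elapsed-day samples; the rows could be written to a CSV of
# (elapsed days, normalized cumulative frequency) as in para [0091].
print(normalized_cumulative_histogram([1, 1, 7, 7, 7, 30]))
```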
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JUNG W KIM whose telephone number is (571)272-3804. The examiner can normally be reached Monday-Friday, 10 a.m. - 6 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Cohen Johnson can be reached at 571-272-2238. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JUNG W KIM/Supervisory Patent Examiner, Art Unit 2494
1 It is noted that none of the claims recites using the acquired attack code itself; instead, the claims recite using data related to the attack code, i.e., the disclosure time point of the attack code.