Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending and rejected. Claims 1 and 13 are amended and independent. No claims have been canceled or added. The amendments to the claims have been entered.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/19/2025 has been entered.
Response to Arguments
Applicant's arguments filed 12/19/2025 have been fully considered but they are not persuasive.
In response to applicant's arguments against the references individually (regarding how Bellis is prevalence-driven and does not teach corpus-driven bug report mining for the first score), one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Applicant argues that Labreche teaches the identification of common textual identifiers for topic modeling/labeling and per-vulnerability feature extraction, not that the claimed first score estimates exploit-development likelihood based on statistical patterns mined from a historical corpus of bug reports. Examiner respectfully disagrees, noting that the per-vulnerability feature extraction of Labreche forms statistical patterns based on which the predictive model may be trained (¶48, "The identified or extracted/selected topics, along with the rankings of the importance of such topics for each vulnerability identified will be provided with a series of additional selected features to a labeling system 330, which will be used to further train a machine learning model 350.", ¶17, "Following this, supervised learning is used to train a predictive model for each class, i.e., each type of attack, using historical attack, malware, and exploit data as labels for vulnerabilities. Each model is trained on historical data using the features previously built, and the combination of all model prediction outputs identifies the predicted type(s) of attack.") and that the likelihood of the 'attack' which may be predicted is the development of an exploit (¶44, "Each of the attack class predictors or classifiers can generate a probability value indicating such a likelihood or probability of that particular attack type occurring with respect to the new vulnerability as an output therefrom.", ¶16, "For example, such attacks can include an attack through malware, Advanced Persistent Threats (APT), direct exploitation by a malicious actor, published exploits or weaponized exploits, or other attack types.").
The generated probability of an attack, such as the publishing of an exploit, trained on features extracted from historical bug reports, corresponds to the claimed first score of the invention. Thus, applicant's argument is unpersuasive.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., the 'explicit cascaded dependency') are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Applicant argues that the 'cascaded dependency' of the second score based on the first score, as claimed, is not taught as a whole by the combination, arguing that Bellis requires/depends on the usage of prevalence features, and that the first score based on corpus-derived bug-report patterns and explicit use of contextual embeddings are absent from Bellis, further arguing that 'grafting' Bellis's dependency into Labreche's environment would still lack the historical bug-report corpus and embedding-based NLP pipeline and that the combination would retain Bellis's prevalence-centric model assumptions.
Examiner respectfully disagrees. Labreche teaches the first score, as shown above. Applicant's argument that Bellis requires/depends on the usage of prevalence features is noted, but Examiner argues that the prevalence features are not a core component of the invention of Bellis. While Bellis indeed discloses embodiments of the invention requiring the prevalence feature (¶104, "At block 300, training data is provided to one or more machine learning computers. The training data comprises a prevalence feature and a developed exploit feature/developed exploit time feature for each software vulnerability in a training set."), Bellis does not teach that the prevalence feature is mandatory for any modification of Bellis. From ¶38:
"The training data comprises one or more features corresponding to a first plurality of vulnerabilities that have been selected for training the one or more machine learning computers. The one or more features may comprise one or more prevalence features… The training data may also comprise other features"
While Bellis requires there to be one or more features corresponding to the plurality of vulnerabilities, and has the prevalence feature as the preferred embodiment, other features may also serve as the required features corresponding to the plurality of vulnerabilities. Although the preference for the prevalence feature is a specific detail, Bellis also teaches that the present disclosure may be practiced without the specific details (Bellis, ¶14). Features that correspond to unstructured data encoded into contextual embeddings are also taught in Bellis (¶42, "However, some features may exist as unstructured data that may or may not undergo feature transformation to enable organization in a structured format. Non-limiting examples of feature transformation involve tokenization, n-grams, orthogonal sparse bigrams, quantile binning, normalization, and Cartesian products of multiple features.", ¶59, ¶76, "Any number of a variety of other vulnerability features may also be collected. Non-limiting examples of such features include the following:… a textual description comprising a summary of a particular vulnerability;"). As such, one of ordinary skill in the art would have a reasonable expectation of success in implementing Labreche's predictive model, trained to extract the tokenized features from the historical data (Labreche, ¶48, ¶6, ¶15), as the first predictive model of Bellis to generate the first score of Bellis, wherein Bellis further teaches the cascaded dependency of the second score based on the first score.
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., the 'direct triaging using the two or more scores') are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
The claimed invention simply recites 'triaging the first vulnerability with respect to other vulnerabilities using the two or more scores'. This does not indicate that the triaging has only one step. The intermediate risk score-adjustment layer is not recited in the claims, but one of ordinary skill in the art could readily use the risk-score adjustment layer to triage the vulnerability using both scores, as the claim requires.
Applicant argues that the cited art fails to disclose or render obvious the claimed invention because it does not contemplate historical trend analysis from bug reports as a basis for generating a predictive score. Applicant further argues that Labreche's approach is 'instance-based', not historical corpus-based. Examiner respectfully disagrees, noting that the predictive model is trained on features from a corpus of text that is historical data (Labreche, ¶17), and that the topic/feature data/labels extracted from those corpuses of text are used to train, and are input to, the predictive model to determine the likelihood of the vulnerability being exploited (i.e., by identifying/mining statistical patterns in the historical corpuses of text).
Applicant argues that there is insufficient motivation to combine Labreche with Bellis, further arguing that the Examiner's rationale amounts to hindsight reconstruction using Applicant's disclosure as a blueprint, that the combination integrates two fundamentally different predictive architectures, and that Bellis does not contemplate accepting prose-derived scores as inputs. Examiner respectfully disagrees, noting that Bellis does, in fact, contemplate accepting prose as inputs (¶42, "However, some features may exist as unstructured data that may or may not undergo feature transformation to enable organization in a structured format. Non-limiting examples of feature transformation involve tokenization, n-grams, orthogonal sparse bigrams, quantile binning, normalization, and Cartesian products of multiple features."), such that a prose-derived first score could act as input to the second predictive model. The architectures of the predictive models of Labreche and Bellis are not fundamentally different: both teach numerical features as input to the first model (the prevalence features of Bellis and the label indicating occurrence of the attack of Labreche (Labreche, ¶8, "The label may indicate that the known attack has occurred for the new vulnerability or known vulnerabilities, such as in relation to a user or a third party", ¶52, "The pre-processing pipeline or feature extraction module 404 or instructions may additionally include numerical feature extraction instructions.")), as well as unstructured prose data (Bellis, ¶42, Labreche, ¶17). Furthermore, there is ample motivation to integrate the cascading pipeline of Bellis into Labreche, as it enables remediation of vulnerabilities even before exploits can be developed (Bellis, ¶58). Thus, there is sufficient motivation to combine Labreche with Bellis to achieve the claimed invention.
Applicant argues that the combination is unsupported and contrary to the teachings of Bellis, arguing that Bellis repeatedly emphasizes prevalence as the key input for its scoring models, that Bellis's architecture is optimized for structured, numeric data, and that the combination is a fundamental departure from Bellis's design philosophy and teaches away from the claimed approach. Examiner respectfully disagrees for reasons similar to those stated and cited above, noting that the combination is not a fundamental departure from Bellis's design philosophy; that, although Bellis emphasizes prevalence data, it does not require it, nor does Labreche teach away from the prevalence data; and that one of ordinary skill in the art could modify Labreche using Bellis's cascading method to achieve the claimed invention with sufficient motivation to do so and a reasonable expectation of success.
Applicant’s arguments regarding claims 2-20 hinge on the arguments for claim 1, and are likewise unpersuasive.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claims 1 and 13 recite 'applying input data to a prediction engine, the input data comprising one or more bug reports of a first vulnerability, and a corpus of bug reports of historical vulnerabilities'. The specification only describes the corpus of bug reports of historical vulnerabilities as being used as training data (see specification filed 7/20/2023, ¶54, ¶63). While training data is input into the model, it is not input after the model has already been trained, and the specification does not disclose the one or more bug reports of a first vulnerability being included in the training data. Thus, it is not reasonably conveyed to one skilled in the relevant art that the inventor, at the time the application was filed, had possession of the claimed invention of claims 1 and 13. Due to their dependency on claims 1 and 13, claims 2-12 and 14-20 are similarly rejected for incorporating the subject matter above by reference.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 4-8, 10, 13, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Labreche (US 20230038196 A1) in view of Bellis (Bellis et al., US 20220207152 A1) and Weber (Weber et al., US 20220116420 A1).
Regarding Claim 1, and substantially claim 13, Labreche teaches a method of predicting risks related to software vulnerabilities, as well as a computing apparatus for predictions related to vulnerabilities, the computing apparatus comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the apparatus to perform the method (Labreche, ¶74), the method comprising: applying input data to a prediction engine, the input data comprising one or more bug reports of a first vulnerability, and a corpus of bug reports of historical vulnerabilities, wherein the one or more bug reports comprise prose that is unstructured data (Labreche, ¶51, input features to an attack class predictor include the vulnerability's description (bug report). ¶6, the description is analyzed using Natural Language Processing (prose that is unstructured data). ¶47-¶48, "… prior vulnerabilities from a selected time period, (e.g. 10 years, 20 years, or other time period) can be utilized to identify descriptions common with past identified vulnerabilities and to extract or develop common textural identifiers, including words, phrases, or other identifiers. Such vulnerability details 300 are used to generate a series of vulnerability features 320 that will be used by the predictive models or engines of a labeling system or module 330 to develop labels to identify attacks likely to be associated with or brought against the new or newly identified vulnerability…. Each of the topics are identified through the use of a topic model, which can include a natural language processing algorithm configured to identify an underlying concept of a corpus of extracted text from each of the vulnerability descriptions.").
Labreche further teaches generating output data in response to the input data being applied to the prediction engine, the output data comprising one or more scores including a value for a first score, the first score representing a likelihood of an exploit being developed for the first vulnerability based at least in part on statistical patterns mined from the historical corpus of bug reports of historical vulnerabilities (¶47-¶48, ¶17, vulnerability details such as the description of a vulnerability from ten years ago can be used by a labeling system to train a machine-learning model (based at least in part on statistical patterns mined from the historical corpus of bug reports of historical vulnerabilities) to identify an attack likely to be associated with a new vulnerability. ¶16, the attack whose likelihood is predicted can be the publishing of an exploit (thus, the development of an exploit)).
Labreche does not teach the rest of claim 1. In an analogous art, Bellis teaches generating output data in response to the input data being applied to the prediction engine, the output data comprising two or more scores including a value for a first score and a value for a second score, the first score representing a likelihood of an exploit being developed for the first vulnerability and the second score representing a likelihood the first vulnerability will be attacked using said exploit (Bellis, Abstract, the models, in response to input data, output a prediction of whether an exploit will be developed for each vulnerability, and additionally output a prediction of whether an exploit that has yet to be developed will be used in an attack. Fig. 1, Developed Exploit Feature 118, Exploit Development Time Feature 120, and Attack Feature 122 represent whether an exploit is developed/can be developed ('Yes/No'), the amount of time needed to develop an exploit (NULL if one cannot be developed, number of days if it can), and the amount of time between exploit development and an attack (NULL if an exploit cannot be developed, number of days if it can)), wherein the second score is based at least in part on the first score being an input; and triaging the first vulnerability with respect to other vulnerabilities using the two or more scores (Bellis, ¶111, the output data, including the likelihood of exploit development and likelihood of attacks using that exploit, are used to prioritize the remediation of software vulnerabilities. ¶93, the predicted value of a developed exploit feature (i.e. first score) is used as input in a second predictive model to determine whether it would be used in an attack (i.e. second score)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Labreche using Bellis to use the predicted likelihood of exploits being developed to predict the likelihood of attacks using those developed exploits, and to further triage the vulnerabilities with respect to those predictions, because predicting whether an exploit will be developed and whether the exploit will be used in an attack allows one to prioritize remediating the vulnerabilities according to the risk posed (Bellis, ¶6, ¶7). Doing so would result in an invention where the machine learning model of Bellis integrates the elements of Labreche for the ability to take prose as input data, as Bellis itself notes is possible as a vulnerability feature (Bellis, ¶68-¶73), and where the risk scores of Labreche obtain the benefit of the additional analyses Bellis provides.
Although Labreche teaches that the prediction engine comprises a natural language model trained to encode the unstructured data into feature data (Labreche, ¶48), neither Labreche nor Bellis teaches that the prediction engine comprises a natural language model trained to encode the unstructured data into contextual embeddings. In an analogous art, Weber teaches that the prediction engine comprises a natural language model trained to encode the unstructured data into contextual embeddings (Weber, ¶53, "As shown in FIG. 2, a method 200 for automated phishing detection and phishing threat mitigation includes receiving one or more electronic communications data S205, feature data extraction and preprocessing S210, generate word embeddings and/or sentence embeddings for communication corpus S220, compute similarity score for target email S230, and [return email with highest score] identify whether remediate cybersecurity threat S240. The method 200 may additionally include computing a machine learning-based phishing threat score S235", ¶68, the model converts text (unstructured data) into word/sentence embeddings that represent similar words with similar meanings (contextual embeddings)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Labreche in view of Bellis using Weber to encode the unstructured data into contextual embeddings because doing so enables expedited triage or remediation of an identified cybersecurity threat (Weber, ¶54).
Regarding Claim 4, and substantially claim 16, Labreche in view of Bellis and Weber teaches the method of claim 1, wherein: the prediction engine is trained to classify the first vulnerability based on similarities of the first vulnerability to training vulnerabilities, wherein a set of training data used to train the prediction engine comprises training bug reports and the training vulnerabilities, and, in the set of training data, each of the training vulnerabilities is associated with a corresponding training bug report of the training bug reports. (Labreche, ¶7, “The one or more attack type classifiers may be trained via a machine learning model with known vulnerabilities, associated known attack types, and associated other one or more vulnerability features. For example, the machine learning model can be trained using data (e.g. historical data) relating to prior topic determinations for similar vulnerabilities;” (training vulnerabilities) ¶48, “Each of the features 320 generally will comprise or include a combination of properties associated with a vulnerability, along with generation of a topic vector including a series of topics 322 generated from the recognized/selected text from the vulnerability description 302… The identified or extracted/selected topics, along with the rankings of the importance of such topics for each vulnerability identified will be provided with a series of additional selected features to a labeling system 330, which will be used to further train a machine learning model 350.”, (training bug report for training vulnerabilities) topics such as properties of the vulnerability extracted from the vulnerability description (bug report) are labeled (and thus associated with each other) and are used to train a machine learning model to identify similar vulnerabilities).
Regarding Claim 5, Labreche in view of Bellis and Weber teaches the method of claim 4, wherein: the prediction engine has been trained to learn patterns in the training bug reports and the similarities are based, in part, on a degree to which the one or more bug reports matches the learned patterns to determine from among the training vulnerabilities a subset of similar vulnerabilities from the training vulnerabilities (Labreche, ¶47, common textural identifiers such as words, phrases, or other identifiers (learned patterns) in the vulnerability description are used by the attack predictor system (which includes the machine learning model/attack class predictor, see Fig. 3) to identify vulnerability descriptions common with the past identified vulnerabilities (subset of similar vulnerabilities from the training vulnerabilities)).
Bellis further teaches that the value of the first score of the first vulnerability is determined based on probabilities that exploits were developed for the subset of similar vulnerabilities; and the value of the second score of the first vulnerability is determined based on probabilities that the exploits were used to attack the subset of similar vulnerabilities (Bellis, ¶97, the training set for the machine learning model uses features including a value for if an exploit was developed for each of the training vulnerabilities and an attack feature for determining if it will be used in an attack on the vulnerabilities. ¶38, the features in the training set are for selected particular vulnerabilities (subset of similar vulnerabilities). ¶60, the features include the categorical (similar) identity of the vulnerability) (see claim 1 for motivation to combine).
Regarding Claim 6, and substantially claim 17, Labreche in view of Bellis and Weber teaches the method of claim 1, wherein the prediction engine comprises one or more machine learning (ML) methods, the one or more ML methods selected from the group consisting of: a transformer neural network (Weber, ¶70, see claim 1 for motivation to combine), a natural language processing method (Labreche, ¶6, "The topics can include a selected number of topics derived from text extracted from the description of the vulnerability (e.g. using Natural Language Processing (NLP) to generate topics based on text derived from prior vulnerabilities). The topics can be developed using machine learning models to create the series of different/separate topics."), a named entity recognition keyword extraction method, a text classification neural network, and a tokenization neural network.
Regarding Claim 7, and substantially claim 18, Labreche in view of Bellis and Weber teaches the method of claim 1, wherein:
in addition to the unstructured data of the one or more bug reports, the input data further comprises structured data including metadata (Labreche, ¶47, ¶48, vulnerability details that the user provides as input include the vulnerability description (unstructured data of the one or more bug reports) as well as data such as a list of vulnerable products, a list of vulnerable configurations, a list of references (metadata), a Bugtraq ID, and a CVSS score (structured data)); the prediction engine generates first predictive information from the structured data (Labreche, ¶64, ¶66, vulnerability features are extracted from the vulnerability details, and numerical features (predictive information) corresponding to the vulnerability features such as the CVSS score (part of structured data) are calculated), the prediction engine generates second predictive information by applying the unstructured data to a second ML method comprising a natural language processing method (Labreche, ¶6, "The topics can include a selected number of topics derived from text extracted from the description of the vulnerability (e.g. using Natural Language Processing (NLP) to generate topics based on text derived from prior vulnerabilities). The topics can be developed using machine learning models to create the series of different/separate topics.", the topics (predictive information) are generated using a natural language processing method), and a score is generated based on the first predictive information and the second predictive information (Labreche, ¶67, "At block 612, the system will apply, transmit, or provide as an input (determined at block 612), the topic vectors and numerical features to an attack likelihood classifier or class predictor or a first attack likelihood classifier or class predictor. 
Such an application may result in an output that indicates the likelihood that an attack associated with the attack likelihood classifier or class predictor may occur in relation to the vulnerability.").
Bellis further teaches that the prediction engine generates first predictive information by applying structured data to a first ML method (Bellis, ¶105, a prevalence feature is provided as an input to an ML method to generate predictions (predictive information). ¶43, the prevalence feature can consist of a subset of exploited systems (structured data, as a subset is a structure)); and that the two or more scores are generated based on predictive information (Bellis, ¶105, the prediction indicates whether an exploit will be developed for a software vulnerability (first score). ¶108-¶111, a further prediction is made, based on the predicted likelihood of exploit development, whether an attack will use the developed exploit (second score)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Labreche in view of Bellis and Weber using Bellis to generate predictive information by applying structured data to an ML method and to generate the two or more scores based on it, because a machine learning model can be used to correlate the input data (such as prevalence features) to the likelihood of exploit development and the likelihood of future attacks based on the exploit (Bellis, ¶39). Weber further teaches that the natural language processing method is a transformer neural network (Weber, ¶70) (see claim 1 for motivation to combine).
Regarding Claim 8, and substantially claim 19, Labreche in view of Bellis and Weber teaches the method of claim 1. Bellis further teaches that the prediction engine comprises a first ML method that generates the first score (Bellis, ¶92, "Prediction logic 208 may apply a prediction model for determining whether and/or when an exploit will be developed for a particular software vulnerability"); the prediction engine comprises a second ML method that generates the second score; and the second ML method uses the first score as an input to generate an output comprising the second score (Bellis, ¶92-¶93, "Thus, values of a developed exploit feature/developed exploit time feature may be predicted. These values may be sent to risk assessment computer(s) 202 as output data 214 or at least some of these values may be used as input data for predicting values of an attack feature. If predicted values of a developed exploit feature/developed exploit time feature are used as input data, prediction logic 208 may apply a prediction model for determining whether an exploit to be developed for a particular software vulnerability will be used in an attack". ¶82, the prediction models are ML methods) (see claim 1 for motivation to combine).
Regarding Claim 10, and substantially claim 20, Labreche in view of Bellis and Weber teaches the method of claim 1. Bellis further teaches applying another input data to the prediction engine, and in response generating the output data comprising another two or more scores including another value of the first score and another value of the second score (Bellis, Abstract, "The one or more models are applied to input data comprising the prevalence feature for each vulnerability of a second plurality of vulnerabilities. Based on the application of the one or more models to the input data, output data is received. The output data indicates a prediction of whether an exploit will be developed for each vulnerability of the second plurality. Additionally or alternatively, the output data indicates, for each vulnerability of the second plurality, a prediction of whether an exploit that has yet to be developed will be used in an attack."); and triaging the second vulnerability with respect to the first vulnerability using the two or more scores and the another two or more scores, such that the second score serves a primary role and the first score serves a secondary role in determining an order in which the second vulnerability is triaged with respect to the first vulnerability (Bellis, ¶111, "In some embodiments, the output data of block 310 is used to adjust a risk score for one or more software vulnerabilities. Risk scores may be used to prioritize remediation of software vulnerabilities. For example, remediation may be prioritized in the following order: (1) software vulnerabilities predicted to have exploits developed for them, where the exploits are predicted to be used in attacks; (2) software vulnerabilities predicted to have exploits developed for them, where the exploits are predicted not to be used in attacks; and (3) software vulnerabilities predicted not to have exploits developed for them. 
Furthermore, software vulnerabilities predicted to have exploits developed for them may be prioritized according to when exploits are predicted to be developed and/or when attacks are predicted to occur.").
Contextualized by Labreche, where the input data of a vulnerability is a bug report, Labreche in view of Bellis and Weber teaches another input data comprising another bug report of a second vulnerability. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Labreche in view of Bellis and Weber using Bellis to use the two scores to triage the vulnerabilities, because this allows for the prioritization of higher-risk vulnerabilities, as not all vulnerabilities can be remediated at the same time (Bellis, ¶6).
Claims 2, 11, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Labreche in view of Bellis and Weber as applied to claims 1 and 13 above, and further in view of Dunn (Dunn et al., US 20230336581 A1).
Regarding Claim 2, and substantially claim 14, Labreche in view of Bellis and Weber teaches the method of claim 1. Labreche in view of Bellis and Weber does not teach the rest of the claim. In an analogous art, Dunn teaches generating, as part of the output data resulting from the input data being applied to the prediction engine, a third score representing a likelihood the first vulnerability will become a common vulnerability and exposure (CVE) (Dunn, ¶47, ¶65, the node exposure score generator takes outputs regarding the device weakness as input and calculates the possibility of future critical vulnerability CVEs); and triaging the first vulnerability with respect to other vulnerabilities using the third score (Dunn, ¶72, ¶74, CVE information is used to intelligently prioritize remediation actions).
Contextualized by Labreche in view of Bellis and Weber, where the vulnerability is additionally triaged with respect to the first and second scores, Dunn suggests triaging the first vulnerability with respect to other vulnerabilities using the first score, the second score, and the third score. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Labreche in view of Bellis and Weber using Dunn to generate a third score representing a likelihood the vulnerability will become a CVE and to triage vulnerabilities with respect to the CVE, because CVEs allow for the identification of important vulnerabilities in light of a particular network and the danger the vulnerabilities pose, allowing for a determination of why a particular vulnerability is important and whether it should be remediated first (Dunn, ¶71).
Regarding Claim 11, Labreche in view of Bellis, Weber, and Dunn teaches the method of claim 2. Bellis further teaches applying another input data to the prediction engine, and in response generating the output data comprising another two or more scores including another value of the first score and another value of the second score (Bellis, Abstract, "The one or more models are applied to input data comprising the prevalence feature for each vulnerability of a second plurality of vulnerabilities. Based on the application of the one or more models to the input data, output data is received. The output data indicates a prediction of whether an exploit will be developed for each vulnerability of the second plurality. Additionally or alternatively, the output data indicates, for each vulnerability of the second plurality, a prediction of whether an exploit that has yet to be developed will be used in an attack."); and triaging the second vulnerability with respect to the first vulnerability using the two or more scores and the another two or more scores, such that the second score serves a primary role and the first score serves a secondary role in determining an order in which the second vulnerability is triaged with respect to the first vulnerability (Bellis, ¶111, "In some embodiments, the output data of block 310 is used to adjust a risk score for one or more software vulnerabilities. Risk scores may be used to prioritize remediation of software vulnerabilities. For example, remediation may be prioritized in the following order: (1) software vulnerabilities predicted to have exploits developed for them, where the exploits are predicted to be used in attacks; (2) software vulnerabilities predicted to have exploits developed for them, where the exploits are predicted not to be used in attacks; and (3) software vulnerabilities predicted not to have exploits developed for them. 
Furthermore, software vulnerabilities predicted to have exploits developed for them may be prioritized according to when exploits are predicted to be developed and/or when attacks are predicted to occur.").
Contextualized by Labreche, where the input data of a vulnerability is a bug report, Labreche in view of Bellis, Weber, and Dunn teaches another input data comprising another bug report of a second vulnerability. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Labreche in view of Bellis, Weber, and Dunn using Bellis to use the two scores to triage the vulnerabilities, because this allows for the prioritization of higher-risk vulnerabilities, as not all vulnerabilities can be remediated at the same time (Bellis, ¶6).
Dunn further teaches generating the output data further comprising another value of the third score (Dunn, ¶65, the node exposure score generator takes outputs regarding the device weakness as input and calculates the possibility of future critical vulnerability CVEs; ¶60, vulnerabilities are prioritized compared to other vulnerabilities, such that output is generated for multiple vulnerabilities), such that for triaging the vulnerabilities, the third score serves a primary role (Dunn, ¶71, the existing CVEs in the network are used to prioritize which vulnerabilities are most important to remediate). Contextualized by Bellis, where the second score serves a primary role and the first score serves a secondary role, Dunn suggests that the third score serves a primary role, the second score serves a secondary role, and the first score serves a tertiary role in triaging vulnerabilities. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Labreche in view of Bellis, Weber, and Dunn using Dunn to have the third score serve a primary role in triaging vulnerabilities, because CVEs allow for the identification of important vulnerabilities in light of a particular network and the danger the vulnerabilities pose, allowing for a determination of why a particular vulnerability is important and whether it should be remediated first (Dunn, ¶71).
Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Labreche in view of Bellis and Weber as applied to claims 1 and 13 above, and further in view of Schmidtler (Schmidtler et al., US 20180013772 A1).
Regarding Claim 3, and substantially claim 15, Labreche in view of Bellis and Weber teaches the method of claim 1. Weber further teaches training through reinforcement learning (¶42, the machine learning models can use reinforcement learning) (see claim 1 for motivation to combine). Labreche in view of Bellis and Weber does not teach the rest of claim 3. In an analogous art, Schmidtler teaches signaling the values of the two or more scores to a user; receiving user feedback regarding the values of the two or more scores; and performing training based on the received user feedback to update the prediction engine (Schmidtler, ¶38, feature vector scores (the values of the two or more scores) are made accessible to a human judge/expert for evaluation (signaled to a user), before the model training component receives feedback about those scores (receiving user feedback regarding the values of the two or more scores) and uses it to train the predictive model (performing training based on the received user feedback to update the prediction engine)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Labreche in view of Bellis and Weber using Schmidtler to obtain user feedback regarding the scores and to train the prediction engine based on that feedback, because models that are properly trained produce more accurate results (Schmidtler, ¶24).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Labreche in view of Bellis and Weber as applied to claim 1 above, and further in view of Cam (US 20160248794 A1).
Regarding Claim 9, Labreche in view of Bellis and Weber teaches the method of claim 1. Bellis further teaches applying another input data to the prediction engine, and in response, generating the output data comprising another two or more scores including another value of the first score and another value of the second score (Bellis, Abstract, "The one or more models are applied to input data comprising the prevalence feature for each vulnerability of a second plurality of vulnerabilities. Based on the application of the one or more models to the input data, output data is received. The output data indicates a prediction of whether an exploit will be developed for each vulnerability of the second plurality. Additionally or alternatively, the output data indicates, for each vulnerability of the second plurality, a prediction of whether an exploit that has yet to be developed will be used in an attack."); and triaging the second vulnerability with respect to the first vulnerability using the values of the two or more scores and using the another values of the two or more scores (Bellis, ¶111, both probabilities of exploit development and attacks using that exploit (first and second score) may be used to prioritize the remediation of a software vulnerability).
Contextualized by Labreche, where the input data of a vulnerability is a bug report, Labreche in view of Bellis and Weber teaches another input data comprising another bug report of a second vulnerability. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to further modify Labreche in view of Bellis and Weber using Bellis to use the two scores to triage the vulnerabilities, because this allows for the prioritization of higher-risk vulnerabilities, as not all vulnerabilities can be remediated at the same time (Bellis, ¶6).
Although Labreche in view of Bellis and Weber teaches prioritizing exploits with attacks over exploits without attacks (Bellis, ¶6, ¶111), Labreche in view of Bellis and Weber does not teach the rest of the claim. In an analogous art, Cam teaches that based on the values and another value for the primary score, the first vulnerability and the second vulnerability are assigned to bins that correspond to respective ranges for the primary score; whichever of the first vulnerability and the second vulnerability is assigned to a bin that corresponds to a higher value for the primary score is ranked higher; and when the first vulnerability and the second vulnerability are assigned to a same bin, then whichever of the first vulnerability and the second vulnerability has a higher value for the secondary score is ranked higher (Cam, ¶63, "Multiple node attributes are ranked as primary attributes, secondary attributes, tertiary attributes, and the like. During the ancestor nomination process, the primary attribute values of nodes are considered first for comparison and nomination. If the primary attribute values of two nodes happen to be very close to each other according to its threshold, then the next high-ranking attribute (i.e., secondary attribute) values of these two nodes are compared to break the tie. This tie-breaking process is applied until the tie is broken"; the multiple attributes (two or more scores) are used to rank the nodes, where the primary attribute (primary score) has a threshold of closeness (bins that correspond to respective ranges) within which the nodes are considered tied (assigned to a same bin), and the secondary attribute (secondary score) is compared to break the tie).
Contextualized by Bellis, where vulnerabilities whose exploits are predicted to be used in attacks are prioritized over vulnerabilities that have exploits but are not predicted to be attacked, such that the second score is considered before the first score (and thus the second score is the primary score, as it is considered first), and where ranking is performed to triage vulnerabilities for remediation, Cam suggests the rest of the claimed limitations.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Labreche in view of Bellis and Weber using Cam to achieve the claimed invention because it would allow for breaking ties between two vulnerabilities whose primary scores are too close (Cam, ¶63).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Labreche in view of Bellis and Weber as applied to claim 1 above, and further in view of Bulut (Bulut et al., US 20210075814 A1).
Regarding Claim 12, Labreche in view of Bellis and Weber teaches the method of claim 1. Labreche in view of Bellis and Weber does not teach the rest of claim 12. In an analogous art, Bulut teaches applying the input data to the prediction engine further generates the output data comprising explanations of an attack mode for the vulnerability, wherein the explanations include information selected from the group consisting of tactics information, techniques information, procedures information, access vector information, attack complexity information, authentication information, confidentiality information, integrity information, and availability information (Bulut, ¶59, "Metric assignment component 108 can employ such a model defined above (e.g., LSTM, GRU, CNN, etc.) to assign one or more risk assessment metrics based on vulnerability data of a compliance process, where such one or more risk assessment metrics can comprise exploitability metrics, impact metrics, and/or another risk assessment metric of a compliance process vulnerability scoring system. For example, such one or more risk assessment metrics can comprise exploitability metrics and/or impact metrics including, but not limited to, attack vector (AV), access complexity (AC), authentication (Au), confidentiality impact (C), integrity impact (I), availability impact (A), and/or another exploitability metric and/or impact metric of a compliance process vulnerability scoring system such as, for instance, the Common Vulnerability Scoring System (CVSS) and/or another compliance process vulnerability scoring system.").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Labreche in view of Bellis and Weber using Bulut to achieve the claimed invention because it allows for the protection of operating system resources (Bulut, ¶105, "In some embodiments, system 400c can comprise an illustration of a health check control example that can be implemented using one or more embodiments of the subject disclosure described herein to protect one or more operating system resources").
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chawla (Chawla et al., US 20170300698 A1) teaches applying input data to a prediction engine, the input data comprising one or more bug reports of a first vulnerability, and a corpus of bug reports of historical vulnerabilities, wherein the one or more bug reports comprise prose that is unstructured data, and wherein the prediction engine comprises a natural language model trained to encode the unstructured data into contextual embeddings; and generating output data in response to the input data being applied to the prediction engine (¶6, ¶7, vulnerability reports are compared to historical vulnerability reports to see if they are sufficiently close; ¶61, ¶62, the similarity is calculated by tokenizing the reports).
Shah (SHAH et al., US 20240037157 A1) teaches a vulnerability corpus with historical vulnerabilities (¶28), as well as a transformer-based neural network model that analyzes text to find the context of words in the passages and whether they have similarities to known vulnerabilities to determine the likelihood that events will occur (¶29-¶32).
Pan (PAN et al., WO 2023092511 A1) teaches a matching module that takes an issue report (bug report) and an anchor content representing known vulnerabilities (corpus of historical bug reports) and determines the similarity between them using a transformer-based neural network model and matching module (Fig. 6B, Fig. 7B).
Inagaki (Inagaki et al., US 20180032736 A1) teaches the usage of multiple scores calculated for a vulnerability, such as the vulnerability score and the severity score, to schedule/prioritize the remediation of the vulnerability (Fig. 3).
Tarrant (Tarrant et al., US 20220100868 A1) teaches performing triage on identified security vulnerabilities using machine learning (¶18) and determining, using historical data, whether a vulnerability is exploitable (¶34).
Cohen (Cohen et al., US 20050193430 A1) teaches prioritizing fixes for vulnerabilities based on the possibility of the vulnerabilities being attacked or exploited (¶66, ¶83).
Trepagnier (Trepagnier et al., US 20200012796 A1) teaches using natural language descriptions of both current and previous vulnerabilities, using the similarity between the two descriptions (¶71) to prioritize the vulnerability, further prioritizing it based on how available a developed exploit would be to attackers (¶27) as well as on the CVSS scores of the known vulnerabilities (¶73).
Doyle (DOYLE et al., US 20200210590 A1) teaches performing natural language processing on descriptions of historic vulnerabilities (¶49), calculating the degree to which one or more exploits have already been developed for the software vulnerability (¶7), and developing threat scores that indicate the likelihood a software vulnerability will be targeted for an exploitation attempt and/or attack success (¶65).
Smyth (SMYTH et al., US 20180034842 A1) teaches using historical information and current vulnerability information and processing them for input into predictive models (¶29), where the likelihood that the current vulnerability will be exploited is determined (¶50).
Wei (Wei Zheng et al., 'Automatically Identifying Bug Reports with Tactical Vulnerabilities by Deep Feature Learning', 2021) teaches using a corpus of historical vulnerabilities to create a pre-trained word embedding for training a predictive model, which can take in bug reports to detect reports with tactical vulnerabilities (p.3, Fig. 1, p.5, 'D. Apply Bug Report Detection', "Phase 1, we got a well-trained model, which automatically learns the tactical vulnerability characteristics from the vulnerability description. We can apply the well-trained model to detect bug reports with tactical vulnerabilities").
Mustafizur (Mustafizur R Shahid et al., 'CVSS-BERT: Explainable Natural Language Processing to Determine the Severity of a Computer Security Vulnerability from its Description', 2021) teaches using a BERT model (that processes natural language), trained on natural language texts, to determine the CVSS metrics of a bug report with high accuracy.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR MAHDI HAJIABBASI whose telephone number is (703)756-5511. The examiner can normally be reached M-F 7:30-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Catherine Thiaw, can be reached at (571) 270-1138. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.M.H./
Amir Mahdi Hajiabbasi Examiner, Art Unit 2407
/Catherine Thiaw/ Supervisory Patent Examiner, Art Unit 2407 1/9/2026