Prosecution Insights
Last updated: April 19, 2026
Application No. 18/894,856

AUTOMATIC GENERATION OF VULNERABILITY METRICS USING MACHINE LEARNING

Non-Final OA: §102 · §112 · Double Patenting
Filed: Sep 24, 2024
Examiner: ALMAGHAYREH, KHALID M
Art Unit: 2492
Tech Center: 2400 — Computer Networks
Assignee: Tenable Inc.
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 8m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (above average; 208 granted / 248 resolved; +25.9% vs TC avg)
Interview Lift: +25.2% on resolved cases with an interview
Typical Timeline: 2y 8m avg prosecution; 13 currently pending
Career History: 261 total applications across all art units

Statute-Specific Performance

§101: 6.2% (-33.8% vs TC avg)
§103: 47.5% (+7.5% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)
Tech Center averages are estimates • Based on career data from 248 resolved cases

Office Action

§102 · §112 · Double Patenting
DETAILED ACTION

This communication is responsive to Application No. 18/894,856, filed on September 24, 2024. Claims 1-13 are pending and are directed towards AUTOMATIC GENERATION OF VULNERABILITY METRICS USING MACHINE LEARNING.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The abstract of the disclosure is objected to because it contains acronyms (CVEs, US NVD) that should be defined when used. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

The use of the term “MICROSOFT”, which is a trade name or a mark used in commerce, has been noted in this application. The term should be accompanied by the generic terminology; furthermore, the term should be capitalized wherever it appears or, where appropriate, include a proper symbol indicating use in commerce such as ™, SM, or ® following the term. Although the use of trade names and marks used in commerce (i.e., trademarks, service marks, certification marks, and collective marks) is permissible in patent applications, the proprietary nature of the marks should be respected and every effort made to prevent their use in any manner which might adversely affect their validity as commercial marks.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 12 and 13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 12 recites the limitation “wherein when one or more vendor-provided vulnerability vectors are received, none of the vendor-provided vulnerability vectors is used to generate the one or more target vulnerability vectors”, which is vague and unclear. It is not understood how vendor-provided vulnerability vectors are received or involved in the generation process of the target vulnerability vectors.

Claim 13 recites the undefined acronym “ML”, which renders the claim indefinite.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-13 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 14-24 and 26 of U.S. Patent No. 12,143,412 B2 in view of Sharma et al., US 2020/0057858 A1 (hereinafter “Sharma”), titled “Open source vulnerability prediction with machine learning ensemble”.
Adding the limitation (using a machine learning (ML) model to generate vulnerability metrics) to the instant claims does not make the instant application patentably distinct, and it is obvious over the related claims of the reference patent and over the primary reference Sharma (Vulnerability data 410 and classified vulnerability data 411 are both partitioned into K datasets which are fed to a K-stacking algorithm to generate K sets of training data 412 and testing data 414, respectively. The training data 412 is used to train a set of classifiers 416a-f. The training data 412 and the testing data 414 are fed to the classifiers 416a-f to generate classifier data 418a-f. The K sets of the classifier data 418a-f are combined as probability vectors folded together to form K-fold classifier data 420. The K-fold classifier data 420 and classified vulnerability data 411 are fed to a logistic regressor 422 which outputs a trained logistic regression model 424. An AVIS 426 with an AVIS controller 428, a data scraper 430, and a natural language processor 432 combines the logistic regression model 424 and a set of trained classifiers 422 to form the classifier ensemble 434. Sharma, para [0047]-[0051] and Fig. 4, elements 416a-f). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include using a machine learning (ML) model to generate vulnerability metrics for the obvious reason of enhancing the accuracy and the flexibility of the process. The same rationale applies to the equivalent system claim and other dependent claims. Furthermore, the omission of an element with a corresponding loss of function is an obvious expedient. See In re Karlson, 136 USPQ 184 and Ex parte Rainu, 168 USPQ 375.
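The K-stacking pipeline quoted above from Sharma, paras [0047]-[0051], can be sketched roughly as follows. This is a minimal illustration only, not Sharma's actual implementation: the toy data, the per-feature threshold classifiers, and the gradient-descent logistic regressor are invented stand-ins for elements 410-424.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for "vulnerability data 410" (features) and
# "classified vulnerability data 411" (labels); both are hypothetical.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

K = 5  # number of dataset folds
folds = np.array_split(np.arange(len(X)), K)

def train_classifier(Xtr, ytr, feature):
    """Deliberately simple per-feature 'classifier' (stand-in for 416a-f):
    thresholds one feature at the midpoint of the two class means."""
    t = (Xtr[ytr == 1, feature].mean() + Xtr[ytr == 0, feature].mean()) / 2
    return lambda Xte: (Xte[:, feature] > t).astype(float)

N = X.shape[1]  # one simple classifier per feature

# Out-of-fold predictions: for each fold, train on the remaining folds and
# predict the held-out fold. The stacked N-wide prediction vectors play the
# role of the "K-fold classifier data 420".
stacked = np.zeros((len(X), N))
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
    for n in range(N):
        clf = train_classifier(X[train_idx], y[train_idx], n)
        stacked[test_idx, n] = clf(X[test_idx])

# Meta-model: logistic regression over the stacked predictions
# (stand-in for logistic regressor 422), fit by plain gradient descent.
Z = np.hstack([np.ones((len(X), 1)), stacked])
w = np.zeros(N + 1)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Z @ w))
    w -= 0.1 * Z.T @ (p - y) / len(X)

ensemble_prob = 1.0 / (1.0 + np.exp(-Z @ w))
accuracy = ((ensemble_prob >= 0.5) == y).mean()
print(f"ensemble training accuracy: {accuracy:.2f}")
```

The out-of-fold scheme matters: each base classifier only predicts samples it never saw in training, so the meta-model is fit on honest probabilities rather than memorized ones.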
In particular, the omission of the allowable subject matter from claims 1 and 13, “wherein the vulnerability metrics generator comprises a vulnerability metrics generation model trained on a training dataset to generate vulnerability vectors, the training dataset having been obtained from one or more training vulnerability data corresponding to one or more training vulnerabilities, each training vulnerability data comprising a training vulnerability description and one or more training vulnerability vectors of the corresponding training vulnerability, the training vulnerability description comprising a textual description, and each training vulnerability vector comprising one or more training vulnerability metrics and corresponding one or more metric values,…, and wherein the target vulnerability is not a part of any of the one or more training vulnerabilities.”, is an obvious expedient. The table below compares the instant application with the issued patent.

Instant Application 18/894,856 vs. Related U.S. Patent No. 12,143,412 B2:

Claim 1 (instant): A method to generate vulnerability metrics, the method comprising: receiving a target vulnerability description of a target vulnerability, the target vulnerability description comprising a textual description; and generating, by a vulnerability metrics generation machine learning (ML) model, one or more target vulnerability vectors of the target vulnerability as an inference that is based on providing the target vulnerability description as an input to the vulnerability metrics generation ML model, each target vulnerability vector comprising one or more target vulnerability metrics and corresponding one or more metric values, wherein the target vulnerability is already identified as a vulnerability before generating the one or more target vulnerability vectors.
Claim 2 (instant): The method of claim 1, wherein the vulnerability metrics generation ML model has been trained on a training dataset to generate vulnerability vectors, the training dataset having been obtained from one or more training vulnerability data corresponding to one or more training vulnerabilities, each training vulnerability data comprising a training vulnerability description and one or more training vulnerability vectors of the corresponding training vulnerability, the training vulnerability description comprising a textual description, and each training vulnerability vector comprising one or more training vulnerability metrics and corresponding one or more metric values, and wherein the target vulnerability is not a part of any of the one or more training vulnerabilities previously seen by the vulnerability metrics generation ML model.

Claim 14 (patent): A method to generate vulnerability metrics, the method comprising: receiving a target vulnerability description of a target vulnerability, the target vulnerability description comprising a textual description; and generating, by a vulnerability metrics generator, one or more target vulnerability vectors of the target vulnerability based on the target vulnerability description, each target vulnerability vector comprising one or more target vulnerability metrics and corresponding one or more metric values, wherein the vulnerability metrics generator comprises a vulnerability metrics generation model trained on a training dataset to generate vulnerability vectors, the training dataset having been obtained from one or more training vulnerability data corresponding to one or more training vulnerabilities, each training vulnerability data comprising a training vulnerability description and one or more training vulnerability vectors of the corresponding training vulnerability, the training vulnerability description comprising a textual description, and each training vulnerability vector comprising one or more training vulnerability metrics and corresponding one or more metric values, and wherein the target vulnerability is already identified as a vulnerability before generating the one or more target vulnerability vectors, and wherein the target vulnerability is not a part of any of the one or more training vulnerabilities.

Claim 3 (instant): The method of claim 2, wherein the one or more training vulnerabilities include at least one common vulnerability exposure (CVE), and wherein for the at least one CVE, each vulnerability vector is a common vulnerability scoring system (CVSS) vector.

Claim 15 (patent): The method of claim 14, wherein the one or more training vulnerabilities include at least one common vulnerability exposure (CVE), and wherein for the at least one CVE, each vulnerability vector is a common vulnerability scoring system (CVSS) vector.

Claim 4 (instant): The method of claim 2, wherein for each training vulnerability vector of at least one training vulnerability data, the one or more training vulnerability metrics of that training vulnerability vector comprise any one or more of an attack vector (AV), an attack complexity (AC), a privileges required (PR), a user interaction (UI), a scope (S), a confidentiality (C), an integrity (I), and/or an availability (A), and/or wherein the one or more target vulnerability metrics of the target vulnerability vector include any one or more of the AV, the AC, the PR, the UI, the S, the C, the I, and/or the A.

Claim 16 (patent): The method of claim 1, wherein for each training vulnerability vector of at least one training vulnerability data, the one or more training vulnerability metrics of that training vulnerability vector comprise any one or more of an attack vector (AV), an attack complexity (AC), a privileges required (PR), a user interaction (UI), a scope (S), a confidentiality (C), an integrity (I), and/or an availability (A), and/or wherein the one or more target vulnerability metrics of the target vulnerability vector include any one or more of the AV, the AC, the PR, the UI, the S, the C, the I, and/or the A.

Claim 5 (instant): The method of claim 1, wherein generating the one or more target vulnerability vectors of the target vulnerability comprises: extracting one or more target features from the target vulnerability description; and generating the one or more target vulnerability vectors based on the one or more extracted target features using the vulnerability metrics generation model.

Claim 17 (patent): The method of claim 14, wherein generating the one or more target vulnerability vectors of the target vulnerability comprises: extracting one or more target features from the target vulnerability description; and generating the one or more target vulnerability vectors based on the one or more extracted target features using the vulnerability metrics generation model.

Claim 6 (instant): The method of claim 5, wherein natural language processing (NLP) is used to extract the target features from the target vulnerability description.

Claim 18 (patent): The method of claim 17, wherein natural language processing (NLP) is used to extract the target features from the target vulnerability description.

Claim 7 (instant): The method of claim 5, wherein the extracted target features are tokenized, counted and normalized.

Claim 19 (patent): The method of claim 17, wherein the extracted target features are tokenized, counted and normalized.

Claim 8 (instant): The method of claim 5, wherein generating the one or more target vulnerability vectors comprises: determining each target vulnerability metric of each target vulnerability vector and its corresponding metric value in isolation; and combining the separately determined target vulnerability metrics and their corresponding metric values for each target vulnerability vector.

Claim 20 (patent): The method of claim 17, wherein generating the one or more target vulnerability vectors comprises: determining each target vulnerability metric of each target vulnerability vector and its corresponding metric value in isolation; and combining the separately determined target vulnerability metrics and their corresponding metric values for each target vulnerability vector.

Claim 9 (instant): The method of claim 1, further comprising: generating one or more target vulnerability scores of the target vulnerability based on the one or more target vulnerability vectors.

Claim 21 (patent): The method of claim 14, further comprising: generating one or more target vulnerability scores of the target vulnerability based on the one or more target vulnerability vectors.

Claim 10 (instant): The method of claim 1, wherein when the one or more target vulnerability vectors include at least first and second target vulnerability vectors, the first target vulnerability vector is a vulnerability vector of a first vulnerability scoring version and the second target vulnerability vector is a vulnerability vector of a second vulnerability scoring version different from the first vulnerability scoring version.

Claim 22 (patent): The method of claim 14, wherein when the one or more target vulnerability vectors include at least first and second target vulnerability vectors, the first target vulnerability vector is a vulnerability vector of a first vulnerability scoring version and the second target vulnerability vector is a vulnerability vector of a second vulnerability scoring version different from the first vulnerability scoring version.

Claim 11 (instant): The method of claim 1, further comprising: determining one or more confidence levels of the one or more target vulnerability vectors; and for each target vulnerability vector, if the confidence level of that target vulnerability vector is below a threshold confidence level, discarding that target vulnerability vector.

Claim 23 (patent): The method of claim 14, further comprising: determining one or more confidence levels of the one or more target vulnerability vectors; and for each target vulnerability vector, if the confidence level of that target vulnerability vector is below a threshold confidence level, discarding that target vulnerability vector.

Claim 12 (instant): The method of claim 1, wherein when one or more vendor-provided vulnerability vectors are received, none of the vendor-provided vulnerability vectors is used to generate the one or more target vulnerability vectors.

Claim 24 (patent): The method of claim 14, wherein when one or more vendor-provided vulnerability vectors are received, the vendor-provided vulnerability vectors are ignored when generating the one or more target vulnerability vectors.

Claim 13 (instant): A vulnerability metrics generator, comprising: a memory; and at least one processor coupled to the memory, wherein the memory and the at least one processor are configured to: receive a target vulnerability description of a target vulnerability, the target vulnerability description comprising a textual description; and generate one or more target vulnerability vectors of the target vulnerability as an inference that is based on providing the target vulnerability description as an input to the vulnerability metrics generation ML model, each target vulnerability vector comprising one or more target vulnerability metrics and corresponding one or more metric values, wherein the target vulnerability is already identified as a vulnerability before generating the one or more target vulnerability vectors.

Claim 26 (patent): A vulnerability metrics generator, comprising: a memory; and at least one processor coupled to the memory, wherein the memory and the at least one processor are configured to: receive a target vulnerability description of a target vulnerability, the target vulnerability description comprising a textual description; and generate one or more target vulnerability vectors of the target vulnerability based on the target vulnerability description, each target vulnerability vector comprising one or more target vulnerability metrics and corresponding one or more metric values, wherein the vulnerability metrics generator comprises a vulnerability metrics generation model trained on a training dataset to generate vulnerability vectors, the training dataset having been obtained from one or more training vulnerability data corresponding to one or more training vulnerabilities, each training vulnerability data comprising a training vulnerability description and one or more training vulnerability vectors of the corresponding training vulnerability, the training vulnerability description comprising a textual description, and each training vulnerability vector comprising one or more training vulnerability metrics and corresponding one or more metric values, and wherein the target vulnerability is already identified as a vulnerability before generating the one or more target vulnerability vectors, and wherein the target vulnerability is not a part of any of the one or more training vulnerabilities.

Allowable Subject Matter

Claim 2 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claims 3 and 4 are objected to by dependency.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 and 5-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sharma et al., US 2020/0057858 A1 (hereinafter “Sharma”).

As per claims 1 and 13, Sharma teaches a method to generate vulnerability metrics, the method comprising receiving a target vulnerability description of a target vulnerability, the target vulnerability description comprising a textual description (The NLP 112 takes the CBR data 110 as input and performs language processing operations to generate the vulnerability vectors 114. The vulnerability vectors 114 comprise frequency counts of tokens (words or phrases) from the CBR data 110 which indicate the presence of a vulnerability. The NLP 112 generates the vulnerability vectors 114 by recording words or phrases in the CBR data 110 indicative of vulnerabilities. Sharma, para [0018]); and generating, by a vulnerability metrics generation machine learning (ML) model, one or more target vulnerability vectors of the target vulnerability as an inference that is based on providing the target vulnerability description as an input to the vulnerability metrics generation ML model (The generator 116 uses the vulnerability vectors 114 and the vulnerability data 102 to generate a classifier ensemble based on training a number N of classifiers with a K-splitting algorithm and using logistic regression. Using N classifiers improves performance for imbalanced datasets. Sharma, para [0019]-[0020]) (Vulnerability data 410 and classified vulnerability data 411 are both partitioned into K datasets which are fed to a K-stacking algorithm to generate K sets of training data 412 and testing data 414, respectively.
The training data 412 is used to train a set of classifiers 416a-f. The training data 412 and the testing data 414 are fed to the classifiers 416a-f to generate classifier data 418a-f. The K sets of the classifier data 418a-f are combined as probability vectors folded together to form K-fold classifier data 420. The K-fold classifier data 420 and classified vulnerability data 411 are fed to a logistic regressor 422 which outputs a trained logistic regression model 424. An AVIS 426 with an AVIS controller 428, a data scraper 430, and a natural language processor 432 combines the logistic regression model 424 and a set of trained classifiers 422 to form the classifier ensemble. Sharma, para [0047]), each target vulnerability vector comprising one or more target vulnerability metrics and corresponding one or more metric values (The generator 116 trains and tests the N classifiers to accurately identify vulnerabilities by classifying the vulnerability vectors 114 and validating identifications with the vulnerability data 102. This training and testing generates for each vulnerability vector a prediction from each trained classifier. Assuming S of the vulnerability vectors 114 were in the testing data, the generator 116 generates S sets of N predictions. The generator 116 then uses the predictions made by the N trained classifiers with the vulnerability data 102 to train a logistic regression model. The generator 116 forms the ensemble 118 by combining the trained N classifiers with the trained logistic regression model. To combine, the generator 116 can create program code that linearly combines output probabilities for a software component from the trained N classifiers to the logistic regression model and program code to fit the linearly combined probabilities to the probability distribution of the logistic regression model.
Sharma, para [0020]) (the generator stores the predictions of the Nth trained classifier for samples in the Dth dataset fold as the Nth component of the Sth probability vector. This iterative training and testing of each classifier for each dataset fold yields S×M predictions which are stored as probability vectors. The stored probabilities can be associated with the corresponding classification labels for the samples within the probability vectors or a separate data structure. Embodiments can associate the classification labels with the probabilities differently. Sharma, para [0040]) wherein the target vulnerability is already identified as a vulnerability before generating the one or more target vulnerability vectors (The vulnerability database 204 contains the open source library data 206a-d for four different open source libraries. The open source library data 206a comprises a table of version histories for the open source library with associated vulnerability data. The vulnerability data may comprise a number of vulnerabilities for each version of the open source library and probabilities associated with each identified vulnerability. Sharma, para [0027]) (The system uses a K-folding cross validation algorithm to partition or split a sample dataset and then train and test a set of N classifiers with the split dataset. At each test iteration, trained models of the set of classifiers generate probabilities/predictions that a sample has a flaw, resulting in a set of N probabilities or predictions for each sample in the test data. Sharma, para [0013]).
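The token-frequency "vulnerability vectors" described in the quoted para [0018] amount to tokenizing text, counting indicative tokens, and normalizing the counts. A rough sketch, assuming a hypothetical keyword list and an invented bug-report string (neither comes from Sharma):

```python
import math
import re

# Hypothetical list of vulnerability-indicative tokens; Sharma's NLP 112
# records words or phrases indicative of vulnerabilities, but these
# particular words are invented for illustration.
VULN_TOKENS = {"overflow", "injection", "bypass", "crash", "leak"}

def vulnerability_vector(report: str) -> dict:
    # Tokenize: lowercase word tokens from the bug-report text.
    tokens = re.findall(r"[a-z]+", report.lower())
    # Count: frequency of each vulnerability-indicative token.
    counts = {t: tokens.count(t) for t in VULN_TOKENS}
    # Normalize: scale to unit length so longer reports do not dominate.
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {t: c / norm for t, c in counts.items()}

vec = vulnerability_vector(
    "Heap overflow in the parser allows an auth bypass; overflow "
    "reported twice, possible memory leak."
)
print(vec)
```

The output vector plays the role of a single row of "vulnerability vectors 114" that the ensemble would then classify.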
As per claim 5, Sharma teaches the method of claim 1, wherein generating the one or more target vulnerability vectors of the target vulnerability comprises: extracting one or more target features from the target vulnerability description (Commit messages and bug reports, titles, descriptions, comments, numbers of comments, attachment numbers, labels, dates of creation, and last edited dates may be extracted by the data scraper 108 as the CBR data. Sharma, para [0017]); and generating the one or more target vulnerability vectors based on the one or more extracted target features using the vulnerability metrics generation model (Among the selected features, titles, descriptions, comments, and labels are text features that contain semantic information which can be processed by the NLP 112. The numbers of comments and attachments are numeric features which may reflect the attention and resolution a bug report received. Similarly, the difference between the created and last edited date reflects the resolution time, allowing vulnerabilities to be correlated with corresponding versions of open source libraries. Sharma, para [0017]) (Vulnerability data 410 and classified vulnerability data 411 are both partitioned into K datasets which are fed to a K-stacking algorithm to generate K sets of training data 412 and testing data 414, respectively. The training data 412 is used to train a set of classifiers 416a-f. The training data 412 and the testing data 414 are fed to the classifiers 416a-f to generate classifier data 418a-f. The K sets of the classifier data 418a-f are combined as probability vectors folded together to form K-fold classifier data 420. The K-fold classifier data 420 and classified vulnerability data 411 are fed to a logistic regressor 422 which outputs a trained logistic regression model 424. An AVIS 426 with an AVIS controller 428, a data scraper 430, and a natural language processor 432 combines the logistic regression model 424 and a set of trained classifiers 422 to form the classifier ensemble.
Sharma, para [0047]).

As per claim 6, Sharma teaches the method of claim 5, wherein natural language processing (NLP) is used to extract the target features from the target vulnerability description (The NLP 112 takes the CBR data 110 as input and performs language processing operations to generate the vulnerability vectors 114. The vulnerability vectors 114 comprise frequency counts of tokens (words or phrases) from the CBR data 110 which indicate the presence of a vulnerability. The NLP 112 generates the vulnerability vectors 114 by recording words or phrases in the CBR data 110 indicative of vulnerabilities. The NLP 112 may have been manually trained by embedding words known to be indicative of vulnerabilities into a vector space to generate relationships between different words. Words or phrases in the CBR data 110 are then evaluated based on their proximity or association with the words embedded in the vector space. Sharma, para [0018]).

As per claim 7, Sharma teaches the method of claim 5, wherein the extracted target features are tokenized, counted and normalized (The NLP 112 takes the CBR data 110 as input and performs language processing operations to generate the vulnerability vectors 114. The vulnerability vectors 114 comprise frequency counts of tokens (words or phrases) from the CBR data 110 which indicate the presence of a vulnerability. The NLP 112 generates the vulnerability vectors 114 by recording words or phrases in the CBR data 110 indicative of vulnerabilities. The NLP 112 may have been manually trained by embedding words known to be indicative of vulnerabilities into a vector space to generate relationships between different words. Words or phrases in the CBR data 110 are then evaluated based on their proximity or association with the words embedded in the vector space.
Sharma, para [0018]) (Based on the embedding of the CBR data 512 in the vector space 522, the NLP 514 generates the vulnerability data 516 by summing the vectors of each word or phrase in the CBR data 512 to generate the vulnerability vectors 518, which comprise the vectors strong_vuln_count, medium_vuln_count, and weak_vuln_count. These vectors are generated by summing the associations of each word in the CBR data 512 with each word in the vulnerability detection vectors 520A, 520B, and 520C, respectively. Sharma, para [0059]).

As per claim 8, Sharma teaches the method of claim 5, wherein generating the one or more target vulnerability vectors comprises: determining each target vulnerability metric of each target vulnerability vector and its corresponding metric value in isolation (Vulnerability data 410 and classified vulnerability data 411 are both partitioned into K datasets which are fed to a K-stacking algorithm to generate K sets of training data 412 and testing data 414, respectively. The training data 412 is used to train a set of classifiers 416a-f. The training data 412 and the testing data 414 are fed to the classifiers 416a-f to generate classifier data 418a-f. The K sets of the classifier data 418a-f are combined as probability vectors folded together to form K-fold classifier data 420. The K-fold classifier data 420 and classified vulnerability data 411 are fed to a logistic regressor 422 which outputs a trained logistic regression model 424. An AVIS 426 with an AVIS controller 428, a data scraper 430, and a natural language processor 432 combines the logistic regression model 424 and a set of trained classifiers 422 to form the classifier ensemble 434.
Sharma, para [0047]); and combining the separately determined target vulnerability metrics and their corresponding metric values for each target vulnerability vector (To combine, the generator 116 can create program code that linearly combines output probabilities for a software component from the trained N classifiers to the logistic regression model and program code to fit the linearly combined probabilities to the probability distribution of the logistic regression model. Sharma, para [0020]) (the generator stores as a classifier ensemble a combination of the M trained classifiers and the trained logistic regression model that satisfies the accuracy threshold, which may be based on specified precision and recall rates. Sharma, para [0045]).

As per claim 9, Sharma teaches the method of claim 1, further comprising: generating one or more target vulnerability scores of the target vulnerability based on the one or more target vulnerability vectors (To train the logistic regression model, the regressor linearly combines the probabilities of each probability vector and accepts as input features the linearly combined probabilities and the corresponding classification labels. The regressor can use stochastic gradient descent to estimate the values of the model parameters or coefficients β.sub.0 and β.sub.1. Training continues until a specified accuracy threshold is satisfied. Sharma, para [0043]-[0044]).

As per claim 10, Sharma teaches the method of claim 1, wherein when the one or more target vulnerability vectors include at least first and second target vulnerability vectors, the first target vulnerability vector is a vulnerability vector of a first vulnerability scoring version and the second target vulnerability vector is a vulnerability vector of a second vulnerability scoring version different from the first vulnerability scoring version (The vulnerability database 204 contains the open source library data 206a-d for four different open source libraries.
The open-source library data 206a comprises a table of version histories for the open source library with associated vulnerability data. The vulnerability data may comprise a number of vulnerabilities for each version of the open-source library and probabilities associated with each identified vulnerability. Sharma, para [0027]) (the generator 116 can create program code that linearly combines output probabilities for a software component from the trained N classifiers to the logistic regression model and program code to fit the linearly combined probabilities to the probability distribution of the logistic regression model. Sharma, para [0020]).

As per claim 11, Sharma teaches the method of claim 1, further comprising: determining one or more confidence levels of the one or more target vulnerability vectors; and for each target vulnerability vector, if the confidence level of that target vulnerability vector is below a threshold confidence level, discarding that target vulnerability vector (The open source components (e.g., libraries or projects) can be stored in association with the vulnerability prediction or probability value. When presented for evaluation, the database 120 can present the components identified as vulnerabilities with their associated confidence values for sorting or filtering to allow the experts to evaluate more efficiently. Sharma, para [0021]) (A predetermined probability threshold can be used to turn the logistic regression model into a binary classifier. For example, if a probability threshold of p=0.5 is used, vulnerability vectors which give a probability p(x)≥0.5 will return a 1 (indicating a vulnerability) and vulnerability vectors which give a probability p(x)<0.5 will return a 0 (indicating no vulnerability). Sharma, para [0044]).
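For orientation, the claims 9 and 11 mappings both turn on Sharma's probability-threshold step (paras [0043]-[0044]): a logistic regression model scores a linearly combined feature and a fixed threshold converts the probability into a binary vulnerability label. A minimal sketch of that thresholded classifier, assuming illustrative values throughout (the averaging step, beta0, and beta1 are placeholders, not coefficients from Sharma):

```python
import math

def logistic(z):
    """Standard logistic function: sigma(z) = 1 / (1 + e^-z)."""
    return 1.0 / (1.0 + math.exp(-z))

def classify(prob_vector, beta0, beta1, threshold=0.5):
    """Binary classifier per Sharma para [0044]: return 1 (vulnerability)
    when p(x) >= threshold, else 0 (no vulnerability).

    prob_vector holds the base classifiers' output probabilities for one
    software component; they are linearly combined into a single feature x
    (here a simple mean, as an illustrative stand-in for para [0043]'s
    linear combination), then scored as p(x) = sigma(beta0 + beta1 * x).
    """
    x = sum(prob_vector) / len(prob_vector)
    p = logistic(beta0 + beta1 * x)
    return 1 if p >= threshold else 0
```

With beta0=-2.0 and beta1=5.0 (arbitrary demo coefficients), a vector of high ensemble probabilities such as [0.9, 0.8, 0.95] scores above the 0.5 threshold and returns 1, while [0.1, 0.05, 0.0] returns 0, matching the binary behavior the quoted passage describes.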
As per claim 12, Sharma teaches the method of claim 1, wherein when one or more vendor-provided vulnerability vectors are received, none of the vendor-provided vulnerability vectors is used to generate the one or more target vulnerability vectors (the generator stores the predictions of the N.sup.th trained classifier for samples in the D.sup.th dataset fold as the N.sup.th component of the S.sup.th probability vector. This iterative training and testing of each classifier for each dataset fold yields S×M predictions which are stored as probability vectors. The stored probabilities can be associated with the corresponding classification labels for the samples within the probability vectors or a separate data structure. Embodiments can associate the classification labels with the probabilities differently. For example, these can be associated by sample identifiers or the structures can be ordered consistently to allow for a same index to be used across structures for a same data sample. The probability vectors contain the predictions of the N classifiers for the S samples and are used for training a logistic regression model. Sharma, para [0040]) (A predetermined probability threshold can be used to turn the logistic regression model into a binary classifier. For example, if a probability threshold of p=0.5 is used, vulnerability vectors which give a probability p(x)≥0.5 will return a 1 (indicating a vulnerability) and vulnerability vectors which give a probability p(x)<0.5 will return a 0 (indicating no vulnerability). Sharma, para [0044]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

A. Sloane et al., US 2022/0004643 A1, directed to automated mapping for identifying known vulnerabilities in software products.
B. Zenz et al., US 2019/0379677 A1, directed to an intrusion detection system.
C. Bellis et al., US 10,114,954 B1, directed to exploit prediction based on machine learning.
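The out-of-fold bookkeeping quoted in the claim 12 mapping (Sharma paras [0040], [0047]) is the standard K-fold stacking recipe: each base classifier is trained on all folds but one and its predicted probabilities for the held-out fold are recorded, so every sample ends up with one probability vector (one entry per classifier) for training the logistic regression meta-model. A sketch of that recipe, under stated assumptions: MeanClassifier is a toy stand-in for Sharma's trained classifiers 416a-f, samples are single scalar features, and all names here are illustrative, not from the reference.

```python
import random

class MeanClassifier:
    """Toy base classifier (illustrative stand-in, not from Sharma)."""
    def fit(self, xs, ys):
        # Estimate a positive-class "center" as the mean of positive samples.
        pos = [x for x, y in zip(xs, ys) if y == 1]
        self.center = sum(pos) / len(pos) if pos else 0.0
        return self
    def predict_proba(self, x):
        # Closer to the positive-class center -> higher probability, in (0, 1].
        return 1.0 / (1.0 + abs(x - self.center))

def kfold_stacking_probs(samples, labels, classifiers, k=5, seed=0):
    """For each of K dataset folds, train every base classifier on the other
    folds and record its predicted probability for each held-out sample.
    Returns one probability vector per sample (one entry per classifier),
    the input later used to train the logistic regression meta-model."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]  # K disjoint dataset folds
    prob_vectors = [[0.0] * len(classifiers) for _ in samples]
    for d, held_out in enumerate(folds):
        # Train on every fold except the D-th, predict on the D-th.
        train_idx = [i for f, fold in enumerate(folds) if f != d for i in fold]
        for n, clf in enumerate(classifiers):
            model = clf.fit([samples[i] for i in train_idx],
                            [labels[i] for i in train_idx])
            for i in held_out:
                prob_vectors[i][n] = model.predict_proba(samples[i])
    return prob_vectors
```

Because every prediction comes from a model that never saw the sample during training, the resulting probability vectors give the meta-model an unbiased view of each base classifier's behavior, which is the point of the S×M out-of-fold predictions the quoted passage describes.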
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHALID M ALMAGHAYREH whose telephone number is (571)272-0179. The examiner can normally be reached Monday - Thursday 8AM-5PM EST & Friday variable.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, RUPAL DHARIA, can be reached at (571)272-3880. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Respectfully submitted,

/KHALID M ALMAGHAYREH/
Primary Examiner, Art Unit 2492

Prosecution Timeline

Sep 24, 2024
Application Filed
Dec 20, 2025
Non-Final Rejection — §102, §112, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596848
METHOD OF VERIFYING INTEGRITY OF DATA FROM A DEVICE UNDER TEST
2y 5m to grant Granted Apr 07, 2026
Patent 12587840
AUTHENTICATION MANAGEMENT IN A WIRELESS NETWORK ENVIRONMENT
2y 5m to grant Granted Mar 24, 2026
Patent 12587386
CHECKOUT WITH MAC
2y 5m to grant Granted Mar 24, 2026
Patent 12579328
SYSTEM ON A CHIP AND METHOD GUARANTEEING THE FRESHNESS OF THE DATA STORED IN AN EXTERNAL MEMORY
2y 5m to grant Granted Mar 17, 2026
Patent 12572699
Using Memory Protection Data
2y 5m to grant Granted Mar 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+25.2%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 248 resolved cases by this examiner. Grant probability derived from career allow rate.
