DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) filed 10/25/2023 and 1/17/2024 fail to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. They have been placed in the application file, but the information referred to therein has not been considered.
Specifically, copies were not received for the documents listed in the Non-Patent Literature sections.
Response to Arguments
Applicant’s arguments, see pages 8-9, filed 11/12/2025, with respect to objections to claims 2-4 and 13-15 have been fully considered and are persuasive. The objections to these claims have been withdrawn.
Applicant’s arguments, see page 9, filed 11/12/2025, with respect to the objections to the specification have been fully considered and are persuasive. These objections have been withdrawn.
Applicant's arguments, see pages 9-11, filed 11/12/2025, with respect to the rejection of claims 1-20 under 35 USC 101 have been fully considered but they are not persuasive.
Regarding the argument:
“First, under Step 2A of the Alice framework, the amended claims are directed to a specific improvement in computer-related technology, not merely an abstract idea. … The claimed method provides a specific solution to this problem by defining a concrete machine-implemented process that transforms raw, unstructured data into a structured, actionable threat vector. This is not the abstract idea of "analyzing data," but a specific method for improving the function of an extended detection and response (XDR) or similar security system, enabling it to automatically identify and respond to threats in a way it previously could not.”
Examiner respectfully disagrees. Transforming data from a “raw” form into a “structured” form is not a specific improvement to computer technology. Likewise, the claimed methods of classifying and normalizing various already-existing forms of vulnerability information do not represent an improvement to either classifying or normalizing data. See Enfish, LLC v. Microsoft Corp., in which the specification provided details demonstrating a specific improvement to the storage and retrieval of data in combination with a specific data structure. In contrast, the instant application combines known methods of data classification and data vectorization.
Regarding the argument:
“Second, under Step 2B of the Alice framework, the amended claims recite an inventive concept that amounts to "significantly more" than the alleged abstract idea. … Specifically, claims 1 and 12 now … (a) predicts a security category and a data or schema category, and then (b) determines a threat vector based on those predicted categories.”
Examiner respectfully disagrees. Predicting a security category is also an example of a data-processing step which is not “significantly more” than the abstract idea.
“This two-step process is a specific implementation that is not well-understood, routine, or conventional. As explained in the specification at paragraphs [0058]-[0061], a general-purpose or "generic" ML model would "perform suboptimally" and "generate unhelpful results" when tasked with this complex analysis. The claimed invention overcomes this by imposing a specific, constrained structure on the ML method-requiring it to first generate intermediary classifications before determining the final vector. This is a technical improvement over conventional approaches and directly contradicts the Examiner's assertion that a "generic ML model" is being used. This specific architecture, detailed in paragraph [0022] and Figure 1, is a concrete improvement to computer functionality, enabling the system to produce a more accurate and meaningful threat vector than would otherwise be possible.”
Examiner respectfully disagrees. Providing specific training data to tune a generic or “off-the-shelf” ML model to provide more specific results is a well-understood implementation, and may still be performed by an underlying “generic” model. Where the specification teaches that, in particular, a “general large language model (LLM)” will perform sub-optimally, one of ordinary skill in the art would understand that providing further training to “constrain” an existing model is an action which would be considered routine and conventional.
Regarding the argument:
“Furthermore, the Examiner's characterization of the process as a "Mental Process" is inapt with respect to the amended claims. The claimed two-step predictive process which involves processing vast quantities of disparate threat intelligence data to first generate probabilistic intermediary classifications and then synthesize those into a final multi-faceted threat vector-is not a process that can be practically performed in the human mind. …”
Examiner respectfully disagrees. The quantity of data being processed is not positively claimed, nor does the specification indicate that the quantity of data is a consideration in the use of the claimed invention. Assuming, arguendo, that it were, the fact that a claimed process may be faster than a human does not preclude the process from being considered a mental process. Additionally, the “probabilistic intermediary classifications” and “multi-faceted threat vectors” are not claimed or recited in the specification as data types which could not be processed by the human mind and/or with pen and paper. On the contrary, paragraph [0054] recites, “The threat vector prediction system 100 includes a UI 118 that can display the threat vector 116 to a user and can receive User feedback 122 from the user.” It is a non sequitur to argue that a vector which is meant to be human-readable is at the same time too complex to be processed by the human mind.
Applicant’s arguments, see pages 11 and 12, filed 11/12/2025, with respect to the rejection of claims 1-20 under 35 USC 112(a) have been fully considered and are persuasive. This rejection has been withdrawn.
Applicant’s arguments, see pages 12 and 13, filed 11/12/2025, with respect to the rejection of claims 1 and 12 under 35 USC 112(a) have been fully considered and are persuasive. This rejection has been withdrawn.
Examiner notes that applicant’s arguments contained the following: “In the accompanying Amendment, Applicant has cancelled claims 2-11 and 13-20, including the rejected claims 7 and 18.” There is no indication of cancelled claims as of the writing of this office action. If the indicated claims were intended to be cancelled, applicant will need to take steps to ensure they are given the proper markup so that they are cancelled prior to any future examination.
Applicant’s arguments, see pages 13 and 14, filed 11/12/2025, with respect to the rejection of claims 7 and 18 under 35 USC 102(a)(2) have been fully considered but they are not persuasive.
Examiner notes that due to the change in scope provided in the amendments to the claims (i.e., claiming an intermediary step between obtaining threat intelligence and determining the threat vector), the previous prior art rejection under 35 USC 102 is withdrawn. However, in the interest of compact prosecution, responses to applicant’s arguments which may apply to the new grounds of rejection under LABRECHE (Doc ID US 20230038196 A1) and KIM (Doc ID US 20190156042 A1) are included here.
Regarding the argument:
“The Examiner's reliance on CAM is respectfully misplaced. CAM discloses using CVSS attributes as static, pre-defined input scores to initialize its Hidden Markov Model (HMM). As stated in CAM at paragraph [0058], "initial scores are assigned to each vulnerability... using the Common Vulnerability Scoring System (CVSS)." CAM's system consumes these scores; it does not predict them as an intermediary classification step, as now claimed.
“The amended claims, in contrast, require a preliminary prediction step. CAM discloses no such step. CAM's HMM directly computes a final output (a probable attack path) from its initial inputs. It does not first predict categories and then use those predicted categories to determine the final vector.”
Examiner respectfully disagrees. Where the claims use the term “predict,” it is unclear what functional difference this “prediction” involves over an “assignment.” The invention as claimed uses obtained vulnerability information to “predict” two category types for the vulnerability. A “vector” is then created based on those categories. The claims do not recite any characteristic of the “predicted” categories which distinguishes them from an “assigned” category. When assessed at the computational level, there is indeed no distinguishable difference between a value described as “predicted” and one described as “assigned.” Each is the result of a deterministic process, and the claims lack any details about the “prediction” which would indicate a fundamental difference from the “vulnerability scores” or “attack states” of CAM. Put another way, the current draft of the claims is sufficiently broad that prior art directed to assigning values to a vulnerability and producing data which reflects exploit and impact information is functionally identical to the subject matter positively claimed.
Regarding applicant’s arguments, see pages 13 and 14, filed 11/12/2025, with respect to lined-through references in the submitted IDSs, examiner notes that the lined-through references are non-patent literature references for which no copy was provided by the applicant. An incorrect IDS acknowledgement was included in the previous office action, which indicated that the IDSs in question were compliant. A corrected acknowledgement has been included in this office action.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Claims 1 and 12 recite:
“Obtaining threat-intelligence information regarding a software vulnerability”.
Using a generic machine learning (ML) model to “determine a threat vector based on a security category and a data or schema category of the software vulnerability”.
Claims 2 and 13 recite:
Using a generic machine learning (ML) model to output an “intermediary result corresponding to the security category of the software vulnerability”.
Using a generic machine learning (ML) model to output an “intermediary result corresponding to the data or schema category of the software vulnerability”.
Using a generic machine learning (ML) model for “predicting a threat vector for the software vulnerability based on the first intermediary result and the second intermediary result.”
Claims 3 and 14 recite:
Using a generic machine learning (ML) model that “predicts, based on a security taxonomy or ontology, a type of security threat of the software vulnerability.”
Claims 4 and 15 recite:
Using a generic machine learning (ML) model that “predicts, based on a data taxonomy or ontology, a type of data set or schema for the software vulnerability.”
Claims 5 and 16 recite:
Using a generic machine learning (ML) model that “classifies the software vulnerability according to the security category of the software vulnerability and according to the data or schema category of the software vulnerability.”
Claims 7 and 18 recite:
Selecting a first indicia from among “a STRIDE threat category, a common vulnerability scoring system (CVSS) vector, a vulnerability type, the exploitation mechanism, an exploitation entry point, and MITRE ATT&CK framework tactics and techniques.”
These are processes that, under their broadest reasonable interpretation, may be practically performed in the human mind. Accordingly, the claims recite an abstract idea.
This judicial exception is not integrated into a practical application because the other aspects of the claims’ limitations amount to no more than mere instructions to apply the exception using a generic ML model and generic computer components.
Regarding limitations directed to the use of the ML model, patents may be directed to abstract ideas where they disclose the use of an already available technology, with its already available basic functions, to use as a tool in executing the claimed process.
Regarding limitations directed to obtaining threat-intelligence information and selecting the first indicia: if a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas. Analyzing cybersecurity information is a process which has long been performed manually by human programmers. That the claimed invention may perform this task with greater speed and/or efficiency than a human does not by itself render the claims patent eligible under 35 USC 101.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because there is nothing in the claims, whether considered individually or in their ordered combination, that would transform them into something “significantly more” than the abstract idea of analyzing cybersecurity threat data. Further, the claims do not contain steps through which the ML model achieves an improvement, nor any improvement over generic ML models themselves. The claims are not patent eligible. See Recentive Analytics, Inc. v. Fox Corp., No. 23-2437 (Fed. Cir. 2025) (“patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101”).
Regarding claims 6, 8-11, 17, 19, and 20:
They are dependent on one or more rejected claims, and thus inherit those rejections. These rejections could be overcome by overcoming the rejection(s) of the claims upon which these claims depend, or by amending the claims such that they are no longer dependent on any rejected claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5, 12-14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over LABRECHE (Doc ID US 20230038196 A1), and further in view of KIM (Doc ID US 20190156042 A1).
Regarding claim 1:
LABRECHE teaches:
A method for predicting an exploitation mechanism of a software vulnerability and/or for predicting an impact thereof, the method comprising: obtaining threat-intelligence information regarding a software vulnerability ([0064] "... the attack class predictor 220 ... configured to allow users to send data, in particular new vulnerabilities, to the attack type identifier module 210."); and
applying the threat-intelligence information to a machine learning (ML) method to predict a security category and a data or schema category for the software vulnerability ([0044] "... the attack class predictor 220 includes instructions to receive a new vulnerability ... and generates a probability value indicating a probability of each of a series of attack types occurring or being brought against the new vulnerability as an output therefrom." & [0065]); and
KIM teaches the following limitations not taught by LABRECHE:
determining a threat vector based on the predicted security category and the predicted data or schema category of the software vulnerability ([0077] "Next, in operation S650, the vulnerability information sharing apparatus 100 may convert the vulnerability information into additional vulnerability information. ... Here, the vulnerability information sharing apparatus 100 may generate an information sharing object that includes the predetermined information sharing items and the additional items for the vulnerability information ..."),
wherein the threat vector comprises first indicia that represents an exploitation mechanism of the software vulnerability, and the threat vector comprises second indicia of an impact of the exploitation mechanism of the software vulnerability ([0085] "... the vulnerability information providing system 10 may ... add the ... exploitability and impact items of the information sharing object. Here, the ... exploitability and impact items may be included in the additional items defined additionally in operation S610.").
Determining categorical aspects of a vulnerability using a machine learning (ML) model is a known technique in the art, as demonstrated by LABRECHE. Further, representing exploitability and impact of a vulnerability with data derived from categorical vulnerability data is a known technique in the art, as demonstrated by KIM. It would have been obvious to a person having ordinary skill in the art (PHOSITA) before the effective filing date of the claimed invention to modify the vulnerability classification of LABRECHE with the data object of KIM with the motivation to create a consolidated data object describing a vulnerability in a standardized way.
Regarding claim 2:
The combination of LABRECHE and KIM teaches:
The method of claim 1, wherein applying the threat-intelligence information to the ML method further comprises the ML method including a first portion constrained to predict a first intermediary result corresponding to the security category of the software vulnerability (LABRECHE [0044] "... the attack class predictor 220 includes instructions to receive a new vulnerability ... and generates a probability value indicating a probability of each of a series of attack types occurring or being brought against the new vulnerability as an output therefrom.") and
including a second portion constrained to predict a second intermediary result corresponding to the data or schema category of the software vulnerability (LABRECHE [0044] "... the attack class predictor 220 includes instructions to receive a new vulnerability ... and generates a probability value indicating a probability of each of a series of attack types occurring or being brought against the new vulnerability as an output therefrom."), and
the ML method predicting a threat vector for the software vulnerability based on the first intermediary result and the second intermediary result (KIM [0077] "Next, in operation S650, the vulnerability information sharing apparatus 100 may convert the vulnerability information into additional vulnerability information. ... Here, the vulnerability information sharing apparatus 100 may generate an information sharing object that includes the predetermined information sharing items and the additional items for the vulnerability information ...").
Representing exploitability and impact of a vulnerability with data derived from categorical vulnerability data is a known technique in the art, as demonstrated by KIM. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the vulnerability classification of LABRECHE with the data object of KIM with the motivation to create a consolidated data object describing a vulnerability in a standardized way.
Regarding claim 3:
The combination of LABRECHE and KIM teaches:
The method of claim 2, wherein applying the threat-intelligence information to the ML method further comprises that the first portion of the ML method comprises a first transformer neural network that predicts, based on a security taxonomy or ontology, a type of security threat of the software vulnerability (LABRECHE [0016] "… System supervised machine learning modules or engines, … where each model is trained using features associated with known vulnerabilities to predict one specific type of attack, are utilized.").
Regarding claim 5:
The combination of LABRECHE and KIM teaches:
The method of claim 1, wherein applying the threat-intelligence information to the ML method further comprises applying the threat-intelligence information to a classifier that classifies the software vulnerability according to the security category of the software vulnerability and according to the data or schema category of the software vulnerability (LABRECHE [0044] "... the attack class predictor 220 includes instructions to receive a new vulnerability ... and generates a probability value indicating a probability of each of a series of attack types occurring or being brought against the new vulnerability as an output therefrom.").
Regarding claim 12:
KIM teaches:
A computing apparatus comprising: a processor; and a memory storing instructions that, when executed by the processor, configure the apparatus to ([0022] "… a vulnerability information sharing apparatus comprising: a processor; a storage device which stores a program; and a memory which stores a plurality of operations to be executed by the processor …"):
The remainder of this claim’s limitations are mapped and rejected with the same justification, mutatis mutandis, as its counterpart claim 1.
Regarding claims 13, 14, and 16:
These claims are rejected with the same justification, mutatis mutandis, as their counterpart claims 2, 3, and 5 above.
Claims 4 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over LABRECHE (Doc ID US 20230038196 A1) and KIM (Doc ID US 20190156042 A1) as applied to claims 3 and 14 above, and further in view of LIN (Doc ID US 20240411898 A1).
Regarding claim 4:
The combination of LABRECHE and KIM teaches:
The method of claim 3,
LIN teaches the following limitations not taught by the combination of LABRECHE and KIM:
wherein applying the threat-intelligence information to the ML method further comprises that the second portion of the ML method comprises a second transformer neural network that predicts, based on a data taxonomy or ontology, a type of data set or schema for the software vulnerability ([0058] "At the third stage, initial scores are assigned to each vulnerability in block 103 using the Common Vulnerability Scoring System (CVSS) … Each of the vulnerabilities is assigned scores of … access complexity, authentication, … remediation level …").
Using ML to generate vulnerability data based on a data ontology is a known technique in the art, as demonstrated by LIN. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the vulnerability classification and data object of LABRECHE and KIM with the ML model of LIN with the motivation to generate vulnerability data from various forms of data related to the vulnerability.
Regarding claim 15:
This claim is rejected with the same justification, mutatis mutandis, as its counterpart claim 4 above.
Claims 6 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over LABRECHE (Doc ID US 20230038196 A1) and KIM (Doc ID US 20190156042 A1) as applied to claims 1 and 12 above, and further in view of MULLANEY (Doc ID US 20220286475 A1).
Regarding claim 6:
The combination of LABRECHE and KIM teaches:
The method of claim 1,
MULLANEY teaches the following limitations not taught by the combination of LABRECHE and KIM:
wherein the ML method has been trained using labeled training data that includes training threat-intelligence information that is labeled according to threat vectors, security categories, and data or schema categories ([0095] "… a CVSS vector may be an example of the training vulnerability vector. Thus, each training vulnerability vector of at least one training vulnerability data may comprise ...: [0096] attack vector (AV) metric … [0098] a privileges required (PR) metric ...").
Training a machine learning model with labeled vulnerability data is a known technique in the art, as demonstrated by MULLANEY. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the vulnerability classification and data object of LABRECHE and KIM with the labeled training data of MULLANEY with the motivation to provide supervised learning to the model. It is obvious to label training data so that the model is able to produce consistent results based on the desired outcome.
Regarding claim 17:
This claim is rejected with the same justification, mutatis mutandis, as its counterpart claim 6 above.
Claims 7 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over LABRECHE (Doc ID US 20230038196 A1) and KIM (Doc ID US 20190156042 A1) as applied to claims 1 and 12 above, and further in view of MOHANTY (Doc ID US 9692778 B1).
Regarding claim 7:
The combination of LABRECHE and KIM teaches:
The method of claim 1,
MOHANTY teaches the following limitations not taught by the combination of LABRECHE and KIM:
wherein the threat vector comprises the first indicia, and the first indicia comprises a predicted classification corresponding to one or more items from the group consisting of: a STRIDE threat category, a common vulnerability scoring system (CVSS) vector, a vulnerability type, the exploitation mechanism, an exploitation entry point, and MITRE ATT&CK framework tactics and techniques (Col 3 lines 43-58 "The system and method employ an algorithm that correlates vulnerabilities with contextual information such as threat data …. The algorithm works on a three dimensional ... model ...: Dimension#1 ... Related data could include base/temporal CVSS score, ... severity, etc. Dimension#2 ... Related data could include threat impact, impacted CVE ID, type of threat ...". Examiner notes that the remainder of the examples given in the claim are obvious to try, and represent a selection of alternatives among a finite number of identified, predictable solutions.).
Describing a threat using systems such as STRIDE and MITRE ATT&CK is a well-known technique in the art, as demonstrated by MOHANTY. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the vulnerability classification and data object of LABRECHE and KIM with the threat categorization of MOHANTY with the motivation to utilize well-known standards in threat categorization such as STRIDE, CVSS, and MITRE ATT&CK. It is obvious to use industry standards so that the threat information is expressed in a format understandable to systems designed for working with software vulnerabilities.
Regarding claim 18:
This claim is rejected with the same justification, mutatis mutandis, as its counterpart claim 7 above.
Claims 8, 9, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over LABRECHE (Doc ID US 20230038196 A1) and KIM (Doc ID US 20190156042 A1) as applied to claims 1 and 12 above, and further in view of ALEIDAN (Doc ID US 20210211450 A1).
Regarding claim 8:
The combination of LABRECHE and KIM teaches:
The method of claim 1,
ALEIDAN teaches the following limitations not taught by the combination of LABRECHE and KIM:
further comprising: providing the threat vector to a remediation processor ([0108] "... the vulnerability remediator 185 can access the database 140 ..., and prioritize the vulnerabilities …"); and
performing, by the remediation processor, a remediating action based on the threat vector ([0111] "… the remediation actions can be implemented to resolve the corresponding vulnerabilities …").
Providing remediation for an identified software vulnerability is a known technique in the art, as demonstrated by ALEIDAN. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the vulnerability classification and data object of LABRECHE and KIM with the vulnerability remediation of ALEIDAN with the motivation to provide the user with a means to mitigate the threat posed by a vulnerability. It is obvious to remediate vulnerabilities, where possible, with minimal input from the user.
Regarding claim 9:
The combination of LABRECHE, KIM, and ALEIDAN teaches:
The method of claim 8, wherein the remediating action is selected from the group consisting of quarantining a computer implementable instruction corresponding to the software vulnerability, installing a software patch, updating and/or upgrading software corresponding to the software vulnerability, defending privileges and/or accounts, enforcing signed software execution policies, exercising a recovery plan, managing systems and/or configurations, searching or scanning for network intrusions, engaging hardware security features, increasing segregation of networks and processors, and transitioning to multi-factor authentication (ALEIDAN [0094] "… The vulnerability remediator 185 ... can be arranged to mitigate the vulnerabilities by ... implementing patches or fixes ..., disconnecting the computer resource assets from the computer network 10 ..., breaking connectivity links to the computer resource assets, terminating access ..., modifying firewall policies or rules, or modifying router policies or rules." Examiner notes that the remainder of the examples given in the claim are obvious to try, and represent a selection of alternatives among a finite number of identified, predictable solutions.).
Utilizing vulnerability remediation methods such as patches, hardware security, and configuration management is a well-known technique in the art, as demonstrated by ALEIDAN. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to further modify the combination of LABRECHE, KIM, and ALEIDAN with the specific remediation strategies of ALEIDAN with the motivation to provide the user with a means to mitigate the threat posed by a vulnerability. It is obvious to use proven and well-known remediation strategies for cases in which they are appropriate.
Regarding claims 19 and 20:
These claims are rejected with the same justification, mutatis mutandis, as their counterpart claims 8 and 9 above.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over LABRECHE (Doc ID US 20230038196 A1) and KIM (Doc ID US 20190156042 A1) as applied to claim 1 above, and further in view of PAL et al (Doc ID US 20230062725 A1).
Regarding claim 10:
The combination of LABRECHE and KIM teaches:
The method of claim 1,
PAL teaches the following limitations not taught by the combination of LABRECHE and KIM:
further comprising: signaling the threat vector to a user ([0050] "... requesting user feedback on power-up to perform reinforced learning under critical situation to avoid nuisance tripping.");
receiving user feedback regarding values of the threat vector ([0051] "Step 511 includes checking whether user feedback is available or not;"); and
performing reinforcement learning based on the received user feedback to update the ML method ([0051] "… step 512, perform reinforced learning based on user feedback;").
Signaling output to a user, receiving user feedback on the output of a machine learning model, and using reinforcement learning to incorporate the feedback into the model are known techniques in the art, as demonstrated by PAL. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the vulnerability classification and data object of LABRECHE and KIM with the reinforcement learning of PAL with the motivation to maintain an up-to-date model producing results with as much accuracy as possible. It is obvious to use feedback and reinforcement learning in an environment where the security landscape changes at a rapid pace.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over LABRECHE (Doc ID US 20230038196 A1), KIM (Doc ID US 20190156042 A1), and PAL et al (Doc ID US 20230062725 A1) as applied to claim 10 above, and further in view of MELNIKOV et al (Doc ID US 20210182370 A1).
Regarding claim 11:
The combination of LABRECHE, KIM, and PAL teaches:
The method of claim 10,
MELNIKOV teaches the following limitations not taught by the combination of LABRECHE, KIM, and PAL:
further comprising: prior to receiving the user feedback, verifying, based on login credentials of the user, that the user is authorized to provide the user feedback ([0038] "Based on the authorize user's feedback, continuous authentication module 106 may retrain machine learning module 115, adjust the weights assigned to the respective usage attributes, … (if the current user is the authorized user).").
Ensuring a user is authorized before accepting their feedback for use in training a machine learning model is a known technique in the art, as demonstrated by MELNIKOV. It would have been obvious to a PHOSITA before the effective filing date of the claimed invention to modify the vulnerability classification and data object of LABRECHE, KIM, and PAL with the user authorization of MELNIKOV, with the motivation to ensure that only authorized users are able to provide feedback, and, by extension, changes, to the model. It is obvious to do this by checking a user’s authorization prior to accepting feedback from the user.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
KIM et al (Doc ID US 20230048076 A1) teaches a similar method of determining categories and outputting transformed data. However, it is performed on a file believed to be malicious and not on a vulnerability.
UEDA et al (Doc ID US 20230024824 A1) teaches a similar method of categorizing software vulnerabilities and vectorizing the results. However, it is performed on virtual vulnerabilities in order to determine whether they exist on a live system, not on a novel vulnerability input into the system.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRANDON BINCZAK whose telephone number is (703)756-4528. The examiner can normally be reached M-F 0800-1700.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor, can be reached at (571) 270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BB/Examiner, Art Unit 2437
/BENJAMIN E LANIER/Primary Examiner, Art Unit 2437