Prosecution Insights
Last updated: April 19, 2026
Application No. 17/661,960

Predictive Severity Matrix

Non-Final OA: §101, §103
Filed: May 04, 2022
Examiner: SPRAUL III, VINCENT ANTON
Art Unit: 2129
Tech Center: 2100 — Computer Architecture & Software
Assignee: Capital One Services LLC
OA Round: 3 (Non-Final)

Grant Probability: 59% (Moderate)
OA Rounds: 3-4
To Grant: 4y 6m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 59% (grants 59% of resolved cases; 20 granted / 34 resolved; +3.8% vs TC avg)
Interview Lift: +34.7% (allow rate of resolved cases with an interview vs. without; strong)
Avg Prosecution: 4y 6m (typical timeline)
Currently Pending: 30 applications
Total Applications: 64 (career history, across all art units)
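The figures above are derived statistics, and the report does not state exactly how they are computed. The following is a minimal sketch of one plausible derivation, assuming per-case records with granted and had_interview flags (hypothetical field names, not the tool's actual data model); the worked example only reproduces the 20-of-34 allow rate, not the reported +34.7% lift.

from dataclasses import dataclass
from typing import List

@dataclass
class ResolvedCase:
    granted: bool        # application issued as a patent
    had_interview: bool  # an examiner interview was held during prosecution

def allow_rate(cases: List[ResolvedCase]) -> float:
    # Share of resolved cases that were granted.
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: List[ResolvedCase]) -> float:
    # Allow rate among cases with an interview minus allow rate among cases without.
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Worked check against the headline figure: 20 granted of 34 resolved is about 59%.
cases = [ResolvedCase(granted=(i < 20), had_interview=(i % 2 == 0)) for i in range(34)]
print(f"career allow rate: {allow_rate(cases):.0%}")                  # -> 59%
print(f"interview lift (synthetic data): {interview_lift(cases):+.1%}")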

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 48.4% (+8.4% vs TC avg)
§102: 9.1% (-30.9% vs TC avg)
§112: 14.4% (-25.6% vs TC avg)

Tech Center average shown for comparison is an estimate. Based on career data from 34 resolved cases.
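One plausible reading of the per-statute percentages and the "vs TC avg" deltas is a rate computed over the subset of resolved cases that received each rejection type, compared against a Tech Center baseline. The sketch below illustrates that reading only; the field names and baseline values are made up and may not match how the tool actually derives these figures.

def statute_rates(cases, tc_baseline):
    # cases: list of dicts like {"statutes": {"101", "103"}, "granted": True}
    # tc_baseline: assumed Tech Center average rate per statute, e.g. {"101": 0.40}
    rates = {}
    for statute, baseline in tc_baseline.items():
        subset = [c for c in cases if statute in c["statutes"]]
        if not subset:
            continue
        rate = sum(c["granted"] for c in subset) / len(subset)
        rates[statute] = {"rate": rate, "delta_vs_tc": rate - baseline}
    return rates

# Tiny worked example with made-up records (not the dashboard's underlying data).
cases = [
    {"statutes": {"101"}, "granted": False},
    {"statutes": {"101", "103"}, "granted": True},
    {"statutes": {"103"}, "granted": True},
]
print(statute_rates(cases, {"101": 0.40, "103": 0.40}))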

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/19/2025 has been entered.

Response to Arguments

Applicant's arguments from 11/19/2025 have been fully considered.

Regarding the rejection of claims as judicial exceptions under 35 U.S.C. 101, Applicant submits that claim 1 is not directed to a mental process at Step 2A prong 1, because the claim includes elements that are not mental processes. Examiner respectfully disagrees. At Step 2A prong 1, the question is whether a claim recites any abstract ideas, not whether all elements are abstract ideas. As stated below, claim 1 recites "predicting, via [...] recogniz[ing] one or more relationships between the refinement data and new metric data representative of a new development operations tools metric data of the assets, a plurality of severity designations to assign to occurrence of an incident associated with the new metric data for a new entry to add to the severity matrix data, wherein the new entry comprises data indicating: an incident type; the plurality of severity designations; and an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the new metric data, each issue level comprising a different threshold amount," which can be performed as a mental process. Therefore the claim is not found eligible at Step 2A prong 1, and consideration must continue to Step 2A prong 2.

Applicant further submits that claim 1 is eligible at Step 2A prong 2, because the claim's recitation of "predicting, via a second machine learning model trained to recognize one or more relationships between the refinement data and new metric data representative of a new development operations tools metric data of the assets, a plurality of severity designations to assign to occurrence of an incident associated with the new metric data for a new entry to add to the severity matrix data" is an improvement to the functioning of computers, namely improving "the ability of computing devices to identify and predict severity designations as part of a new entry to an existing severity matrix." Examiner respectfully disagrees. The portion of the claim quoted by the Applicant is the mental process found by the Examiner. The abstract idea cannot provide the improvement itself ("It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements" (MPEP 2106.05(a))). The addition to the mental process in this portion of the claim, "via a second machine learning model trained to [...]," merely describes the performance of the abstract idea by a computer and therefore does not provide specific steps that would be recognized as an improvement.
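As context for the claim language quoted repeatedly above: the recited "new entry" is essentially a record containing an incident type, several severity designations, and an issue level per designation keyed to a distinct threshold amount. The sketch below illustrates only that data shape as recited in the claim; all names and values are hypothetical and are not drawn from the application's actual implementation or the cited art.

from dataclasses import dataclass
from typing import List

@dataclass
class IssueLevel:
    name: str          # e.g. "minor", "major", "critical"
    threshold: float   # threshold amount affected, in relation to the new metric data

@dataclass
class SeverityDesignation:
    designation: str         # e.g. "SEV-1" through "SEV-5"
    issue_level: IssueLevel  # each designation has an associated issue level

@dataclass
class SeverityMatrixEntry:
    incident_type: str
    designations: List[SeverityDesignation]  # the "plurality of severity designations"

# Example entry: each issue level carries a different threshold amount, as the claim requires.
entry = SeverityMatrixEntry(
    incident_type="deployment failure rate spike",
    designations=[
        SeverityDesignation("SEV-3", IssueLevel("minor", 0.05)),
        SeverityDesignation("SEV-2", IssueLevel("major", 0.20)),
        SeverityDesignation("SEV-1", IssueLevel("critical", 0.50)),
    ],
)
print(entry)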
Applicant further submits that claim 1 is eligible at Step 2B, because "the claim describes an inventive concept that predicts a plurality of severity designations for a given occurrence of an incident associated with the new metric data." Examiner respectfully disagrees. As stated in MPEP 2106.05(I), "[a]n inventive concept 'cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself.' [...] Instead, an 'inventive concept' is furnished by an element or combination of elements that is recited in the claim in addition to (beyond) the judicial exception, and is sufficient to ensure that the claim as a whole amounts to significantly more than the judicial exception itself." The further elements in claim 1 describe the performance of the identified mental process using a computer, and the updating of the severity matrix and model. Merely updating the severity matrix and model with the new data generated by the mental process does not provide an inventive concept under Step 2B. The arguments are therefore found unpersuasive.

Regarding the rejection of claims under 35 U.S.C. 103, Applicant's arguments are directed towards amended portions of the claims that have not been previously examined. New grounds of rejection under 35 U.S.C. 103 are given below.

Claim Objections

Claim 1 is objected to because of the following informality. In the phrase "a plurality of severity designations to assign to occurrence of an incident associated with the new metric data for a new entry to add to the severity matrix data," Examiner considers the phrase "to occurrence" to be non-standard English and suggests that "to an occurrence" or "to occurrences" was intended. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-8, 10-15, and 17-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Analysis is provided for the claims under the guidelines of MPEP 2106.

Regarding claim 1:

Step 1: The claim recites "A method comprising" steps that follow. Thus the claim is to a process, which is a statutory category of invention.

Step 2A prong 1: The limitation (bold only) "predicting, via a second machine learning model trained to recognize one or more relationships between the refinement data and new metric data representative of a new development operations tools metric data of the assets, a plurality of severity designations to assign to occurrence of an incident associated with the new metric data for a new entry to add to the severity matrix data, wherein the new entry comprises data indicating: an incident type; the plurality of severity designations; and an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the new metric data, each issue level comprising a different threshold amount" in its broadest reasonable interpretation, recites a mental process. Given a compilation of input data, consisting of ownership data, metric data, and severity matrix data, and refinement data that updates the input data, a person could recognize relationships between the refinement data and new metric data.
And further, the person could determine severity designations for application to new incidents, where the designations include issue levels categorized by threshold amounts, with different thresholds for different issue levels. A person could perform this process using judgment and opinion. Thus, the claim recites an abstract idea. Step 2A prong 2: The further element “compiling, by a first computing device, ownership data, metric data, and severity matrix data as input data to a machine learning model data store” recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). The further elements “wherein the ownership data comprises data representative of assets of an entity and data representative of relationships between the assets, wherein the metric data comprises data representative of development operations tools metric data of the assets, and wherein the severity matrix data comprises a plurality of entries, wherein each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data” recite the source or type of data gathered. These elements merely link the judicial exception to a particular field of use, which does not integrate the exception into a practical application. The further elements “receiving, from a second computing device via a first machine learning model trained to recognize one or more relationships between the input data in the machine learning model data store, refinement data” describes model training at a high level of generality. No particular model or method of training is described. The elements merely recite the use of a computer as a tool to perform the abstract idea, and are equivalent to adding the words “apply it” or the equivalent to the judicial exception (MPEP 2106.05(f)). The further element “wherein the refinement data updates the input data in the machine learning model data store” recites model updating at a high level of generality. No particular model, method of training, or method of updating is described. The elements merely recite the use of a computer as a tool to perform the abstract idea, and are equivalent to adding the words “apply it” or the equivalent to the judicial exception (MPEP 2106.05(f)). The limitation (bold only) “predicting, via a second machine learning model trained to recognize one or more relationships between the refinement data and new metric data representative of a new development operations tools metric data of the assets, a plurality of severity designations to assign to occurrence of an incident associated with the new metric data for a new entry to add to the severity matrix data” recites model training at a high level of generality. No particular model or method of training is described. The elements merely recite the use of a computer as a tool to perform the abstract idea, and are equivalent to adding the words “apply it” or the equivalent to the judicial exception (MPEP 2106.05(f)). The further element “modifying, based on user input associated with the predicted new entry, the severity matrix data to include the predicted new entry; and causing, based on the user input associated with the predicted new entry, at least one modification to the second machine learning model” recites data updating and model retraining at a high level of generality. No particular method of updating or retraining is described. 
The association between the user input and the predicted new entry is not defined and reasonably includes a mere user confirmation of the data update. Therefore the elements merely recite the use of a computer as a tool to perform the abstract idea, and are equivalent to adding the words “apply it” or the equivalent to the judicial exception (MPEP 2106.05(f)). Thus, the additional elements merely connect the abstract idea to a field of use, recite the use of a computer as a tool to perform the abstract idea or recite insignificant extra-solution activity. Taken alone, the additional elements do not integrate the abstract idea into a practical application. Considering the elements together as an ordered combination adds nothing that is not present from examining the elements individually. The elements, individually or together, do not describe an improvement in the functioning of technology. Step 2B: The claim as a whole does not amount to significantly more than the recited judicial exception. The element “compiling, by a first computing device, ownership data, metric data, and severity matrix data as input data to a machine learning model data store” recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). The further elements “wherein the ownership data comprises data representative of assets of an entity and data representative of relationships between the assets, wherein the metric data comprises data representative of development operations tools metric data of the assets, and wherein the severity matrix data comprises a plurality of entries, wherein each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data” merely link the judicial exception to a particular field of use. These additional claim elements merely recite the use of a computer as a tool to perform the abstract idea: “receiving, from a second computing device via a first machine learning model trained to recognize one or more relationships between the input data in the machine learning model data store, refinement data, wherein the refinement data updates the input data in the machine learning model data store” (bold only) “predicting, via a second machine learning model trained to recognize one or more relationships between the refinement data and new metric data representative of a new development operations tools metric data of the assets, a plurality of severity designations to assign to occurrence of an incident associated with the new metric data for a new entry to add to the severity matrix data” “modifying, based on user input associated with the predicted new entry, the severity matrix data to include the predicted new entry; and causing, based on the user input associated with the predicted new entry, at least one modification to the second machine learning model” Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. Regarding claim 2: For step 2A prong 1, claim 2 further limits claim 1 and the same elements in claim 2 still recite an abstract idea. 
For step 2A prong 2, the further element “wherein the user input indicates a confirmation of adding the new entry to the severity matrix data” recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The element “wherein the user input indicates a confirmation of adding the new entry to the severity matrix data” recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. Regarding claim 4: For step 2A prong 1, claim 4 further limits claim 1 and the same elements in claim 4 still recite an abstract idea. The further element “wherein the predicting the new entry comprises identifying one or more specific characteristics of entries within the severity matrix data and the new metric data” further limits the abstract idea of claim 1 but it remains an abstract idea; in performing the predicting using judgement and opinion, a person could identify specific characteristics of entries. For step 2A prong 2, and step 2B, no further elements remain to be considered. The claim as a whole does not amount to significantly more than the recited judicial exception and is ineligible under 35 U.S.C. 101. Regarding claim 5: For step 2A prong 1, claim 5 further limits claim 4 and the same elements in claim 5 still recite an abstract idea. The further element “wherein the one or more specific characteristics include one or more of cloud infrastructure, physical infrastructure, a recovery time objective, or a customer base” further limits the abstract idea of claim 1 but it remains an abstract idea; in performing the predicting using judgement and opinion, a person could identify one or more of the specific characteristics listed in the claim. For step 2A prong 2, and step 2B, no further elements remain to be considered. The claim as a whole does not amount to significantly more than the recited judicial exception and is ineligible under 35 U.S.C. 101. Regarding claim 6: For step 2A prong 1, claim 6 further limits claim 1 and the same elements in claim 6 still recite an abstract idea. For step 2A prong 2, the further element “wherein the first and second computing devices are the same computing device” merely allows the two devices to be the same. This does not alter that the use of computing devices in the claim merely recites the use of a computer as a tool to perform the abstract idea, and is equivalent to adding the words “apply it” or the equivalent to the judicial exception (MPEP 2106.05(f)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The limitation “wherein the first and second computing devices are the same computing device” merely recites the use of a computer as a tool to perform the abstract idea. Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. 
The claim is not eligible under 35 U.S.C. 101. Regarding claim 7: For step 2A prong 1, claim 7 further limits claim 1 and the same elements in claim 7 still recite an abstract idea. For step 2A prong 2, the further element “wherein the user input indicates a modification to the new entry to the severity matrix data” recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The element “wherein the user input indicates a modification to the new entry to the severity matrix data” recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. Regarding claim 8: For step 2A prong 1, claim 8 further limits claim 7 and the same elements in claim 8 still recite an abstract idea. For step 2A prong 2, the further element “wherein modifying the severity matrix data includes the modified new entry” recites updating a data table at a high level of generality. The element merely recites the use of a computer as a tool to perform the abstract idea, and is equivalent to adding the words “apply it” or the equivalent to the judicial exception (MPEP 2106.05(f)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The limitation “wherein modifying the severity matrix data includes the modified new entry” merely recites the use of a computer as a tool to perform the abstract idea. Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. Regarding claim 10: For step 2A prong 1, claim 10 further limits claim 1 and the same elements in claim 10 still recite an abstract idea. For step 2A prong 2, the further element “receiving, by the first computing device, the ownership data” recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The element “receiving, by the first computing device, the ownership data” recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. Regarding claim 11: For step 2A prong 1, claim 11 further limits claim 1 and the same elements in claim 11 still recite an abstract idea. For step 2A prong 2, the further element “receiving, by the first computing device, the metric data” recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). 
For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The element “receiving, by the first computing device, the metric data” recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. Regarding claim 12: For step 2A prong 1, claim 12 further limits claim 1 and the same elements in claim 12 still recite an abstract idea. For step 2A prong 2, the further element “receiving, by the first computing device, the severity matrix data” recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The element “receiving, by the first computing device, the severity matrix data” recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. Regarding claim 13: For step 2A prong 1, claim 13 further limits claim 1 and the same elements in claim 13 still recite an abstract idea. For step 2A prong 2, the further element “receiving, by the second computing device, the new metric data” recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The element “receiving, by the second computing device, the new metric data” recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. Regarding claim 14: Step 1: The claim recites “A method comprising” steps that follow. Thus the claim is to a process, which is a statutory category of invention. Step 2A prong 1: The limitation “identifying an entry of the development operations tools metric data for input to a second machine learning training model trained to recognize one or more relationships between the input data in the machine learning model data store and the identified entry” in its broadest reasonable interpretation, recites a mental process. For example, a person could review metric data and identify an entry based on its perceived value in improving a model, using judgement. 
The limitation (bold only) "predicting, via the second machine learning model, a modification to a plurality of severity designations associated with an incident type associated with the identified entry, the modification comprising data indicating: the incident type associated with the identified entry; the plurality of severity designations; an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the development operations tools metric data, each issue level comprising a different threshold amount," in its broadest reasonable interpretation, recites a mental process. A person could generate a modification to the severity designations, including an incident type and severity designations, where the designations include an issue level, and the different issue levels are each associated with a different threshold amount. A person could perform this process using judgment and opinion. Thus, the claim recites an abstract idea.

Step 2A prong 2: The further element "compiling, by a first computing device, ownership data, metric data, and severity matrix data as input data to a machine learning model data store" recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). The further elements "wherein the ownership data comprises data representative of assets of an entity and data representative of relationships between the assets, wherein the metric data comprises data representative of development operations tools metric data of the assets, and wherein the severity matrix data comprises a plurality of entries, wherein each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data" recite the source or type of data gathered. This merely links the judicial exception to a particular field of use, which does not integrate the exception into a practical application. The limitation (bold only) "predicting, via the second machine learning model, a modification to a plurality of severity designations associated with an incident type associated with the identified entry" recites model training at a high level of generality. No particular model or method of training is described. The elements merely recite the use of a computer as a tool to perform the abstract idea, and are equivalent to adding the words "apply it" or the equivalent to the judicial exception (MPEP 2106.05(f)). The further element "and modifying, based on user input associated with the predicted modification to the identified entry, the severity matrix data to include the predicted modification to the identified entry; and causing, based on the user input associated with the predicted modification to the identified entry, at least one modification to the second machine learning model" recites data updating and model retraining at a high level of generality. No particular method of updating or model retraining is described. The association between the user input and the predicted modification is not defined and reasonably includes a mere user confirmation of the data update. Therefore the elements merely recite the use of a computer as a tool to perform the abstract idea, and are equivalent to adding the words "apply it" or the equivalent to the judicial exception (MPEP 2106.05(f)).
Thus, the additional elements merely connect the abstract idea to a field of use, recite the use of a computer as a tool to perform the abstract idea or recite insignificant extra-solution activity. Taken alone, the additional elements do not integrate the abstract idea into a practical application. Considering the elements together as an ordered combination adds nothing that is not present from examining the elements individually. The elements, individually or together, do not describe an improvement in the functioning of technology. Step 2B: The claim as a whole does not amount to significantly more than the recited judicial exception. The element “compiling, by a first computing device, ownership data, metric data, and severity matrix data as input data to a machine learning model data store” recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). The further elements “wherein the ownership data comprises data representative of assets of an entity and data representative of relationships between the assets, wherein the metric data comprises data representative of development operations tools metric data of the assets, and wherein the severity matrix data comprises a plurality of entries, wherein each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data” merely link the judicial exception to a particular field of use. These additional claim elements merely recite the use of a computer as a tool to perform the abstract idea: (bold only) “predicting, via the second machine learning model, a modification to a plurality of severity designations associated with an incident type associated with the identified entry” “and modifying, based on user input associated with the predicted modification to the identified entry, the severity matrix data to include the predicted modification to the identified entry; and causing, based on the user input associated with the predicted modification to the identified entry, at least one modification to the second machine learning model” Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. Regarding claim 15: For step 2A prong 1, claim 15 further limits claim 14 and the same elements in claim 15 still recite an abstract idea. For step 2A prong 2, the further element “wherein the user input indicates confirmation of modifying the identified entry” recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The element “wherein the user input indicates confirmation of modifying the identified entry” recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101. 
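The eligibility analyses of claims 1 and 14 above characterize the same general flow: compile input data, obtain refinement data from a first model, predict a new or modified severity-matrix entry with a second model, then update the matrix and the second model based on user input. As a reading aid only, the following is a schematic of that flow at the same high level of generality discussed above; the function names and placeholder logic are assumptions, not the claimed models or the applicant's implementation.

def compile_input_data(ownership, metrics, severity_matrix):
    # "compiling ... ownership data, metric data, and severity matrix data as input
    # data to a machine learning model data store" (placeholder: a plain dict).
    return {"ownership": ownership, "metrics": metrics, "severity_matrix": severity_matrix}

def first_model_refinement(store):
    # Stand-in for the "first machine learning model": emit refinement data that
    # updates the input data in the store. No real model is implied here.
    refinement = {"note": "relationships recognized across the input data"}
    store["refinement"] = refinement
    return refinement

def second_model_predict(refinement, new_metric_data):
    # Stand-in for the "second machine learning model": predict a new entry
    # (incident type, severity designations, issue levels) for the matrix.
    return {"incident_type": new_metric_data.get("incident_type", "unknown"),
            "designations": [{"designation": "SEV-2", "issue_level": "major", "threshold": 0.2}]}

def apply_user_input(store, predicted_entry, confirmed):
    # On user confirmation, add the predicted entry to the matrix and mark the
    # second model for "at least one modification" (e.g., retraining).
    if confirmed:
        store["severity_matrix"].append(predicted_entry)
        store["second_model_needs_update"] = True

store = compile_input_data(ownership=[], metrics=[], severity_matrix=[])
refinement = first_model_refinement(store)
entry = second_model_predict(refinement, {"incident_type": "error-rate spike"})
apply_user_input(store, entry, confirmed=True)
print(store["severity_matrix"], store.get("second_model_needs_update"))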
Regarding claim 17: For step 2A prong 1, claim 17 further limits claim 14 and the same elements in claim 17 still recite an abstract idea. The further element "wherein the predicting the modification comprises identifying one or more specific characteristics of the identified entry and other entries within the severity matrix data" further limits the abstract idea of claim 14 but it remains an abstract idea; in performing the predicting using judgment and opinion, a person could identify specific characteristics of entries. For step 2A prong 2, and step 2B, no further elements remain to be considered. The claim as a whole does not amount to significantly more than the recited judicial exception and is ineligible under 35 U.S.C. 101.

Regarding claim 18: For step 2A prong 1, claim 18 further limits claim 14 and the same elements in claim 18 still recite an abstract idea. For step 2A prong 2, the further element "wherein the user input indicates change to the predicted modification to the identified entry to the severity matrix data" recites mere data gathering, which is insignificant extra-solution activity (MPEP 2106.05(g)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The element "wherein the user input indicates change to the predicted modification to the identified entry to the severity matrix data" recites mere data gathering, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101.

Regarding claims 19-20: These claims recite "One or more non-transitory media storing instructions that, when executed by one or more processors, cause the one or more processors to perform steps comprising" steps that follow. Therefore, the claims recite a product, which is a statutory category of invention under step 1. The claims are otherwise analogous to claims 1-2, respectively, and are rejected by the same arguments.

Regarding claim 21: For step 2A prong 1, claim 21 further limits claim 1 and the same elements in claim 21 still recite an abstract idea. For step 2A prong 2, the element "outputting a notification of the modification to the severity matrix data comprising the new entry" recites mere data output, which is insignificant extra-solution activity (MPEP 2106.05(g)). Step 2B: The claim as a whole does not amount to significantly more than the recited judicial exception. The element "outputting a notification of the modification to the severity matrix data comprising the predicted new entry" recites mere data output, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101.
Regarding claim 22: For step 2A prong 1, claim 22 further limits claim 14 and the same elements in claim 22 still recite an abstract idea. For step 2A prong 2, the element "outputting a notification of the modification to the severity matrix data comprising the predicted new entry" recites mere data output, which is insignificant extra-solution activity (MPEP 2106.05(g)). Step 2B: The claim as a whole does not amount to significantly more than the recited judicial exception. The element "outputting a notification of the modification to the severity matrix data comprising the predicted new entry" recites mere data output, which is recognized as well-understood, routine, and conventional activity in the art (see MPEP § 2106.05(d)(II)(i)). Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101.

Regarding claim 23: For step 2A prong 1, claim 23 further limits claim 18 and the same elements in claim 23 still recite an abstract idea. For step 2A prong 2, the element "wherein modifying the severity matrix data includes the change to the predicted modification to the identified entry" recites data updating at a high level of generality. No particular method of updating is described. Therefore the elements merely recite the use of a computer as a tool to perform the abstract idea, and are equivalent to adding the words "apply it" or the equivalent to the judicial exception (MPEP 2106.05(f)). For step 2B, the claim as a whole does not amount to significantly more than the recited judicial exception. The additional element "wherein modifying the severity matrix data includes the change to the predicted modification to the identified entry" recites mere instructions to apply the abstract idea. Even when considered in combination, the additional elements connect the abstract idea to a field of use, represent mere instructions to apply the abstract idea to a computer or represent insignificant extra-solution activity, which do not provide an inventive concept. The claim is not eligible under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-8, 10-15, and 17-23 are rejected under 35 U.S.C. 103 over Goodwin et al., US Patent No. 12,169,794 (hereafter Goodwin) in view of Bulut et al., US Pre-Grant Publication No. 2021/0075814 (hereafter Bulut) and Panigrahi et al., US Pre-Grant Publication No. 2021/0065482 (hereafter Panigrahi).

Regarding claim 1 and analogous claim 19: Goodwin teaches: "A method comprising": Goodwin, col. 1, lines 39-40, "Various embodiments of the disclosed inventions relate to a computer-implemented method, comprising [A method comprising]:"; Goodwin, col.
24, lines 11-19, “Example computing systems and devices may include one or more processing units each with one or more processors, one or more memory units each with one or more memory devices, and one or more system buses that couple various components including memory units to processing units. Each memory device may include non-transient volatile storage media, non-volatile storage media, non-transitory storage media ( e.g., one or more volatile and/or non-volatile memories), etc.” “compiling, by a first computing device, ownership data, metric data, and severity matrix data as input data to a machine learning model data store”: Goodwin, col. 10, lines 1-4, “Database 130 may provide the provider system 110 with large sets of application data [ownership data, metric data, and severity matrix data] which may be filtered and processed [compiling] by the provider system 110 for use as training datasets [as input data to a machine learning model data store].” “wherein the ownership data comprises data representative of assets of an entity and data representative of relationships between the assets”: Goodwin, Table 1, “Flag to note if application has a CP Discrete Variable (coordination point) server assigned for all its assets [sample data item which is representative of assets of an entity and representative of relationships between the assets]” (bold only) “wherein the metric data comprises data representative of development operations tools metric data of the assets”: Goodwin, Table 1, “Flag to note if application has a meets the Discrete Variable requirement that all its assets should meet the 200 miles distance criterion [representative of … metric data of the assets] between production servers and CP servers” “and wherein the severity matrix data comprises a plurality of entries, wherein each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data”: Goodwin, Table 1, “ SEV12_FLAG; Flag to note if application had a severity 1 or 2 event; Response Variable SEV3_CNT; Number of severity 3 events in rolling 1 year time period; Continuous Variable SEV4_CNT; Number of severity 4 events in rolling 1 year time period; Continuous Variable SEV5_CNT; Number of severity 5 events in rolling 1 year time period; Continuous Variable SEV45_CNT; sum of severity 4 and 5 events in rolling 1 year time period; Continuous Variable [plurality of entries, wherein each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data].” “receiving, from a second computing device via a first machine learning model trained to recognize one or more relationships between the input data in the machine learning model data store, refinement data, wherein the refinement data updates the input data in the machine learning model data store”: Goodwin, col. 6, lines 44-63, “In various embodiments, a predictive model may be trained to provide a classifier capable of, for example, accepting, as inputs, states or features of one or more applications in an enterprise IT system and provide, as outputs, probabilities of subsequent high severity events. The predictive model may be trained, for example, using a training dataset that includes features of applications that were previously involved in high severity events [a first machine learning model trained to recognize one or more relationships between the input data in the machine learning model data store]. 
In some embodiments, the model may be retrained on a regular basis (such as each time a new high severity event is detected) using a training dataset that additionally includes features of the applications involved in the new high severity event. In certain embodiments, the model may be retrained periodically (e.g., every week, month, quarter, or year). In some embodiments, the retraining may use training datasets [wherein the refinement data updates the input data in the machine learning model data store] that account for actual outcomes as compared with predicted likelihoods of high severity events [receiving, from a second computing device … refinement data]. The parameters of the predictive model may be adjusted or updated based on new data that may include, for example, prior predictions, user reprioritizations, newly-added features and state data, etc.” “predicting, via a second machine learning model trained to recognize one or more relationships between the refinement data and new metric data representative of a new development operations tools metric data of the assets, a plurality of severity designations to assign to occurrence of an incident associated with the new metric data for a new entry to add to the severity matrix data”: Goodwin, col. 6, lines 44-63, “In various embodiments, a predictive model may be trained to provide a classifier capable of, for example, accepting, as inputs, states or features of one or more applications in an enterprise IT system and provide, as outputs, probabilities of subsequent high severity events. The predictive model may be trained, for example, using a training dataset that includes features of applications that were previously involved in high severity events. In some embodiments, the model may be retrained on a regular basis (such as each time a new high severity event is detected) using a training dataset that additionally includes features of the applications involved in the new high severity event. In certain embodiments, the model may be retrained periodically (e.g., every week, month, quarter, or year). In some embodiments, the retraining may use training datasets that account for actual outcomes as compared with predicted likelihoods of high severity events. The parameters of the predictive model may be adjusted or updated based on new data that may include, for example, prior predictions, user reprioritizations, newly-added features and state data, etc. [predicting, via a second machine learning model trained to recognize one or more relationships between the refinement data and new metric data representative of a new development operations tools metric data of the assets, a plurality of severity designations to assign to occurrence of an incident associated with the new metric data for a new entry to add to the severity matrix data] “wherein the new entry comprises data indicating: an incident type; the plurality of severity designations”: Goodwin, col. 4, lines 32-42, “Severity types may be created to classify the severity of events that may be caused by the failure of various applications [the new entry comprises data indicating: an incident type]. High severity events may be so rare and severe that they may be a proxy for information technology and operational disruption. In the present disclosure, severities are categorized along a spectrum from type 1 to type 5 in decreasing order of severity [the plurality of severity designations]. 
Severity types 1 and 2 will be classified as high severity 40 events, however it should be appreciated that other classifiers, ranks, and identifiers may be used to identify high severity events.” “modifying, based on user input associated with the new entry, the severity matrix data to include the new entry”: Goodwin, col. 2, lines 36-45, “displaying the ranked probability of the high-severity event for a number of applications in the application set on one or more pages of a graphical user interface, the graphical user interface having one or more selectable graphical components; and in response to a user interacting with the one or more graphical components [based on user input associated with the predicted new entry], modifying the number of displayed applications [modifying, based on user input associated with the predicted new entry, the severity matrix data to include the predicted new entry, interpreted as including modifying the display of an application’s severity data], the ranking of each of the applications in the application set, and the probability of the high-severity event for one or more applications in the application set.” “and causing, based on the user input associated with the new entry, at least one modification to the second machine learning model“: Goodwin, col. 21, lines 24-41, “In an embodiment, applications may be classified using the CAT score based on one or more user's experience. For example, user experience with an application may result in the user (or group of users) classifying the application as failing 60% of the time. Thus, the application may be classified as being a medium application. A different group may classify applications according to different experiences. For instance, the same application may be classified differently to a different group of users. For example, an application used every day may be considered a critical application to that group of users. Additionally or alternatively, users may manually classify and reclassify applications based on whether the application interfaces with one or more third parties [based on the user input associated with the predicted new entry]. For example, users may classify applications that interact with third parties directly as critical application because the failure of the application may disrupt third party experiences with the application”; Goodwin, col. 6, lines 44-63, “In various embodiments, a predictive model may be trained to provide a classifier capable of, for example, accepting, as inputs, states or features of one or more applications in an enterprise IT system and provide, as outputs, probabilities of subsequent high severity events. The predictive model may be trained, for example, using a training dataset that includes features of applications that were previously involved in high severity events. In some embodiments, the model may be retrained on a regular basis (such as each time a new high severity event is detected) using a training dataset that additionally includes features of the applications involved in the new high severity event. In certain embodiments, the model may be retrained periodically (e.g., every week, month, quarter, or year). In some embodiments, the retraining may use training datasets that account for actual outcomes as compared with predicted likelihoods of high severity events [causing … at least one modification to the second machine learning model]. 
The parameters of the predictive model may be adjusted or updated based on new data that may include, for example, prior predictions, user reprioritizations, newly-added features and state data, etc.” Goodwin does not explicitly teach: (bold only) “wherein the metric data comprises data representative of development operations tools metric data of the assets” “an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the new metric data, each issue level comprising a different threshold amount” Bulut teaches (bold only) “wherein the metric data comprises data representative of development operations tools metric data of the assets”: Bulut, paragraph 0062, “Metric assignment component 108 can employ such a model defined above (e.g., LSTM, GRU, CNN, etc.) to assign one or more of such risk assessment metrics defined above based on vulnerability data of a compliance process, where such a compliance process can include, but is not limited to: a security process; a patching process; an identity and access management process; a development and operations (DevOps) process [metric data comprises data representative of development operations tools metric data of the assets]; a development, security, and operations (DevSecOps) process; a runtime process; and/or another compliance process. In some embodiments, examples of such vulnerability data of such a compliance process can include, but is not limited to, vulnerability descriptions, vulnerability categories, and/or vulnerability scores corresponding to vulnerabilities ( e.g., defects) of the compliance process.” Bulut and Goodwin are analogous arts as they are both related to risk assessment of IT processes. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the use of developer operations tools data from Bulut to the teachings of Goodwin to arrive at the present invention, in order to help assess the vulnerabilities of the system under analysis, as stated in Bulut, paragraph 0062, “Metric assignment component 108 can employ such a model defined above (e.g., LSTM, GRU, CNN, etc.) to assign one or more of such risk assessment metrics defined above based on vulnerability data of a compliance process, where such a compliance process can include, but is not limited to: a security process; a patching process; an identity and access management process; a development and operations (DevOps) process; a development, security, and operations (DevSecOps) process; a runtime process; and/or another compliance process. In some embodiments, examples of such vulnerability data of such a compliance process can include, but is not limited to, vulnerability descriptions, vulnerability categories, and/or vulnerability scores corresponding to vulnerabilities ( e.g., defects) of the compliance process.” Panigrahi teaches “an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the new metric data, each issue level comprising a different threshold amount”: Panigrahi, paragraph 0053, “In the block 415, the computer 105 determines an impact severity for the wheel 155 determined in the block 410 to have experienced an impact. An impact severity describes a degree of significance or severity of a wheel impact. 
The impact severity can be quantified, e.g., expressed according to a numeric scale. In Table 4 below, which provides an example of conditions that can be applied by the computer 105 to determine impact severity, the impact severity is ranked on a scale of zero to five. Units of acceleration in the below table are standard gravities, sometimes notated "G" or "g." The expression blim refers to an empirically established brake torque limit, e.g., by subjecting wheels to specified impacts and noting brake torques"; Panigrahi, Table 4 [table reproduced as an image in the Office action; showing impact severities (issue levels) categorized by condition thresholds, hence, an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the new metric data, each issue level comprising a different threshold amount].

Panigrahi and Goodwin are analogous arts as they are both related to severity analysis. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the threshold-conditioned issue levels of Panigrahi with the teachings of Goodwin to arrive at the present invention, in order to make decisions based on a severity issue level, as stated in Panigrahi, paragraph 0013, "Transmitting the message can include transmitting the message to a selected controller based on an impact severity level and an identity of the wheel impacted. The controller, based on receiving the message, can adjust monitoring of the vehicle component. Adjusting monitoring can include an adjustment based on an impact severity level and the identity of the wheel impacted."

Regarding claim 2 and analogous claim 20: Goodwin as modified by Bulut and Panigrahi teaches "The method of claim 1." Goodwin further teaches "wherein the user input indicates a confirmation of adding the new entry to the severity matrix data": Goodwin, col. 2, lines 36-45, "displaying the ranked probability of the high-severity event for a number of applications in the application set on one or more pages of a graphical user interface, the graphical user interface having one or more selectable graphical components; and in response to a user interacting with the one or more graphical components [the user input indicates a confirmation], modifying the number of displayed applications [adding the new entry to the severity matrix data, interpreted as including modifying the display of an application's severity data], the ranking of each of the applications in the application set, and the probability of the high-severity event for one or more applications in the application set."

Regarding claim 4: Goodwin as modified by Bulut and Panigrahi teaches "The method of claim 1." Goodwin further teaches "wherein the predicting the new entry comprises identifying one or more specific characteristics of entries within the severity matrix data and the new metric data": Goodwin, col. 16, lines 36-42, "In various embodiments, step 210-1 may be performed by the provider system 110 to determine parameters for the independent variables.
Parameters may be determined to tune the independent variables ( e.g., the features most indicative of high severity events that were determined in process 208 [identifying one or more specific characteristics of entries within the severity matrix data and the new metric data]) using any appropriate technique of tuning parameters.” Regarding claim 5: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 4.” Goodwin further teaches “wherein the one or more specific characteristics include one or more of cloud infrastructure, physical infrastructure, a recovery time objective, or a customer base”: Goodwin, col. 16, lines 24-29, “The CP to production server ratio independent variable conveys an infrastructure footprint. That is, the ratio may measure the hardware associated with each application [physical infrastructure]. An application relying on many servers, databases and the like may have a noticeably different ratio than an application relying on only one server.” Regarding claim 6: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 1.” Goodwin further teaches “wherein the first and second computing devices are the same computing device”: Goodwin, col. 6, lines 44-63, “In various embodiments, a predictive model may be trained to provide a classifier capable of, for example, accepting, as inputs, states or features of one or more applications in an enterprise IT system and provide, as outputs, probabilities of subsequent high severity events. The predictive model may be trained, for example, using a training dataset that includes features of applications that were previously involved in high severity events. In some embodiments, the model may be retrained [second model is a retrained first model, hence, the first and second computing devices are the same computing device] on a regular basis (such as each time a new high severity event is detected) using a training dataset that additionally includes features of the applications involved in the new high severity event. In certain embodiments, the model may be retrained periodically (e.g., every week, month, quarter, or year). In some embodiments, the retraining may use training datasets that account for actual outcomes as compared with predicted likelihoods of high severity events. The parameters of the predictive model may be adjusted or updated based on new data that may include, for example, prior predictions, user reprioritizations, newly-added features and state data, etc.” Regarding claim 7: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 1.” Goodwin further teaches “wherein the user input indicates a modification to the new entry to the severity matrix data”: Goodwin, col. 21, lines 24-41, “In an embodiment, applications may be classified using the CAT score based on one or more user's experience. For example, user experience with an application may result in the user (or group of users) classifying the application as failing 60% of the time. Thus, the application may be classified as being a medium application. A different group may classify applications according to different experiences. For instance, the same application may be classified differently to a different group of users. For example, an application used every day may be considered a critical application to that group of users. 
Additionally or alternatively, users may manually classify and reclassify applications based on whether the application interfaces with one or more third parties [wherein the user input indicates a modification to the new entry to the severity matrix data]. For example, users may classify applications that interact with third parties directly as critical application because the failure of the application may disrupt third party experiences with the application.” Regarding claim 8: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 7.” Goodwin further teaches “wherein modifying the severity matrix data includes the modified new entry”: Goodwin, col. 21, lines 24-41, “In an embodiment, applications may be classified using the CAT score based on one or more user's experience. For example, user experience with an application may result in the user (or group of users) classifying the application as failing 60% of the time. Thus, the application may be classified as being a medium application. A different group may classify applications according to different experiences. For instance, the same application may be classified differently to a different group of users. For example, an application used every day may be considered a critical application to that group of users. Additionally or alternatively, users may manually classify and reclassify applications based on whether the application interfaces with one or more third parties. For example, users may classify applications that interact with third parties directly as critical application because the failure of the application may disrupt third party experiences with the application”; Goodwin, col. 21, lines 42-51, “The CAT scores may be fed as an input into the predictive model 115 [wherein modifying the severity matrix data includes the modified new entry]. For instance, one or more features may be extracted from the application dataset that represent or are otherwise associated with the CAT score ( or other ranking system) of each of the applications in the dataset. The CAT score may be treated as an independent variable of the model and considered in the determination of the probability of applications causing high severity events. Alternatively, as shown, the CAT scores may be displayed in conjunction with the results from predictive model 115.” Regarding claim 10: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 1.” Goodwin further teaches “receiving, by the first computing device, the ownership data”: Goodwin, col. 10, lines 1-4, “Database 130 may provide the provider system 110 with large sets of application data [receiving, by the first computing device] which may be filtered and processed by the provider system 110 for use as training datasets”; Goodwin, Table 1, “Flag to note if application has a CP Discrete Variable (coordination point) server assigned for all its assets [sample data item which includes ownership data].” Regarding claim 11: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 1.” Goodwin further teaches “receiving, by the first computing device, the metric data”: Goodwin, col. 
10, lines 1-4, “Database 130 may provide the provider system 110 with large sets of application data [receiving, by the first computing device] which may be filtered and processed by the provider system 110 for use as training datasets”; Goodwin, Table 1, “Flag to note if application has a meets the Discrete Variable requirement that all its assets should meet the 200 miles distance criterion [metric data] between production servers and CP servers” Regarding claim 12: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 1.” Goodwin further teaches “receiving, by the first computing device, the severity matrix data”: Goodwin, col. 10, lines 1-4, “Database 130 may provide the provider system 110 with large sets of application data [receiving, by the first computing device] which may be filtered and processed by the provider system 110 for use as training datasets”; Goodwin, col. 4, lines 32-52, “Severity types may be created to classify the severity of events that may be caused by the failure of various applications. High severity events may be so rare and severe that they may be a proxy for information technology and operational disruption. In the present disclosure, severities are categorized along a spectrum from type 1 to type 5 in decreasing order of severity. Severity types 1 and 2 will be classified as high severity events, however it should be appreciated that other classifiers, ranks, and identifiers may be used to identify high severity events. Severity types 3 , 4 and 5 may be classified as low severity events. Severity type 5 events may be events with a low probability of information technology and operational disruption. For instance, severity type 5 events may cause inconveniences. An example of a severity type 5 event may be a PC or hard drive issue. Other types of severity classifiers, identifiers, and rankings may be created to describe and identify a source (such as an application) that has the ability to disrupt an enterprise (e.g., by failing and causing information technology and operational disruption) [severity matrix data].” Regarding claim 13: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 1.” Goodwin further teaches “receiving, by the second computing device, the new metric data”: Goodwin, col. 10, lines 1-4, “Database 130 may provide the provider system 110 with large sets of application data [receiving, by the second computing device] which may be filtered and processed by the provider system 110 for use as training datasets”; Goodwin, col. 6, lines 44-63, “In various embodiments, a predictive model may be trained to provide a classifier capable of, for example, accepting, as inputs, states or features of one or more applications in an enterprise IT system and provide, as outputs, probabilities of subsequent high severity events. The predictive model may be trained, for example, using a training dataset that includes features of applications that were previously involved in high severity events. In some embodiments, the model may be retrained on a regular basis (such as each time a new high severity event is detected) using a training dataset that additionally includes features of the applications involved in the new high severity event [receiving, by the second computing device, the new metric data]. In certain embodiments, the model may be retrained periodically (e.g., every week, month, quarter, or year). 
In some embodiments, the retraining may use training datasets that account for actual outcomes as compared with predicted likelihoods of high severity events. The parameters of the predictive model may be adjusted or updated based on new data that may include, for example, prior predictions, user reprioritizations, newly-added features and state data, etc.” Regarding claim 14: Goodwin teaches: “A method comprising”: Goodwin, col. 1, lines 39-40, “Various embodiments of the disclosed inventions relate to a computer-implemented method, comprising [A method comprising]:”; Goodwin, col. 24, lines 11-19, “Example computing systems and devices may include one or more processing units each with one or more processors, one or more memory units each with one or more memory devices, and one or more system buses that couple various components including memory units to processing units. Each memory device may include non-transient volatile storage media, non-volatile storage media, non-transitory storage media ( e.g., one or more volatile and/or non-volatile memories), etc.” “compiling, by a first computing device, ownership data, metric data, and severity matrix data as input data to a machine learning model data store”: Goodwin, col. 10, lines 1-4, “Database 130 may provide the provider system 110 with large sets of application data [ownership data, metric data, and severity matrix data] which may be filtered and processed [compiling] by the provider system 110 for use as training datasets [as input data to a machine learning model data store].” “wherein the ownership data comprises data representative of assets of an entity and data representative of relationships between the assets”: Goodwin, Table 1, “Flag to note if application has a CP Discrete Variable (coordination point) server assigned for all its assets [sample data item which is representative of assets of an entity and representative of relationships between the assets]” (bold only) “wherein the metric data comprises data representative of development operations tools metric data of the assets”: Goodwin, Table 1, “Flag to note if application has a meets the Discrete Variable requirement that all its assets should meet the 200 miles distance criterion [representative of … metric data of the assets] between production servers and CP servers” “and wherein the severity matrix data comprises a plurality of entries, wherein each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data”: Goodwin, Table 1, “ SEV12_FLAG; Flag to note if application had a severity 1 or 2 event; Response Variable SEV3_CNT; Number of severity 3 events in rolling 1 year time period; Continuous Variable SEV4_CNT; Number of severity 4 events in rolling 1 year time period; Continuous Variable SEV5_CNT; Number of severity 5 events in rolling 1 year time period; Continuous Variable SEV45_CNT; sum of severity 4 and 5 events in rolling 1 year time period; Continuous Variable [plurality of entries, wherein each entry comprises data representative of a severity of a consequence of a particular incident occurrence affecting the metric data].” “predicting, via the second machine learning model, a modification to a plurality of severity designations associated with an incident type associated with the identified entry”: Goodwin, col. 
6, lines 44-63, "In various embodiments, a predictive model may be trained to provide a classifier capable of, for example, accepting, as inputs, states or features of one or more applications in an enterprise IT system and provide, as outputs, probabilities of subsequent high severity events. The predictive model may be trained, for example, using a training dataset that includes features of applications that were previously involved in high severity events. In some embodiments, the model may be retrained on a regular basis (such as each time a new high severity event is detected) using a training dataset that additionally includes features of the applications involved in the new high severity event. In certain embodiments, the model may be retrained periodically (e.g., every week, month, quarter, or year). In some embodiments, the retraining may use training datasets that account for actual outcomes as compared with predicted likelihoods of high severity events. The parameters of the predictive model may be adjusted or updated based on new data that may include, for example, prior predictions, user reprioritizations, newly-added features and state data, etc. [predicting, via the second machine learning model, a modification to a plurality of severity designations associated with an incident type associated with the identified entry]" "the modification comprising data indicating: the incident type associated with the identified entry; a plurality of severity designations": Goodwin, col. 4, lines 32-42, "Severity types may be created to classify the severity of events that may be caused by the failure of various applications [the modification comprising data indicating: the incident type associated with the identified entry]. High severity events may be so rare and severe that they may be a proxy for information technology and operational disruption. In the present disclosure, severities are categorized along a spectrum from type 1 to type 5 in decreasing order of severity [a plurality of severity designations]. Severity types 1 and 2 will be classified as high severity events, however it should be appreciated that other classifiers, ranks, and identifiers may be used to identify high severity events." "modifying, based on user input associated with the predicted modification to the identified entry, the severity matrix data to include the predicted modification to the identified entry": Goodwin, col. 2, lines 36-45, "displaying the ranked probability of the high-severity event for a number of applications in the application set on one or more pages of a graphical user interface, the graphical user interface having one or more selectable graphical components; and in response to a user interacting with the one or more graphical components [based on user input associated with the predicted modification to the identified entry], modifying the number of displayed applications [modifying, based on user input associated with the predicted modification to the identified entry, the severity matrix data to include the predicted modification to the identified entry, interpreted as including modifying the display of an application's severity data], the ranking of each of the applications in the application set, and the probability of the high-severity event for one or more applications in the application set." "and causing, based on the user input associated with the predicted modification to the identified entry, at least one modification to the second machine learning model": Goodwin, col. 
21, lines 24-41, “In an embodiment, applications may be classified using the CAT score based on one or more user's experience. For example, user experience with an application may result in the user (or group of users) classifying the application as failing 60% of the time. Thus, the application may be classified as being a medium application. A different group may classify applications according to different experiences. For instance, the same application may be classified differently to a different group of users. For example, an application used every day may be considered a critical application to that group of users. Additionally or alternatively, users may manually classify and reclassify applications based on whether the application interfaces with one or more third parties [based on the user input associated with the predicted modification to the identified entry]. For example, users may classify applications that interact with third parties directly as critical application because the failure of the application may disrupt third party experiences with the application”; Goodwin, col. 6, lines 44-63, “In various embodiments, a predictive model may be trained to provide a classifier capable of, for example, accepting, as inputs, states or features of one or more applications in an enterprise IT system and provide, as outputs, probabilities of subsequent high severity events. The predictive model may be trained, for example, using a training dataset that includes features of applications that were previously involved in high severity events. In some embodiments, the model may be retrained on a regular basis (such as each time a new high severity event is detected) using a training dataset that additionally includes features of the applications involved in the new high severity event. In certain embodiments, the model may be retrained periodically (e.g., every week, month, quarter, or year). In some embodiments, the retraining may use training datasets that account for actual outcomes as compared with predicted likelihoods of high severity events [causing … at least one modification to the second machine learning model]. 
The parameters of the predictive model may be adjusted or updated based on new data that may include, for example, prior predictions, user reprioritizations, newly-added features and state data, etc.” Goodwin does not explicitly teach: (bold only) “wherein the metric data comprises data representative of development operations tools metric data of the assets” “identifying an entry of the development operations tools metric data for input to a second machine learning training model trained to recognize one or more relationships between the input data in the machine learning model data store and the identified entry” “an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the development operations tools metric data, each issue level comprising a different threshold amount” Bulut teaches: (bold only) “wherein the metric data comprises data representative of development operations tools metric data of the assets” and (bold only) “an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the development operations tools metric data, each issue level comprising a different threshold amount”: Bulut, paragraph 0062, “Metric assignment component 108 can employ such a model defined above (e.g., LSTM, GRU, CNN, etc.) to assign one or more of such risk assessment metrics defined above based on vulnerability data of a compliance process, where such a compliance process can include, but is not limited to: a security process; a patching process; an identity and access management process; a development and operations (DevOps) process [metric data comprises data representative of development operations tools metric data of the assets] [development operations tools metric data]; a development, security, and operations (DevSecOps) process; a runtime process; and/or another compliance process. In some embodiments, examples of such vulnerability data of such a compliance process can include, but is not limited to, vulnerability descriptions, vulnerability categories, and/or vulnerability scores corresponding to vulnerabilities ( e.g., defects) of the compliance process.” “identifying an entry of the development operations tools metric data for input to a second machine learning training model trained to recognize one or more relationships between the input data in the machine learning model data store and the identified entry”: Bulut, paragraph 0065, “In an example, metric assignment component 108 can employ model 300a described below with reference to FIG. 3A to assign one or more risk assessment metrics of one or more (e.g., different) compliance process vulnerability scoring systems based on such vulnerability data defined above of one or more (e.g., different) compliance processes. In another example, metric assignment component 108 can employ model 300b described below with reference to FIG. 3b to assign one or more risk assessment metrics of one or more (e.g., different) compliance process vulnerability scoring systems based on such vulnerability data defined above of one or more ( e.g., different) compliance processes. 
In this example, metric assignment component 108 can employ model 300b to assign such one or more risk assessment metrics [identifying an entry of the development operations tools metric data] based on transfer learning, where model 300b can learn to assign risk assessment metrics from a certain compliance process vulnerability scoring system using information it has learned previously in assigning risk assessment metrics from other compliance process vulnerability scoring systems [input to a second machine learning training model trained to recognize one or more relationships between the input data in the machine learning model data store and the identified entry]. In another example, metric assignment component 108 can employ deep learning model 412 described below with reference to FIG. 4C to assign one or more risk assessment metrics of one or more ( e.g., different compliance process vulnerability scoring systems based on such vulnerability data defined above of one or more (e.g., different) compliance processes." Bulut and Goodwin are analogous arts as they are both related to risk assessment of IT processes. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the use of development operations tools data of Bulut with the teachings of Goodwin to arrive at the present invention, in order to help assess the vulnerabilities of the system under analysis, as stated in Bulut, paragraph 0062, "Metric assignment component 108 can employ such a model defined above (e.g., LSTM, GRU, CNN, etc.) to assign one or more of such risk assessment metrics defined above based on vulnerability data of a compliance process, where such a compliance process can include, but is not limited to: a security process; a patching process; an identity and access management process; a development and operations (DevOps) process; a development, security, and operations (DevSecOps) process; a runtime process; and/or another compliance process. In some embodiments, examples of such vulnerability data of such a compliance process can include, but is not limited to, vulnerability descriptions, vulnerability categories, and/or vulnerability scores corresponding to vulnerabilities ( e.g., defects) of the compliance process." Panigrahi teaches (bold only) "an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the development operations tools metric data, each issue level comprising a different threshold amount": Panigrahi, paragraph 0053, "In the block 415, the computer 105 determines an impact severity for the wheel 155 determined in the block 410 to have experienced an impact. An impact severity describes a degree of significance or severity of a wheel impact. The impact severity can be quantified, e.g., expressed according to a numeric scale. In Table 4 below, which provides an example of conditions that can be applied by the computer 105 to determine impact severity, the impact severity is ranked on a scale of zero to five. Units of acceleration in the below table are standard gravities, sometimes notated "G" or "g." 
The expression blim refers to an empirically established brake torque limit, e.g., by subjecting wheels to specified impacts and noting brake torques”; Panigrahi, Table 4 [showing impact severities (issue levels) categorized by condition thresholds, hence, an issue level associated with each of the plurality of severity designations, wherein the issue level indicates a threshold amount affected by the incident type in relation to the … metric data, each issue level comprising a different threshold amount]. Panigrahi and Goodwin are analogous arts as they are both related to severity analysis. It would have been obvious to a person having ordinary skill in the art prior to the effective filing date of the claimed invention to have combined the threshold-conditioned issue levels of Panigrahi with the teachings of Goodwin to arrive at the present invention, in order to make decisions based on a severity issue level, as stated in Panigrahi, paragraph 0013, "Transmitting the message can include transmitting the message to a selected controller based on an impact severity level and an identity of the wheel impacted. The controller, based on receiving the message, can adjust monitoring of the vehicle component. Adjusting monitoring can include an adjustment based on an impact severity level and the identity of the wheel impacted." Regarding claim 15: Goodwin as modified by Bulut and Panigrahi teaches "The method of claim 14." Goodwin further teaches "wherein the user input indicates a confirmation of modifying the identified entry": Goodwin, col. 2, lines 36-45, "displaying the ranked probability of the high-severity event for a number of applications in the application set on one or more pages of a graphical user interface, the graphical user interface having one or more selectable graphical components; and in response to a user interacting with the one or more graphical components [the user input indicates a confirmation], modifying the number of displayed applications [modifying the identified entry, interpreted as including modifying the display of an application's severity data], the ranking of each of the applications in the application set, and the probability of the high-severity event for one or more applications in the application set." Regarding claim 17: Goodwin as modified by Bulut and Panigrahi teaches "The method of claim 14." Goodwin further teaches "predicting the modification comprises identifying one or more specific characteristics of the identified entry and other entries within the severity matrix data": Goodwin, col. 16, lines 36-42, "In various embodiments, step 210-1 may be performed by the provider system 110 to determine parameters for the independent variables. Parameters may be determined to tune the independent variables ( e.g., the features most indicative of high severity events that were determined in process 208 [identifying one or more specific characteristics of the identified entry and other entries within the severity matrix data]) using any appropriate technique of tuning parameters." Regarding claim 18: Goodwin as modified by Bulut and Panigrahi teaches "The method of claim 14." Goodwin further teaches "wherein the user input indicates a change to the predicted modification to the identified entry to the severity matrix data": Goodwin, col. 21, lines 24-41, "In an embodiment, applications may be classified using the CAT score based on one or more user's experience. 
For example, user experience with an application may result in the user (or group of users) classifying the application as failing 60% of the time. Thus, the application may be classified as being a medium application. A different group may classify applications according to different experiences. For instance, the same application may be classified differently to a different group of users. For example, an application used every day may be considered a critical application to that group of users. Additionally or alternatively, users may manually classify and reclassify applications based on whether the application interfaces with one or more third parties [wherein the user input indicates a change to the predicted modification to the identified entry to the severity matrix data]. For example, users may classify applications that interact with third parties directly as critical application because the failure of the application may disrupt third party experiences with the application.” Regarding claim 21: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 1.” Goodwin further teaches “outputting a notification of the modification to the severity matrix data comprising the new entry”: Goodwin, col. 2, lines 23-45, “Various embodiments of the disclosed inventions relate to a computer-implemented method, comprising: determining, based on a received trigger, a probability of a future event for each application in an application set, wherein determining the probability of the future event for each application in the application set comprises feeding one or more features to a predictive model, the one or more features corresponding to features of each of the applications in the application set, the predictive model tuned to receive the one or more features corresponding to applications in the application set and provide the probability of the future event for each application in the application set; ranking the probability of the high-severity event for each application in the application set according to the predictive model; displaying the ranked probability of the high-severity event for a number of applications in the application set on one or more pages of a graphical user interface [outputting a notification of the modification to the severity matrix data comprising the predicted new entry], the graphical user interface having one or more selectable graphical components; and in response to a user interacting with the one or more graphical components, modifying the number of displayed applications, the ranking of each of the applications in the application set, and the probability of the high-severity event for one or more applications in the application set.” Regarding claim 22: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 14.” Goodwin further teaches “outputting a notification of the modification to the severity matrix data comprising the predicted modification to the identified entry”: Goodwin, col. 
2, lines 23-45, “Various embodiments of the disclosed inventions relate to a computer-implemented method, comprising: determining, based on a received trigger, a probability of a future event for each application in an application set, wherein determining the probability of the future event for each application in the application set comprises feeding one or more features to a predictive model, the one or more features corresponding to features of each of the applications in the application set, the predictive model tuned to receive the one or more features corresponding to applications in the application set and provide the probability of the future event for each application in the application set; ranking the probability of the high-severity event for each application in the application set according to the predictive model; displaying the ranked probability of the high-severity event for a number of applications in the application set on one or more pages of a graphical user interface [outputting a notification of the modification to the severity matrix data comprising the predicted modification to the identified entry], the graphical user interface having one or more selectable graphical components; and in response to a user interacting with the one or more graphical components, modifying the number of displayed applications, the ranking of each of the applications in the application set, and the probability of the high-severity event for one or more applications in the application set.” Regarding claim 23: Goodwin as modified by Bulut and Panigrahi teaches “The method of claim 18.” Goodwin further teaches “wherein modifying the severity matrix data includes the change to the predicted modification to the identified entry”: Goodwin, col. 21, lines 24-41, “In an embodiment, applications may be classified using the CAT score based on one or more user's experience. For example, user experience with an application may result in the user (or group of users) classifying the application as failing 60% of the time. Thus, the application may be classified as being a medium application. A different group may classify applications according to different experiences. For instance, the same application may be classified differently to a different group of users. For example, an application used every day may be considered a critical application to that group of users. Additionally or alternatively, users may manually classify and reclassify [wherein modifying the severity matrix data includes the change to the predicted modification to the identified entry] applications based on whether the application interfaces with one or more third parties. For example, users may classify applications that interact with third parties directly as critical application because the failure of the application may disrupt third party experiences with the application.” Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Brown et al., US Pre-Grant Publication No. 2021/0037028, discloses a method in which scores associated with incidents are binned into a fixed range of categories, similar to assigning an issue level based on a threshold. Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT SPRAUL whose telephone number is (703) 756-1511. The examiner can normally be reached M-F 9:00 am - 5:00 pm. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MICHAEL HUNTLEY can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /VAS/ Examiner, Art Unit 2129 /MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129
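
For readers mapping the claim language discussed above onto an implementation, the following is a minimal, hypothetical sketch of the severity matrix entry the rejection characterizes: an incident type, a plurality of severity designations, and an issue level for each designation, with each issue level tied to a different threshold amount measured against a development operations tools metric. The class names, metric, and threshold values are illustrative assumptions only; they are not drawn from the claims as filed or from Goodwin, Bulut, or Panigrahi.

# Illustrative sketch only (Python). Models a single severity-matrix entry with
# threshold-based issue levels; the names, metric, and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class IssueLevel:
    designation: str   # e.g., "SEV1" (most severe) through "SEV5" (least severe)
    threshold: float   # distinct threshold amount evaluated against the metric

@dataclass
class SeverityMatrixEntry:
    incident_type: str
    issue_levels: list[IssueLevel] = field(default_factory=list)

    def designation_for(self, metric_value: float) -> str:
        # Return the most severe designation whose threshold the metric meets.
        met = [lvl for lvl in sorted(self.issue_levels, key=lambda l: l.threshold, reverse=True)
               if metric_value >= lvl.threshold]
        return met[0].designation if met else self.issue_levels[-1].designation

# Hypothetical entry: a deployment-failure-rate metric mapped to five designations,
# each with a different threshold amount (the same pattern Panigrahi's Table 4
# applies to acceleration and brake-torque conditions on a zero-to-five scale).
entry = SeverityMatrixEntry(
    incident_type="deployment_failure_rate",
    issue_levels=[
        IssueLevel("SEV1", 0.50),
        IssueLevel("SEV2", 0.25),
        IssueLevel("SEV3", 0.10),
        IssueLevel("SEV4", 0.05),
        IssueLevel("SEV5", 0.00),
    ],
)
print(entry.designation_for(0.30))  # -> "SEV2"

Sorting thresholds in descending order means the most severe designation whose threshold is met wins, so each issue level effectively occupies a distinct band of metric values.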

Prosecution Timeline

May 04, 2022
Application Filed
Jun 05, 2025
Non-Final Rejection — §101, §103
Jul 29, 2025
Examiner Interview Summary
Jul 29, 2025
Applicant Interview (Telephonic)
Aug 29, 2025
Response Filed
Sep 18, 2025
Final Rejection — §101, §103
Nov 04, 2025
Applicant Interview (Telephonic)
Nov 04, 2025
Examiner Interview Summary
Nov 19, 2025
Response after Non-Final Action
Dec 08, 2025
Request for Continued Examination
Dec 18, 2025
Response after Non-Final Action
Feb 20, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12591634
COMPOSITE EMBEDDING SYSTEMS AND METHODS FOR MULTI-LEVEL GRANULARITY SIMILARITY RELEVANCE SCORING
2y 5m to grant · Granted Mar 31, 2026
Patent 12591796
INTELLIGENT DISTANCE PROMPTING
2y 5m to grant · Granted Mar 31, 2026
Patent 12572620
RELIABLE INFERENCE OF A MACHINE LEARNING MODEL
2y 5m to grant · Granted Mar 10, 2026
Patent 12566974
Method, System, and Computer Program Product for Knowledge Graph Based Embedding, Explainability, and/or Multi-Task Learning
2y 5m to grant · Granted Mar 03, 2026
Patent 12547616
SEMANTIC REASONING FOR TABULAR QUESTION ANSWERING
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
59%
Grant Probability
94%
With Interview (+34.7%)
4y 6m
Median Time to Grant
High
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
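
The figures above can be reproduced from the examiner statistics reported earlier on this page (20 granted of 34 resolved cases; +34.7% interview lift). The page does not state its exact adjustment formula, so the sketch below assumes the interview lift is applied as an additive percentage-point adjustment capped at 100%; under that assumption it matches the displayed 59% and 94% values.

# Illustrative sketch only (Python): reproducing the displayed projections from the
# examiner's career counts. The additive-lift formula is an assumption, not the
# report's documented method.
granted, resolved = 20, 34
interview_lift_points = 34.7

grant_probability = granted / resolved                                     # 0.588...
with_interview = min(grant_probability + interview_lift_points / 100, 1.0) # 0.935...

print(f"Grant probability: {grant_probability:.0%}")   # 59%
print(f"With interview:    {with_interview:.0%}")      # 94%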
