Prosecution Insights
Last updated: April 19, 2026
Application No. 18/111,293

INFORMATION TECHNOLOGY ISSUE SCORING AND VERSION RECOMMENDATION

Status: Non-Final OA (§103)
Filed: Feb 17, 2023
Examiner: RUSIN, KAYO LISA
Art Unit: 2114
Tech Center: 2100 — Computer Architecture & Software
Assignee: Bugzero Inc.
OA Round: 3 (Non-Final)

Grant Probability: 91% (Favorable)
OA Rounds: 3-4
To Grant: 2y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 91% (21 granted / 23 resolved; +36.3% vs TC avg; above average)
Interview Lift: +13.3% (moderate), among resolved cases with interview
Avg Prosecution: 2y 3m (typical timeline)
Currently Pending: 10
Total Applications: 33 (career history, across all art units)

Statute-Specific Performance

§101: 15.3% (-24.7% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 26.1% (-13.9% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 23 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending. Claim 21 is added. Claim 2 is canceled. Claims 1 and 3-21 are rejected.

Response to Arguments

Applicant's arguments filed September 22, 2025 have been fully considered.

Applicant's arguments with respect to 35 U.S.C. 112(a) have been fully considered and are persuasive. Although "unstructured issue information" is not explicitly mentioned in the Specification, it is implied in its collection of data from disparate sources and the need to generate an issue data structure after the collection. The rejection has been withdrawn.

With regard to the arguments pertaining to 35 U.S.C. 112(b), the necessary amendments have been made to clarify which "received indication" the claim limitation refers to. The rejection has been withdrawn.

With regard to the arguments pertaining to 35 U.S.C. 103, the argument is not persuasive and the Examiner maintains the rejection. The Applicant argues that the prior art, Wang, fails to teach or suggest at least "a set of issues associated with [a] version," but the Examiner disagrees. Rao teaches this at Col 18, lines 24-28: "…with any of the updates described above, a set of parameters may be associated with any given update related to any particular component of the software deployment."
Rao continues at Col 18, lines 30-52: "By way of further example, parameters may include, but are not limited to, a type or subtype… criticality of an affected application or component… severity (e.g., extent of a security vulnerability, weakness, threat, or other potential or actual compromise); accessibility of a given service, application, or component, such as with respect to a corresponding security issue (e.g., attack surface, threat vector, etc.); data classification (e.g., level of access and/or sensitivity of potentially affected data); compliance factor… level of risk, including a likelihood of failure, likelihood of security compromise, severity of failure or compromise… or other degree of significance of the update as may be involved in calculating a potential level of risk, in some embodiments."

As taught in Wang, it is industry standard to evaluate the confidentiality, integrity, and availability metrics as part of the vulnerability assessment. These component scores are eventually combined into an adjusted base score (Wang, page 165).

The Applicant also argues that Wang fails to teach different issue types in which specific types of issues impact the metrics/scores differently; however, the claim language merely recites identifying "a confidentiality issue," "an integrity issue," and/or "an availability issue." Because Wang teaches identifying vulnerability issues and assigning sub-scores to the confidentiality, integrity, and availability components, under the broadest reasonable interpretation Wang does teach the claim limitation. Furthermore, because Wang teaches treating confidentiality, availability, and integrity as separate impacts by isolating them in separate, individual variables, changing the claim language to encompass "issue type" does not overcome this rejection.
A person of ordinary skill in the art prior to the effective filing date of the claimed invention could label identified issues based on the impact dimension most strongly affected, for example classifying issues with a predominant confidentiality impact as confidentiality issues.

As per claims 8 and 14, they recite claim language similar to that of claim 1 and thus are rejected for similar reasons. As per the dependent claims, they inherit the qualities of their parent claims, and thus the dependent claims are also rejected for similar reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 3-21 are rejected under 35 U.S.C. 103 as being unpatentable over Rao (US Patent 11,947,946 B1), henceforth referred to as Rao, in view of Wang (Environmental Metrics for Software Security Based on a Vulnerability Ontology, 2009), henceforth referred to as Wang.
Regarding claim 1, Rao teaches a system comprising: at least one processor (Fig 23, 2302, "Processing device"); and memory (Fig 23, 2304, "Main Memory") storing instructions (Fig 23, 2326, "Instructions") that, when executed by the at least one processor, causes the system to perform a set of operations ("The processing device 2302 may be configured to execute instructions 2326 for performing the operations and steps described herein" (Col 23 Line 34-35)), the set of operations comprising: …a set of data sources… (col 10, line 9-13: data sources 560 may include an issue tracker, ticket system, etc.);

obtaining customer information indicating at least one of hardware or software, wherein the at least one of hardware or software has a version corresponding to the at least one of hardware or software (Rao teaches obtaining customer information through "updates" and receiving the customer information as part of the "parameter" associated with the update: "to receive such updates, per 1802, processor 2302 may be configured to query or poll at least one service" (Col 17 Line 55-56) and "a set of parameters may be associated with any given update" (Col 18 Line 25-26). Additionally, "parameters and/or corresponding data may be derived from tags, properties, attributes, metrics, or analytics associated with […] other mutable or immutable characteristics of a given component of a software deployment" (Col 19 Line 3-14). For instance, at Col 17 Line 63-Col 18 Line 7, Rao provides an example in which references to specific software packages that can be updated are stored in a data structure, so that these references can later be used to indicate a specific software version. Similarly, an indication of a specific version of a software can be stored in the parameter and passed along as part of an update);

and providing an indication of the aggregated score for the version (Fig 11, "Total risk score": the total risk score, which constitutes an aggregate score, is shown to the user).

[Image: media_image1.png]

Rao fails to teach generating, based on unstructured issue information from …, one or more issue data structures for an issue data store; identifying, from the issue data store, a set of issues associated with the version, wherein the set of issues includes a first issue that is one of a confidentiality issue, an integrity issue, or an availability issue and a second issue that is another one of a confidentiality issue, an integrity issue, or an availability issue, the second issue thus having a different issue type than the first issue; and generating an aggregated score for the version based on the set of issues associated with the version, wherein each of the first issue and the second issue has a respective score with which the aggregated score is generated.

However, Wang teaches generating, based on unstructured issue information …, one or more issue data structures for an issue data store (page 2, section 2, Evaluating Software Trustworthiness, second paragraph; the gathered vulnerability information is structured into the ontology called OVM, Ontology for Vulnerability Management); identifying, from the issue data store, a set of issues associated with the version, wherein the set of issues includes a first issue that is one of a confidentiality issue, an integrity issue, or an availability issue, and a second issue that is another one of a confidentiality issue, an integrity issue, or an availability issue, the second issue thus having a different issue type than the first issue (page 8, section 1.1: issues associated with the specific version are retrieved, and the issues relate to availability, confidentiality, and integrity as depicted in its metrics: ConfImpact, IntegImpact, AvailImpact); and generating an aggregated score for the version based on the set of issues associated with the version, wherein each of the first issue and the second issue has a respective score with which the aggregated score is generated (page 8, sections 1.1-1.5; the EnvironmentalScore signifies the aggregate score that has been generated. This score is generated based on each issue having a rating of "P" for Partial, "N" for None, or "C" for Complete. The Examiner's interpretation is that P, N, and C can readily be substituted with numerical scores such as 1, 0, and 2, respectively).

Wang and Rao are analogous art that both teach accessing data and displaying it to users. Wang specifically teaches the method of retrieving data from the NVD and using the CVSS scoring system. Rao is a system that has the capability of accessing data from a multitude of external databases (not only the NVD), but it makes sense for the system to use the NVD as a data source for vulnerability data since, as stated in Wang, the NVD provides "standardized information regarding existing vulnerabilities for most of the software products available today" (Wang, pg. 160). Similarly, it makes sense for Rao to use CVSS as a scoring tool when accessing the vulnerability data since the scoring tool is already integrated into the NVD, and integrated features and functionalities are oftentimes more convenient to use. Because CVSS scores include Confidentiality Impact (CI), Integrity Impact (II), and Availability Impact (AI) score components in its evaluations, it is reasonable to assume that the CVSS evaluates issues based on categories such as confidentiality, integrity, and availability.
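As a rough illustration of the scoring interpretation discussed above, the Examiner's reading of Wang's P/N/C ratings as numeric scores might be sketched as follows. This is a hypothetical sketch; the function names and the summing rule are assumptions, not taken from either reference.

```python
# Illustrative sketch (not from Rao or Wang): Wang's impact ratings,
# "N" (None), "P" (Partial), "C" (Complete), substituted with numeric
# scores 0, 1, and 2 per the Examiner's interpretation, then combined
# into an aggregated score for a software version.

RATING_SCORES = {"N": 0, "P": 1, "C": 2}

def issue_score(conf_impact, integ_impact, avail_impact):
    """Score one issue from its ConfImpact/IntegImpact/AvailImpact ratings."""
    return (RATING_SCORES[conf_impact]
            + RATING_SCORES[integ_impact]
            + RATING_SCORES[avail_impact])

def aggregated_score(issues):
    """Combine (here: sum) the respective score of each issue for the version."""
    return sum(issue_score(*ratings) for ratings in issues)

# Two issues of different types: a confidentiality issue and an availability issue.
issues = [("C", "N", "N"),  # first issue: complete confidentiality impact
          ("N", "N", "P")]  # second issue: partial availability impact
print(aggregated_score(issues))  # 3
```

The choice of summation is only one of the combinations Rao contemplates (adding, averaging, or otherwise summarizing multiple evaluation results).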
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rao to incorporate the teachings of Wang to use the NVD as a way to identify a set of issues associated with the version, wherein the set of issues includes two or more of a confidentiality issue, an integrity issue, and an availability issue.

Regarding Claim 3, Rao in view of Wang teaches the system of Claim 1. Rao further teaches wherein the aggregated score is generated based on one or more of: a confidentiality score component for the confidentiality issue; an integrity score component for the integrity issue; or an availability score component for the availability issue (as mentioned earlier, security issues can be broken down into categories such as confidentiality, integrity, and availability (Col 11 Line 1-8). The aggregate score is generated by "combining (e.g., adding, averaging, statistically representing, or otherwise summarizing) multiple evaluation results into one evaluation result" (Col 11, Line 9-11)).

[Image: media_image2.png]

Regarding Claim 4, Rao in view of Wang teaches the system of Claim 3. Rao further teaches wherein the aggregated score is generated based on a set of user-configured weights including at least one of: a first weight for the confidentiality score component; a second weight for the integrity score component; or a third weight for the availability score component (any of these calculated scores "may have predetermined or customized quantitative values or weights assigned for purposes of evaluating specific parameters at a given time of 1806" (Col 19 Line 15-27); see FIG 21. Furthermore, "such qualitative and/or quantitative values or weights may be processed accordingly via at least one algorithm to generate at least one evaluation result" (Col 19 Line 25-27)).

Regarding Claim 5, Rao in view of Wang teaches the system of Claim 1.
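The user-configured weighting discussed for Claim 4 above might be sketched as follows; the weight values and the function name are illustrative assumptions, not taken from Rao.

```python
# Hypothetical sketch of weighting C/I/A score components with
# user-configured weights; the specific values are assumptions.

def weighted_aggregate(components, weights):
    """Combine per-component scores using a per-component weight."""
    return sum(weights[name] * score for name, score in components.items())

components = {"confidentiality": 2.0, "integrity": 1.0, "availability": 0.0}
weights = {"confidentiality": 0.5, "integrity": 0.3, "availability": 0.2}
print(weighted_aggregate(components, weights))  # 1.3
```

A user who cares most about confidentiality simply assigns it the largest weight; the combination itself stays a plain weighted sum.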
Wang further teaches wherein the set of issues relates to a vulnerability for the version and an operational defect for the version (Wang, page 8, table of Vulnerabilities in case 1 for Mozilla Firefox 3, outlines an example in which the set of issues relates to a vulnerability for the version, including the impact to confidentiality, integrity, and availability, all of which can affect the operational effectiveness of the version. The BRI of "operational defect" is any defect, problem, or issue within a system, application, or product that affects operational effectiveness in a negative manner).

Regarding Claim 6, Rao in view of Wang teaches the system of Claim 1. Rao further teaches wherein the set of data sources comprises two or more of: a vendor of the hardware or a software; a centralized data source; or a crowd-sourced data source (col 10, line 9-13: data sources 560 may include an issue tracker, ticket system, etc.).

Regarding Claim 7, Rao in view of Wang teaches the system of Claim 1. Rao further teaches wherein: the customer information (stored in "parameters") is obtained as part of a request (an "update"), from a computing device, for an issue score associated with the at least one of hardware or software (FIG 5, "Feedback (Communication) 540"; Col 9 Line 62-66: "Feedback 540 may include any of various forms of communication, such as messaging 542 between other tools…" Feedback can be used to facilitate communication between the computing device and this invention in order to obtain the customer information from the specific request sent by the computing device); and the set of operations further comprises providing the indication of the aggregated score (Fig 9: "Total Score") for the version to the computing device in response to the request (FIG 5, "Feedback (Communication) 540"; Col 9 Line 62-66: "Feedback 540 may include any of various forms of communication, such as messaging 542 between other tools…" Similarly, Feedback can be used to send an indication of the resulting aggregated score back to the computing device).

Regarding Claim 8, Rao teaches a system comprising: at least one processor (Fig 23, 2302, "Processing device"); and memory (Fig 23, 2304, "Main Memory") storing instructions (Fig 23, 2326, "Instructions") that, when executed by the at least one processor, causes the system to perform a set of operations, the set of operations comprising: receiving a recommendation request (a specific "update" (Col 16 Line 35-40) which may contain a "parameter" (Col 19 Line 3-14) that indicates the type of "update" as a recommendation) comprising an indication of at least one of computer software or computer hardware (Rao teaches obtaining customer information through "updates" and receiving the customer information as part of the "parameter" associated with the update: "to receive such updates, per 1802, processor 2302 may be configured to query or poll at least one service" (Col 17 Line 55-56) and "a set of parameters may be associated with any given update" (Col 18 Line 25-26). Additionally, "parameters and/or corresponding data may be derived from tags, properties, attributes, metrics, or analytics associated with […] other mutable or immutable characteristics of a given component of a software deployment" (Col 19 Line 3-14). For instance, at Col 17 Line 63-Col 18 Line 7, Rao provides an example in which references to specific software packages that can be updated are stored in a data structure, so that these references can later be used to indicate a specific software version.
Similarly, an indication of a specific version of a software can be stored in the parameter and passed along as part of an update); …a set of data sources… (col 10, line 9-13: data sources 560 may include an issue tracker, ticket system, etc.); and providing an indication of a highest-ranked version from the ranked set of versions ("Feedback 540 may include any of various forms of communication, such as messaging 542 between other tools or stages of DevSecOps pipeline 100 or DevSecOps architecture 500, and/or notifications 544 via channels for organizations or users." Feedback can be used to communicate and provide the indication of a highest-ranked version to other tools or to users (Col 9 Line 61-66)).

Rao fails to teach identifying, based on the recommendation request, a set of versions that are each associated with an issue within an issue data store, wherein issues of the issue data store were generated based on unstructured issue information …; generating, for each version of the set of versions, an aggregated score based on a set of issues of the issue data store that are each associated with the version, wherein the set of issues comprises at least a first issue of a first issue type and a second issue of a second issue type different than the first issue type; and ranking the set of versions based on an associated aggregated score for each version.

However, Wang teaches identifying, based on the recommendation request, a set of versions that are each associated with an issue within an issue data store (Wang gives an example of populating a vulnerability ontology in which "software products that belong to the same product category" are grouped together by their object property 'hasProductCategory' or 'hasProductInstance' (Wang, pg. 163). This allows for a mechanism to identify a set of versions of similar products (for instance, "Internet Explorer 7, Opera Browser 9, Apple Safari 4, etc.") from a given indication of a software version (for instance, "Mozilla Firefox 3") (Wang, pg. 165)), wherein issues of the issue data store were generated based on unstructured issue information … (page 2, section 2, Evaluating Software Trustworthiness, second paragraph; the gathered vulnerability information is structured into the ontology called OVM, Ontology for Vulnerability Management);

[Image: media_image3.png]

generating, for each version of the set of versions, an aggregated score based on a set of issues of the issue data store that are each associated with the version, wherein the set of issues comprises at least a first issue of a first issue type and a second issue of a second issue type different than the first issue type (Wang provides a calculation "to calculate the overall environment score for each product series," in which "overall environment score" refers to the vulnerability score of the software version in a given environment (Wang, pg. 166)); and ranking the set of versions based on an associated aggregated score for each version (Wang orders the set of versions based on an associated aggregated score: "the higher the overall score, the less secure the product with regard to the given environment. By comparing the overall EnvironmentScore of Mozilla Firefox 3 (8.3936) and Microsoft Internet Explorer 7 (9.0286), we can obtain a conclusion that, for our specific environment, Mozilla Firefox 3 is more secure than Microsoft Internet Explorer 7" (Wang, pg. 168)).

Wang and Rao are analogous art, and both teach a system that displays relevant vulnerability data to users. One of Rao's data source options is a configuration management database (Rao, FIG. 5, "Configuration Management Database 566"), which can utilize an ontology as a way to organize its managed software. Wang teaches a specific ontology to accomplish this, and by doing so, the user is able to identify a set of versions related to a given version. It is also worth noting that it makes sense, when given a set of versions, to provide an analysis for each version and rank them in a specific order, since the user is most likely interested in knowing which version is the best to consider. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rao to incorporate the teachings of Wang to add the ability to use an ontology as a way to identify a set of related versions from a given version, generating an aggregated score for each version and ranking the versions based on the associated aggregated score for each version.

Regarding Claim 9, Rao in view of Wang teaches the system of Claim 8. Rao further teaches wherein providing the indication of the highest-ranked version further comprises providing an indication of an aggregated score for the highest-ranked version ("Insight 520 may reveal some or all of the inputs (e.g., updates and/or risk-based criteria), outputs, or intermediate representations" (Col 9 Line 42-45), and "Feedback 540 may include any of various forms of communications, such as messaging 542 between other tools or stages of DevOps pipeline 100 or DevSecOps architecture 500, and/or notifications 544 via channels for organizations or users" (Col 9 Line 62-66)).

Regarding Claim 10, Rao in view of Wang teaches the system of Claim 8. Rao further teaches wherein providing the indication of the highest-ranked version further comprises providing an indication of a set of score components used to generate the aggregated score for the highest-ranked version ("Insight 520 may reveal some or all of the inputs (e.g., updates and/or risk-based criteria), outputs, or intermediate representations" (Col 9 Line 42-45).
And "Feedback 540 may include any of various forms of communications, such as messaging 542 between other tools or stages of DevOps pipeline 100 or DevSecOps architecture 500, and/or notifications 544 via channels for organizations or users" (Col 9 Line 62-66)).

Regarding Claim 11, Rao in view of Wang teaches the system of Claim 8. Rao further teaches wherein generating the aggregated score for each version comprises: generating the aggregated score based on a set of user-configurable weights, wherein each weight of the set of user-configurable weights corresponds to a score component of the set of score components (any of these calculated scores "may have predetermined or customized quantitative values or weights assigned for purposes of evaluating specific parameters at a given time of 1806" (Col 19 Line 15-27); see FIG 21. Furthermore, "such qualitative and/or quantitative values or weights may be processed accordingly via at least one algorithm to generate at least one evaluation result" (Col 19 Line 25-27)).

Rao fails to teach determining a set of score components comprising two or more of: a confidentiality score component for the version; an integrity score component for the version; and an availability score component for the version. However, Wang teaches determining a set of score components comprising two or more of: a confidentiality score component for the version; an integrity score component for the version; and an availability score component for the version (Wang utilizes data from the NVD in order to identify vulnerability issues: "NVD could be automatically populated as an instance of the Vulnerability concept […] NVD also integrates CVSS [1] as impact metrics…" (Wang, pg. 162), and the CVSS base score includes Confidentiality Impact (CI), Integrity Impact (II), and Availability Impact (AI), which means the generated vulnerability issues regard confidentiality, integrity, and availability (Wang, pg. 164)).

Wang and Rao are analogous art that both teach accessing data and displaying it to users. Wang specifically teaches the method of retrieving data from the NVD and using the CVSS scoring system. Rao is a system that has the capability of accessing data from a multitude of external databases (not only the NVD), but it makes sense for the system to use the NVD as a data source for vulnerability data since, as stated in Wang, the NVD provides "standardized information regarding existing vulnerabilities for most of the software products available today" (Wang, pg. 160). Similarly, it makes sense for Rao to use CVSS as a scoring tool when accessing the vulnerability data since the scoring tool is already integrated into the NVD, and integrated features and functionalities are oftentimes more convenient to use. Because CVSS scores include Confidentiality Impact (CI), Integrity Impact (II), and Availability Impact (AI) score components in its evaluations, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rao to incorporate the teachings of Wang to add the CVSS as a way to determine a set of score components including a confidentiality score component, an integrity score component, and an availability score component for the version.

Regarding Claim 12, Rao in view of Wang teaches the system of Claim 8. Rao further teaches wherein the indication of at least one of computer software or computer hardware is a first received indication and the set of operations further comprises: receiving a second indication to perform an action based on the provided indication ("In 1808, processor 2302 may be configured to perform at least one action of the set of actions in response to the update."
(Col 20 Line 17-20)); and in response to the second received indication, automatically performing at least one action of: patching an instance of software; upgrading an instance of software; downgrading an instance of software; disabling a service; generating a knowledge article in a known error database comprising an indication to avoid functionality; or moving a workload to a different computing device ("…the evaluation of elevated risk of compromise for a particular component may result in determining the set of actions that may be taken with respect to that same component, such as testing, patching, upgrading, omitting, substituting, or any combination thereof." (Col 20 Line 65-Col 21 Line 3); "the at least one action to be performed in 1808 may be selected…" (Col 21, line 14-17)).

Regarding Claim 13, Rao in view of Wang teaches the system of Claim 12. Rao further teaches wherein the at least one action is performed in response to receiving approval from a user to perform the at least one action (FIG 13).

[Image: media_image4.png]

Regarding Claim 14, Rao teaches a method for managing at least one of hardware or software of an environment, the method comprising: …a set of data sources… (col 10, line 9-13: data sources 560 may include an issue tracker, ticket system, etc.); receiving, from a computing device, a score request (an "update" for a score) for at least one of hardware or software of the environment, wherein the at least one of hardware or software has a version corresponding to the at least one of hardware or software (Rao teaches obtaining customer information through "updates" and receiving the customer information as part of the "parameter" associated with the update: "to receive such updates, per 1802, processor 2302 may be configured to query or poll at least one service" (Col 17 Line 55-56) and "a set of parameters may be associated with any given update" (Col 18 Line 25-26). Additionally, "parameters and/or corresponding data may be derived from tags, properties, attributes, metrics, or analytics associated with […] other mutable or immutable characteristics of a given component of a software deployment" (Col 19 Line 3-14). For instance, at Col 17 Line 63-Col 18 Line 7, Rao provides an example in which references to specific software packages that can be updated are stored in a data structure, so that these references can later be used to indicate a specific software version. Similarly, an indication of a specific version of a software can be stored in the parameter and passed along as part of an update); and providing, to the computing device in response to the score request, an indication of the aggregated score for the version ("Feedback 540 may include any of various forms of communication, such as messaging 542 between other tools or stages of DevSecOps pipeline 100 or DevSecOps architecture 500, and/or notifications 544 via channels for organizations or users." Feedback can be used to communicate and provide the indication of the aggregated score to other tools or to users).
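Wang's version comparison cited earlier (overall EnvironmentScore of 8.3936 for Mozilla Firefox 3 versus 9.0286 for Microsoft Internet Explorer 7, with a lower score meaning a more secure product) amounts to sorting candidate versions by aggregated score. A minimal sketch, with the variable names assumed:

```python
# Minimal sketch of ranking versions by Wang's overall EnvironmentScore.
# Lower score = more secure, so the most secure version ranks first.
env_scores = {
    "Mozilla Firefox 3": 8.3936,
    "Microsoft Internet Explorer 7": 9.0286,
}

ranked = sorted(env_scores, key=env_scores.get)
print(ranked[0])  # Mozilla Firefox 3
```

The highest-ranked version (here, the lowest-scoring one) is what a recommendation step like that of claim 8 would then report back to the user.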
Rao fails to teach generating, within an issue data store, an issue data structure based on unstructured issue information…; identifying, from the issue data store, a set of issues associated with the version, wherein the set of issues includes a first issue that is one of a confidentiality issue, an integrity issue, and an availability issue, and a second issue that is another one of a confidentiality issue, an integrity issue, or an availability issue, the second issue thus having a different type than the first issue; and generating an aggregated score for the version based on the set of issues associated with the version, wherein each of the first issue and the second issue has a respective score with which the aggregated score is generated.

However, Wang teaches generating, within an issue data store, an issue data structure based on unstructured issue information… (page 2, section 2, Evaluating Software Trustworthiness, second paragraph; the gathered vulnerability information is structured into the ontology called OVM, Ontology for Vulnerability Management); identifying, from the issue data store, a set of issues associated with the version, wherein the set of issues includes a first issue that is one of a confidentiality issue, an integrity issue, and an availability issue, and a second issue that is another one of a confidentiality issue, an integrity issue, or an availability issue, the second issue thus having a different type than the first issue (page 8, section 1.1: issues associated with the specific version are retrieved, and the issues relate to availability, confidentiality, and integrity as depicted in its metrics: ConfImpact, IntegImpact, AvailImpact); and generating an aggregated score for the version based on the set of issues associated with the version, wherein each of the first issue and the second issue has a respective score with which the aggregated score is generated (page 8, sections 1.1-1.5; the EnvironmentalScore signifies the aggregate score that has been generated. This score is generated based on each issue having a rating of "P" for Partial, "N" for None, or "C" for Complete. The Examiner's interpretation is that P, N, and C can readily be substituted with numerical scores such as 1, 0, and 2, respectively).

Wang and Rao are analogous art that both teach accessing data and displaying it to users. Wang specifically teaches the method of retrieving data from the NVD and using the CVSS scoring system. Rao is a system that has the capability of accessing data from a multitude of external databases (not only the NVD), but it makes sense for the system to use the NVD as a data source for vulnerability data since, as stated in Wang, the NVD provides "standardized information regarding existing vulnerabilities for most of the software products available today" (Wang, pg. 160). Similarly, it makes sense for Rao to use CVSS as a scoring tool when accessing the vulnerability data since the scoring tool is already integrated into the NVD, and integrated features and functionalities are oftentimes more convenient to use. Because CVSS scores include Confidentiality Impact (CI), Integrity Impact (II), and Availability Impact (AI) score components in its evaluations, it is reasonable to assume that the CVSS evaluates issues based on categories such as confidentiality, integrity, and availability. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rao to incorporate the teachings of Wang to use the NVD as a way to identify a set of issues associated with the version, wherein the set of issues includes two or more of a confidentiality issue, an integrity issue, and an availability issue.

Regarding Claim 15, Rao in view of Wang teaches the method of Claim 14.
Rao further teaches wherein providing the indication of the aggregated score further comprises providing an indication of at least one issue of the identified set of issues (Col 12, Line 66-Col 13, Line 3: "In 1706, the at least one processor 2302 may be configured to create at least one new entry in a tracking system (e.g., bug tracking system or module), for example, corresponding to one or more items extracted/parsed in 1702 and/or 1704 as described above." In this example, the identified issues can be sent to other modules for further processing).

Regarding Claim 16, Rao in view of Wang teaches the method of Claim 14. Rao fails to teach wherein the aggregated score is generated based on one or more of: a confidentiality score component for the confidentiality issue; an integrity score component for the integrity issue; or an availability score component for the availability issue.

However, Wang teaches the method of claim 14, wherein the aggregated score is generated based on one or more of: a confidentiality score component for the confidentiality issue; an integrity score component for the integrity issue; or an availability score component for the availability issue (Wang utilizes data from the NVD in order to identify vulnerability issues: "NVD could be automatically populated as an instance of the Vulnerability concept…[…]… NVD also integrates CVSS [1] as impact metrics…" (Wang, pg. 162), and the CVSS base score includes Confidentiality Impact (CI), Integrity Impact (II), and Availability Impact (AI), which means the vulnerability issues generated regard confidentiality, integrity, and availability (Wang, pg. 164)).
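As an aside (not part of the record), the per-category score components discussed for Claim 16, together with the user-configured weights recited in Claim 17, suggest a weighted-sum aggregation. The sketch below is a hedged illustration under that assumption; the function name, default weights, and the weighted-sum formula are all hypothetical rather than anything taught by Rao or Wang.

```python
# Hypothetical sketch of a weighted C/I/A aggregation: the component names,
# default weights, and weighted-sum formula are assumptions for illustration.

def weighted_aggregate(conf_score: float, integ_score: float,
                       avail_score: float,
                       w_conf: float = 1.0, w_integ: float = 1.0,
                       w_avail: float = 1.0) -> float:
    """Aggregated score as a weighted sum of C/I/A score components."""
    return w_conf * conf_score + w_integ * integ_score + w_avail * avail_score

# Example: a deployment that weights availability twice as heavily as
# integrity, and confidentiality half as heavily.
score = weighted_aggregate(conf_score=1, integ_score=2, avail_score=2,
                           w_conf=0.5, w_integ=1.0, w_avail=2.0)
print(score)  # 6.5
```

The point of the sketch is only that the three components remain separable inputs, so "user-configured weights" can bias the aggregate toward whichever impact category matters most in a given environment.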
Because CVSS scores include Confidentiality Impact (CI), Integrity Impact (II), and Availability Impact (AI) score components in their evaluations, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Rao to incorporate the teachings of Wang to add CVSS as a way to generate the aggregated score based on a confidentiality score component, an integrity score component, and an availability score component.

Regarding Claim 17, Rao in view of Wang teaches the method of Claim 16. Rao further teaches wherein the aggregated score is generated based on a set of user-configured weights including at least one of: a first weight for the confidentiality score component; a second weight for the integrity score component; or a third weight for the availability score component (any of these calculated scores "may have predetermined or customized quantitative values or weights assigned for purposes of evaluating specific parameters at a given time of 1806" (Col 19, Line 15-27); see FIG. 21. Furthermore, "such qualitative and/or quantitative values or weights may be processed accordingly via at least one algorithm to generate at least one evaluation result" (Col 19, Line 25-27)).

Regarding Claim 18, Rao in view of Wang teaches the method of Claim 14. Wang further teaches wherein the set of issues relates to a vulnerability for the version; and an operational defect for the version (Wang, page 8, table for Vulnerabilities in case 1 for Mozilla Firefox 3, outlines an example in which the set of issues relates to a vulnerability for the version, which includes the impact to confidentiality, integrity, and availability, all of which can affect the operational effectiveness of the version. The BRI of "operational defect" is any defect, problem, or issue within a system, application, or product that affects operational effectiveness in a negative manner).

Regarding Claim 19, Rao in view of Wang teaches the method of Claim 14. Rao further teaches wherein the set of data sources comprises two or more of: a vendor of the hardware or software, a centralized data source, or a crowd-sourced data source (Col 10, Line 9-13: data sources 560 may include an issue tracker, ticket system, etc.).

Regarding Claim 20, Rao in view of Wang teaches the method of Claim 14. Rao further teaches wherein the score request is received as part of a change management process corresponding to the environment ("Based on these inputs to a policy-driven DevSecOps pipeline 100 that may include an intelligent risk-based engine with at least one adaptive pipeline module 400, and as a result of implementing the enhanced technology described further elsewhere herein, an intelligent DevSecOps workflow may be efficiently scaled to handle security-related issues safely and transparently for large-scale software deployments, even when their complexity grows beyond the scalability or capability of a team of any number of engineers, developers, or managers." The request can be received as part of a highly configurable, scalable deployment infrastructure that includes a change management process).

Regarding Claim 21, Rao in view of Wang teaches the system of Claim 1, wherein the set of data sources comprises: a first data source that is a vulnerability database; and a second data source comprising different unstructured issue information than the vulnerability database (Col 10, Line 9-13: data sources 560 may include an issue tracker, ticket system, etc.; and Col 13, Line 62-65: vulnerability remediation guidance can be provided from internal or external databases).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
"Risk Assessment Software: manage your information security risk with Tandem" (Information Security Risk Assessment, Tandem App, captured on November 28th, 2021 via Wayback Machine; accessed on January 2nd, 2026) teaches a commercial product that collects information about assets, threats, and vulnerabilities and uses CIA-based evaluation to provide a separate rating for confidentiality, integrity, and availability, as well as a combined CIA rating and an overall risk score.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAYO LISA RUSIN, whose telephone number is (703) 756-1679. The examiner can normally be reached Monday-Friday, 8:30-5:00 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ashish Thomas, can be reached at 571-272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.L.R./ Examiner, Art Unit 2114 /ASHISH THOMAS/Supervisory Patent Examiner, Art Unit 2114

Prosecution Timeline

Feb 17, 2023
Application Filed
Jun 17, 2024
Non-Final Rejection — §103
Sep 03, 2024
Interview Requested
Sep 17, 2024
Examiner Interview Summary
Sep 17, 2024
Applicant Interview (Telephonic)
Dec 20, 2024
Response Filed
Mar 18, 2025
Final Rejection — §103
Sep 22, 2025
Request for Continued Examination
Oct 01, 2025
Response after Non-Final Action
Jan 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591500
Event Monitoring and Code Autocorrecting Batch Processing System
2y 5m to grant Granted Mar 31, 2026
Patent 12579040
Optimized Snapshot Storage And Restoration Using An Offload Target
2y 5m to grant Granted Mar 17, 2026
Patent 12566670
SUPPORTING AUTOMATIC AND FAILSAFE BOOTING OF BMC AND BIOS FIRMWARE IN A CRITICAL SECURED SERVER SYSTEM
2y 5m to grant Granted Mar 03, 2026
Patent 12554601
ELECTRONIC APPARATUS AND CONTROL METHOD THEREROF FOR HANDLING A CEC MALFUNCTION
2y 5m to grant Granted Feb 17, 2026
Patent 12554609
DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR PROVIDING ENVIRONMENT TRACKING CONTENT
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
91%
Grant Probability
99%
With Interview (+13.3%)
2y 3m
Median Time to Grant
High
PTA Risk
Based on 23 resolved cases by this examiner. Grant probability derived from career allow rate.
