DETAILED ACTION
This Final Office Action is in response to the amendment filed on 10/17/2025. Claims 1, 5, 7-8, and 12 have been amended. Claims 1-13 remain pending in the application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings filed on 05/22/2023 are accepted.
Response to Amendment
Applicant’s amendments to the Specification have overcome the objection previously set forth in the Non-Final Office Action mailed on 07/17/2025.
Response to Arguments
Applicant stated in pages 8-10: “In contrast, Shakhzadyan performs static vulnerability assessment and attack-tree scoring based on configuration data and likelihood metrics. While Shakhzadyan discusses ‘impact scores,’ these are derived from threat likelihood and remediation cost, not from measurable deviations in system behavior, component-level mapping, or AI/ML-driven prediction of operational degradation. Importantly, Shakhzadyan does not measure any change in operational performance, nor does it establish a baseline operational model against which deviations are detected, or forecast future degradation using a trained predictive model. The claimed invention, by contrast, is directed to real-world operational impact prediction rather than retrospective vulnerability scoring. Similarly, claim 7, as amended, now requires generation and ranking of impact determination groups based on data-driven associations between attack and consequence groups using models trained on historical operational-impact data. Shakhzadyan contains no teaching or suggestion of forming such data-driven impact-determination groups or applying predictive scoring to forecast operational outcomes. Claim 8 likewise recites a system configured to receive operational state or behavioral information, detect measurable deviations, map those deviations to physical components, and generate ranked operational-impact events using predictive modeling. Such capabilities are absent from Shakhzadyan's static risk-ranking server.” The same argument pertaining to deviation from a baseline was presented with respect to the 35 U.S.C. 103 rejection. Examiner respectfully disagrees.
Examiner submits that Shakhzadyan discloses in, e.g., Col. 9 lines 63-67 and Col. 10 lines 1-5, “…the analytic server may use physics-based models to determine whether data reported by ADS-B (automatic dependent surveillance-broadcast) transponders or weather stations are consistent with other reports, or whether it is aberrant enough to warrant investigation…”, where the analytic server determines whether reported data deviates from consistent reports, which constitute the operational model. Shakhzadyan further discloses using aggregation rules and aggregation functions. Simanovsky is relied upon as explicitly disclosing in [0030-0033] that classifier learning is utilized. Claim 8 recites limitations similar to those of claim 1; therefore, the rationale above applies to claim 8. With respect to claim 7, Sorani discloses ranking a list of threats associated with attacks and impacts/consequences. Simanovsky is relied upon as explicitly disclosing in [0030-0033] observed data 104 from entities 102, where trained classifier learning is utilized to determine impact scores based on historical data. Please see the detailed rejection below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 6, 8-10 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Shakhzadyan (US 11444974 B1) in view of Simanovsky (US 20200382534 A1).
Regarding claim 1, Shakhzadyan teaches a method for assessing an impact of a cyber-attack on a physical, operational system by one or more processors of a cyber-attack assessment system (Shakhzadyan Abstract “The analytic server may generate reports comprising a list of the prioritized attacks and recommendation actions to mitigate the attacks.”, Col. 1 line 54-56 “What is therefore desired is to have a system that builds threat modeling tools in cyber-physical systems that analyze and prioritize the impact of physical attacks.”), comprising:
receiving information indicative of an operational state or behavior of one or more physical components of the operational system (Shakhzadyan Col. 4 line 1-8 “…the analytic server may detect vulnerabilities due to misconfiguration, hardware lifecycle attack threats based on the data provided by the circuit datasheets, physical access attacks such as BadUSB, radio based spoofing attacks; as well as a whole host of threats through more conventional communication links (e.g., Ethernet)”, Col. 9 line 63-67 and Col. 10 line 1-5 “The analytic server may monitor various devices of the plurality of cyber-physical systems connected with each other within the distributed network infrastructure. For example, the analytic server may use physics-based models to determine threats and attacks on the physical devices within the plurality of cyber-physical systems. The analytic server may monitor and collect the testable results of the physical devices, which correspond to the leaf nodes of the attack tree.”, where the analytic server monitors and receives testable results/information/data indicating the state/behavior of devices in the system);
determining at least one observable impact of a cyber-attack on an operation of the operational system (Shakhzadyan Col. 4 line 1-8 “…the analytic server may detect vulnerabilities due to misconfiguration, hardware lifecycle attack threats based on the data provided by the circuit datasheets, physical access attacks such as BadUSB, radio based spoofing attacks; as well as a whole host of threats through more conventional communication links (e.g., Ethernet). Aside from the actual vulnerabilities, the report may include threat likelihood, impact, and remediation costs. The report may also include possible mitigation and further testing suggestions (e.g., in cases of hardware lifecycle attacks).”),
wherein the observable impact comprises a measurable deviation in a performance parameter of the operational system relative to a baseline operational model (Shakhzadyan Col. 9 line 60-65 and Col. 10 line 7-15 “…the analytic server may monitor systems and receive electronic notifications of alerts in real-time from a plurality of devices …the analytic server may use physics-based models to determine whether data reported by ADS-B (automatic dependent surveillance-broadcast) transponders or weather stations are consistent with other reports, or whether it is aberrant enough to warrant investigation. Automatic dependent surveillance-broadcast (ADS-B) is a surveillance technology in which an aircraft determines its position via satellite navigation and periodically broadcasts it, enabling it to be tracked.”, where the observable impact comprises an aberration/deviation from consistent, i.e., baseline, reports that describe the operation of the aircraft in determining its position via satellite navigation and periodically broadcasting it so that it can be tracked; further in Col. 13 line 30-34 “The function hooking and active monitor module 412 may monitor the inputs and outputs of functions to determine whether the inputs and outputs are consistent with the cyber-physical system configuration.”);
mapping the at least one observable impact to one or more physical components of the operational system based on the received operational state information (Shakhzadyan Col. 11 line 20-24 “…determine the security attacks (e.g., threat likelihood) on various nodes of the attack tree, the analytic server may also determine the impacts of the attacks on the various nodes, and the costs to remedy the impacts on the various nodes.”, Col. 12 line 49-50, “In the process of traversing the attack tree from the bottom up, the analytic server may determine the threat likelihood, impact, remediation cost on each node of the attack tree.”, where the observable/monitored impact is based on received data provided indicating the operational state information as disclosed in e.g. Col. 9 line 63-67 and Col. 10 line 1-5 and Col. 11 line 10-13, 20-24, 58-61);
determining an impact prediction score for the at least one observable impact, wherein the score is indicative of a predictive impact of the cyber-attack on operation of the one or more components of the operational system (Shakhzadyan Col. 11 line 10-13, 20-24, 58-61 “The analytic server may determine an impact score for each of the one or more attacks by correlating physical configuration data of the plurality of cyber-physical systems…determine the security attacks (e.g., threat likelihood) on various nodes of the attack tree, the analytic server may also determine the impacts of the attacks on the various nodes, and the costs to remedy the impacts on the various nodes.…After the analytic server detects the one or more attacks in the distributed network infrastructure of the cyber-physical systems, the analytic server may rank and prioritize the one or more attacks based on the impact scores.”), and
[wherein the impact prediction score is determined using a predictive model comprising one or more of an artificial-intelligence model, a machine- learning model, a statistical model, or a neural network;]
generating a ranked list of cyber-attack impact events based on the corresponding impact prediction scores; and presenting the ranked list to a user through a graphical or analytical interface (Shakhzadyan Col. 11 line 63-65 “The analytic server may display the reports in a dashboard of a user interface based on the ranking. The reports in the dashboard may comprise the list of the prioritized attacks.”, Col. 12 line 49-50, 58-63 “In the process of traversing the attack tree from the bottom up, the analytic server may determine the threat likelihood, impact, remediation cost on each node of the attack tree. …As shown in the figure, the report 324 may comprise a list of action items for improving the system security. For example, the report 324 may include possible mitigation and further testing suggestions. The list of action items may be ordered by the impact and/or cost.”, where the impact is determined by an impact score, as disclosed in Col. 11 line 10-25 “The analytic server may determine an impact score for each of the one or more attacks by correlating physical configuration data of the plurality of cyber-physical systems (including the first and second cyber-physical systems). For example, the analytic server may correlate context and configuration data from disparate cyber-physical systems and determine overall system risk and impact. The analytic server may not only determine if the combination of correlated data indicates an attack, but also how much of an impact the attack might have on the distributed network infrastructure. In addition to using the attack tree to determine the security attacks (e.g., threat likelihood) on various nodes of the attack tree, the analytic server may also determine the impacts of the attacks on the various nodes, and the costs to remedy the impacts on the various nodes.”).
Shakhzadyan discloses in e.g. Col. 5 line 23-27 “The analytic server 102 may build a security application 116 by using an attack tree model based on a set of aggregation rules, which dictate how various metrics are computed in terms of lower-level data. In the security application 116, the analytic server 102 may support a large set of aggregation functions, and the user can define custom functions if needed. The analytic server 102 may refine the interface for aggregation functions and provide a set of aggregators specific to assessing real-time threat indicator data. The results of the aggregation rules can be in standard form such as National Institute of Standards and Technology (NIST) Common Vulnerability Scoring System (CVSS) vectors or costs, or in mission domain-specific terms. As data arrives, the metrics will be recomputed in real-time, “bubbling up” the tree as appropriate.” However, Shakhzadyan does not explicitly disclose learning.
Simanovsky discloses the impact prediction score is determined using a predictive model comprising one or more of an artificial-intelligence model, a machine- learning model, a statistical model, or a neural network (Simanovsky [0030-0033] illustrates in Figure 1 observed data 104 from entities 102, where classifier learning is utilized to determine impact scores).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shakhzadyan to incorporate the teaching of Simanovsky to utilize the above feature, with the motivation of assessing risks by utilizing learning techniques, as recognized by Simanovsky (Abstract and throughout).
Regarding claim 8, claim 8 recites limitations similar to those of claim 1 and is therefore rejected under the same rationale as applied to claim 1.
Regarding claim 3, Shakhzadyan in view of Simanovsky teaches the method of claim 1, wherein the one or more components comprises at least one of: a physical device; a network interface; a network; a data packet; a data object; a data protocol; and a software application (Shakhzadyan Col. 12 line 49-50, “In the process of traversing the attack tree from the bottom up, the analytic server may determine the threat likelihood, impact, remediation cost on each node of the attack tree. ”).
Regarding claim 10, claim 10 recites limitations similar to those of claim 3 and is therefore rejected under the same rationale as applied to claim 3.
Regarding claim 6, Shakhzadyan in view of Simanovsky teaches the method of claim 1.
Shakhzadyan discloses in e.g. Col. 5 line 23-27 “The analytic server 102 may build a security application 116 by using an attack tree model based on a set of aggregation rules, which dictate how various metrics are computed in terms of lower-level data. In the security application 116, the analytic server 102 may support a large set of aggregation functions, and the user can define custom functions if needed. The analytic server 102 may refine the interface for aggregation functions and provide a set of aggregators specific to assessing real-time threat indicator data. The results of the aggregation rules can be in standard form such as National Institute of Standards and Technology (NIST) Common Vulnerability Scoring System (CVSS) vectors or costs, or in mission domain-specific terms. As data arrives, the metrics will be recomputed in real-time, “bubbling up” the tree as appropriate.” However, Shakhzadyan does not explicitly disclose learning.
Simanovsky discloses wherein the impact prediction score is determined using a process comprising one or more of: artificial intelligence; machine learning; mathematical modeling; statistical modeling; and a neural network (Simanovsky [0030-0033] illustrates in Figure 1 observed data 104 from entities 102, where classifier learning is utilized to determine impact scores).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shakhzadyan to incorporate the teaching of Simanovsky to utilize the above feature, with the motivation of assessing risks by utilizing learning techniques, as recognized by Simanovsky (Abstract and throughout).
Regarding claim 13, claim 13 recites limitations similar to those of claim 6 and is therefore rejected under the same rationale as applied to claim 6.
Regarding claim 9, Shakhzadyan in view of Simanovsky teaches the system of claim 8, wherein the determining at least one observable impact comprises determining an impact to a mission of the operational system (Shakhzadyan Col. 12 line 49-50, “In the process of traversing the attack tree from the bottom up, the analytic server may determine the threat likelihood, impact, remediation cost on each node of the attack tree.”).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Sorani (US 20220394053 A1) in view of Simanovsky (US 20200382534 A1).
Regarding claim 7, Sorani teaches a method for assessing an impact of a cyber-attack on an operational system by one or more processors of a cyber-attack assessment system (Sorani Abstract “The disclosure relates to systems and methods for determining a cyber risk level of an asset node associated with one or more functional aspects of a vehicle and assessing a node vulnerability score. Specifically, the disclosure relates to systems and methods of identifying, analyzing, and remediating vulnerabilities of networked vehicle components to various malicious exploits, by simulating attack on one or more vehicle nodes using known vulnerabilities under operational conditions.”, [0028] discloses assessing the impact of attacks), comprising:
receiving cyber threat data, wherein cyber threat data indicates aspects of one or more cyber threats (Sorani discloses receiving data to be analyzed for threats in [0029-0036, 0045]);
grouping the one or more cyber threats into at least one of an attack group and at least one consequence group based on the cyber threat data and detected observable impacts, wherein a cyber threat is grouped into at least a consequence group when the cyber threat data or operational telemetry indicates an observable consequence, and into an attack group when the cyber-threat data indicates an attack characteristic (Sorani discloses in [0045] different attack types “…generating and, using the display module-presenting a sorted (e.g., by criticality index) list of threats associated with the plurality of units determined to be compromised in simulation 350 according to at least one of: an attack type, an attack vector, attack surface, impact on privacy, impact on operational safety, deviation from a regulation (e.g., ISO/IEC/SAE 21434, UNECE WP.29 GRVA), compromise level of the plurality of units determined to be compromised in the simulation, and a criticality of components affected by the simulated attack. The attack type can be, for example, an unintended data disclosure, a denial of service (DoS), a remote code execution (RCE), unauthorized privilege association (PE), and a combination comprising one or more of the foregoing.”, [0059] further discloses different attack types and different impacts to the attack types “That revised, ranked score 345 is cross referenced 518 against the relevant attack vectors 305, yielding 509 a revised exposure heat map 320, illustrating the critical ECUs weighted by the relevant attack vector and the attack type. Using the revised exposure map 320, as well as damage class table 360 provided 503 by the asset module 100 backend management server 130, to the system impact calculator 307, all the processed data is rendered 511 onto a single dashboard 350, with a ranked list of the top damage class score (e.g., safety, financial, legal, quality, reputation), top vulnerability, and top vectors. The dashboard can further provide the recommendation for remediation and once selected, iteratively, recalculate the overall system impact.”, [0071] “…associating a set of vulnerabilities with a respective set of severities; and based on the associated vulnerabilities and their corresponding severity, generating and presenting a list of sorted attack types for at least one of the plurality of units to a user, (xii) wherein the severity is increased if the vulnerability is exploitable”);
associating the consequence group with at least one system impact category, wherein the at least one system impact category is indicative of a direct impact of at least one cyber threat within the consequence group on at least one component of the system (Sorani impact category interpreted as safety, financial, legal, quality, reputation in [0059]);
generating a plurality of impact-determination groups, each associating an attack group with a corresponding consequence group (Sorani [0045] different attack types “…generating and, using the display module-presenting a sorted (e.g., by criticality index) list of threats associated with the plurality of units determined to be compromised in simulation 350 according to at least one of: an attack type, an attack vector, attack surface, impact on privacy, impact on operational safety, deviation from a regulation (e.g., ISO/IEC/SAE 21434, UNECE WP.29 GRVA), compromise level of the plurality of units determined to be compromised in the simulation, and a criticality of components affected by the simulated attack. The attack type can be, for example, an unintended data disclosure, a denial of service (DoS), a remote code execution (RCE), unauthorized privilege association (PE), and a combination comprising one or more of the foregoing.”, where each threat associated with its corresponding attack type, impacts, etc. is considered a group);
determining an impact-prediction score for each impact-determination group [using a model trained on historical operational-impact data]; and ranking the impact-determination groups within a list of predicted impact events based on the respective impact-prediction scores ([0059] “…all the processed data is rendered 511 onto a single dashboard 350, with a ranked list of the top damage class score (e.g., safety, financial, legal, quality, reputation), top vulnerability, and top vectors. The dashboard can further provide the recommendation for remediation and once selected, iteratively, recalculate the overall system impact.”).
However, Sorani does not explicitly disclose determining the impact-prediction score using a model trained on historical operational-impact data.
Simanovsky discloses determining an impact-prediction score for each impact-determination group using a model trained on historical operational-impact data (Simanovsky [0030-0033] illustrates in Figure 1 observed data 104 from entities 102, where classifier learning is utilized to determine impact scores based on historical data).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Sorani to incorporate the teaching of Simanovsky to utilize the above feature, with the motivation of assessing risks by utilizing learning techniques, as recognized by Simanovsky (Abstract and throughout).
Claims 2, 4-5, and 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Shakhzadyan (US 11444974 B1) in view of Simanovsky (US 20200382534 A1) and Sorani (US 20220394053 A1).
Regarding claim 2, Shakhzadyan in view of Simanovsky teaches the method of claim 1, wherein the determining at least one observable impact comprises determining an impact to a mission of the operational system (Shakhzadyan Col. 12 line 49-50, “In the process of traversing the attack tree from the bottom up, the analytic server may determine the threat likelihood, impact, remediation cost on each node of the attack tree.”, where the desired mission is prevention of data leaks as disclosed in Col. 2 line 1.).
Shakhzadyan in view of Simanovsky does not explicitly disclose the below limitation.
Sorani discloses wherein the mission comprises at least one of: an operational status of the system, an efficiency rating of the system, a desired output of the system, and an availability time of the system (Sorani discloses desired output of e.g. safety as disclosed in e.g. [0059, 0071]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shakhzadyan in view of Simanovsky to incorporate the teaching of Sorani to utilize the above feature, with the motivation of remediating vulnerabilities, as recognized by Sorani (Abstract and throughout).
Regarding claim 4, Shakhzadyan in view of Simanovsky teaches the method of claim 1.
Shakhzadyan in view of Simanovsky does not explicitly disclose the below limitation.
Sorani discloses further comprising an impacted components list for the operational system, wherein the impacted components are ranked within the impacted components list based on a prioritization of the components, and wherein the prioritization of the components is based on the impact prediction score associated with the at least one observable impact (Sorani [0045] “The methods provided herein can further comprise generating and, using the display module-presenting a sorted (e.g., by criticality index) list of threats associated with the plurality of units determined to be compromised in simulation 350 according to at least one of: an attack type, an attack vector, attack surface, impact on privacy, impact on operational safety, deviation from a regulation (e.g., ISO/IEC/SAE 21434, UNECE WP.29 GRVA), compromise level of the plurality of units determined to be compromised in the simulation, and a criticality of components affected by the simulated attack.”, [0059] further discloses heatmap illustrating critical ECUs weighted by the relevant attack, further in [0043, 0057, 0059, 0071]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shakhzadyan in view of Simanovsky to incorporate the teaching of Sorani to utilize the above feature, with the motivation of remediating vulnerabilities, as recognized by Sorani (Abstract and throughout).
Regarding claim 11, claim 11 recites similar limitations to claim 4, therefore, rejected with the same rationale applied to claim 4.
Regarding claim 5, Shakhzadyan in view of Simanovsky teaches the method of claim 1, further comprising: receiving cyber threat data, wherein cyber threat data indicates aspects of one or more cyber threats (Shakhzadyan discloses receiving data to be analyzed for threats in e.g. Col. 3 line 35-36 and Col. 4 line 1-8, Sorani further discloses receiving data to be analyzed for threats in [0029-0036, 0045]).
Shakhzadyan in view of Simanovsky does not explicitly disclose the below limitations.
Sorani discloses grouping the one or more cyber threats into at least one of an attack group and a consequence group based on the cyber threat data, wherein a cyber threat is grouped into at least a consequence group if the cyber threat data indicates a consequence associated with the cyber threat, and wherein the cyber threat is grouped into at least an attack group if the cyber threat data indicates an attack type associated with the cyber threat (Sorani discloses in [0045] different attack types, [0059] further discloses different attack types and different impacts to the attack types “That revised, ranked score 345 is cross referenced 518 against the relevant attack vectors 305, yielding 509 a revised exposure heat map 320, illustrating the critical ECUs weighted by the relevant attack vector and the attack type. Using the revised exposure map 320, as well as damage class table 360 provided 503 by the asset module 100 backend management server 130, to the system impact calculator 307, all the processed data is rendered 511 onto a single dashboard 350, with a ranked list of the top damage class score (e.g., safety, financial, legal, quality, reputation), top vulnerability, and top vectors. The dashboard can further provide the recommendation for remediation and once selected, iteratively, recalculate the overall system impact.”, [0071] “…associating a set of vulnerabilities with a respective set of severities; and based on the associated vulnerabilities and their corresponding severity, generating and presenting a list of sorted attack types for at least one of the plurality of units to a user, (xii) wherein the severity is increased if the vulnerability is exploitable”);
associating the consequence group with at least one system impact category, wherein the at least one system impact category is indicative of a direct impact of at least one cyber threat within the consequence group on at least one component of the system (Sorani impact category interpreted as safety, financial, legal, quality, reputation in [0059]);
generating a list of impact determination groups, wherein each impact determination group is based on an association between the consequence group and the at least one attack group; associating the list of impact determination groups with at least one observable impact; determining an impact prediction score for each impact determination group in the list, wherein the impact prediction score is based on the at least one observable impact; and determining an impact prediction based on each impact prediction score (Sorani [0059] “…a single dashboard 350, with a ranked list of the top damage class score (e.g., safety, financial, legal, quality, reputation)”, [0071] “…associating a set of vulnerabilities with a respective set of severities; and based on the associated vulnerabilities and their corresponding severity, generating and presenting a list of sorted attack types for at least one of the plurality of units to a user, (xii) wherein the severity is increased if the vulnerability is exploitable”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Shakhzadyan in view of Simanovsky to incorporate the teaching of Sorani to utilize the above feature, with the motivation of remediating vulnerabilities, as recognized by Sorani (Abstract and throughout).
Regarding claim 12, claim 12 recites similar limitations to claim 5, therefore, rejected with the same rationale applied to claim 5.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BASSAM A NOAMAN whose telephone number is (571)272-2705. The examiner can normally be reached Monday-Friday 8:30 AM-5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Eleni A. Shiferaw can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BASSAM A NOAMAN/Primary Examiner, Art Unit 2497