Prosecution Insights
Last updated: April 19, 2026
Application No. 18/716,860

CYBERSECURITY STRATEGY ANALYSIS MATRIX

Non-Final OA: §101, §102, §103, §112

Filed: Jun 05, 2024
Examiner: RONI, SYED A
Art Unit: 2432
Tech Center: 2400 — Computer Networks
Assignee: Level 6 Holdings Inc.
OA Round: 1 (Non-Final)

Grant Probability: 82% (Favorable); 99% with interview
Projected OA Rounds: 1-2
Projected Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 82% (537 granted / 655 resolved), above average at +24.0% vs TC avg.
Interview Lift: +22.0% allowance in resolved cases with an interview versus without, a strong lift.
Typical Timeline: 2y 9m average prosecution; 26 applications currently pending.
Career History: 681 total applications across all art units.
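The headline figures above reduce to simple ratio arithmetic. A minimal Python sketch re-deriving them from the counts shown (the Tech Center baseline is back-computed from the reported +24.0% delta, not given directly, so it is an assumption):

```python
# Re-derive the examiner cards above from the raw counts (sketch; the
# TC-average baseline is implied by the +24.0% delta, not reported).
granted, resolved = 537, 655

career_allow_rate = granted / resolved          # 0.8198... -> shown as 82%
tc_avg_rate = career_allow_rate - 0.240         # implied TC average, ~58%

print(f"Career allow rate: {career_allow_rate:.0%}")            # 82%
print(f"vs TC avg: {(career_allow_rate - tc_avg_rate):+.1%}")   # +24.0%

# Interview lift is reported as +22.0 percentage points among resolved
# cases with an interview versus without (value taken from the card above).
interview_lift = 0.220
print(f"Interview lift: {interview_lift:+.1%}")                 # +22.0%
```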

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§102: 31.1% (-8.9% vs TC avg)
§103: 33.1% (-6.9% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

Deltas are relative to the Tech Center average estimate. Based on career data from 655 resolved cases.
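Each row pairs the examiner's rate with its distance from the Tech Center baseline. A short sketch back-computing that baseline from the table values; notably, all four rows imply the same 40.0% baseline, suggesting the tool uses a single TC-wide estimate:

```python
# Back-compute the TC-average baseline per statute from the table above
# (examiner rate minus reported delta; all values in percentage points).
rows = {"§101": (14.5, -25.5), "§102": (31.1, -8.9),
        "§103": (33.1, -6.9), "§112": (10.9, -29.1)}

for statute, (rate, delta) in rows.items():
    baseline = rate - delta   # e.g. §101: 14.5 - (-25.5) = 40.0
    print(f"{statute}: {rate:.1f}% vs TC avg {baseline:.1f}% ({delta:+.1f} pp)")
```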

Office Action

Rejections: §101, §102, §103, §112
DETAILED ACTION

Authorization for Internet Communications

The examiner encourages Applicant to submit an authorization to communicate with the examiner via the Internet by making the following statement (from MPEP 502.03): "Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with the undersigned and practitioners in accordance with 37 CFR 1.33 and 37 CFR 1.34 concerning any subject matter of this application by video conferencing, instant messaging, or electronic mail. I understand that a copy of these communications will be made of record in the application file." Please note that the above statement can only be submitted via Central Fax (not the examiner's fax), regular postal mail, or EFS-Web using form PTO/SB/439.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 06/05/2024 is being considered by the examiner.

Specification

The disclosure is objected to because of the following informality: the "Brief Description of the Drawings" does not include a description of each drawing. Appropriate correction is required.

The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o). Correction of the following is required: the originally filed specification fails to provide proper antecedent basis for the recitation "a tangible, non-transitory computer-readable medium storing executable instructions for predicting the time to replace one or more vehicle seats…" of claim 20, and the applicant has not shown support for those features. The specification discloses a tangible, non-transitory computer-readable medium storing executable instructions for analyzing cybersecurity data (see Specification, para. 0118); however, there is no disclosure of the claimed feature in the specification, nor is it shown in the applicant's drawings. Thus, the specification fails to provide antecedent basis for the claim recitation. Furthermore, the applicant has not pointed out where the claim is supported, thus failing to enable a reasonable interpretation of the claims. Appropriate correction is required.

Claim Objections

Claims 3, 7, 10, and 20 are objected to because of the following informalities. Regarding claim 3, the limitation "the automatically retrieved data" lacks proper antecedent basis. Regarding claim 7, the limitation "the one or more statistical modeling algorithm" lacks proper antecedent basis. Regarding claim 10, the limitations "the percent rate of error", "the ordinary least squares of the difference", "the actual resulting output", and "the ordinary mean square" lack proper antecedent basis. Regarding claim 20, the limitation "the time to replace" lacks proper antecedent basis. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 20 is rejected under 35 U.S.C. 112, first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventors, at the time the application was filed, had possession of the claimed invention. Independent claim 20 recites "a tangible, non-transitory computer-readable medium storing executable instructions for predicting the time to replace one or more vehicle seats," which is inconsistent with the remainder of the claim and unsupported by the specification. Further, the specification as originally filed discloses a tangible, non-transitory computer-readable medium storing executable instructions for analyzing cybersecurity data (para. 0118). Therefore, the specification as originally filed does not provide support for claim 20.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1 – 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (an abstract idea) without significantly more. The following is the examiner's analysis of the claimed invention.
Claim 1 is directed to an abstract idea because the following claim limitations recite an abstract idea: a method for analyzing cybersecurity data, comprising: training a learning model using a first training dataset related to at least one area of interest of cybersecurity (mental process: a human being trained with materials that help the human make decisions using a decision model), the first training dataset comprising outcome information and one or more of: (i) academic training data, (ii) open internet training data, or (iii) corporate training data (wherein the training materials comprise outcome information from one of the examples); storing the learning model (mental process: a human remembering the learning model); retrieving first collection data, the first collection data including one or more of academic data, open internet data, or corporate data, and the first collection of data is related to the at least one area of interest of cybersecurity (mental process: a human reading materials from one of these sources regarding the cybersecurity interest); analyzing, by using the stored learning model, the first collection of data (mental process: a human mentally considering the read data in light of the remembered decision model); and generating, based upon the analysis, a resulting output, the resulting output including one or more of: a strength of a cybersecurity strategy of an organization, a recommendation of a change to a cybersecurity strategy of an organization, or a predicted outcome given a cybersecurity strategy of an organization (mental process: the human determining a strength of a strategy based upon the analysis just performed using the decision model and the information read from the various sources).

Claim 1 recites the following additional elements: the method is computer-implemented; the steps are performed "by one or more processors"; the learning model is a "machine learning model"; and the storing is "in one or more memories".

Step 2A, prong 2: The claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to integrate the abstract idea into a practical application.

Step 2B: As with step 2A, prong 2, the claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to amount to significantly more than the abstract idea itself, even when the additional elements are considered alone and in combination with the abstract idea.

Therefore, the claims are directed to an abstract idea without significantly more and are unpatentable.

Claims 11 and 20 are, respectively, the system and product counterparts of abstract method claim 1 above. The analysis and findings for claim 1 therefore apply to claims 11 and 20, which are likewise directed to an abstract idea without significantly more and are unpatentable.

Claims 2 and 12 recite the following additional elements: manually retrieving data; automatically collecting data.

Step 2A, prong 2: The claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea; they merely add insignificant pre- or extra-solution activity to the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to integrate the abstract idea into a practical application.

Step 2B: As with step 2A, prong 2, the claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea; they merely add insignificant pre- or extra-solution activity to the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to amount to significantly more than the abstract idea itself, even when the additional elements are considered alone and in combination with the abstract idea.

Therefore, the claims are directed to an abstract idea without significantly more and are unpatentable.

Claims 3 and 13 are directed to an abstract idea because the following claim limitation recites an abstract idea: automatically retrieving data using one or more artificial intelligence algorithms (mental process: a human reading materials from one of these sources regarding the cybersecurity interest on his or her own initiative). Claims 3 and 13 recite the following additional element: an artificial intelligence algorithm.

Step 2A, prong 2: Claims 3 and 13 fail to recite any new additional elements relative to claims 1 and 11. The analysis and findings for step 2A, prong 2 therefore incorporate the analysis and findings for claims 1 and 11, while considering claims 3 and 13 as a whole.

Step 2B: Claims 3 and 13 fail to recite any new additional elements relative to claims 1 and 11.
The analysis and findings for step 2B therefore incorporate the analysis and findings for claims 1 and 11, while considering claims 3 and 13 as a whole.

Therefore, the claims are directed to an abstract idea without significantly more and are unpatentable.

Claims 4 and 14 recite the following additional elements: the academic data includes peer-reviewed academic research; the open internet data includes one or more of news sources, blogs, forum posts, or social media sources; the corporate data includes anonymized corporate data or attributed corporate data.

Step 2A, prong 2: The claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea; they merely add insignificant pre- or extra-solution activity to the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to integrate the abstract idea into a practical application.

Step 2B: As with step 2A, prong 2, the claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea; they merely add insignificant pre- or extra-solution activity to the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to amount to significantly more than the abstract idea itself, even when the additional elements are considered alone and in combination with the abstract idea.

Therefore, the claims are directed to an abstract idea without significantly more and are unpatentable.

Claims 5 and 15 are directed to an abstract idea because the following claim limitation recites an abstract idea: the method of claim 1, wherein the first learning model includes one or more of a descriptive analysis algorithm or a predictive analysis algorithm (mental process: the human's learned decision model includes one of these algorithms). Claims 5 and 15 recite the following additional element: a machine learning model.

Step 2A, prong 2: Claims 5 and 15 fail to recite any new additional elements relative to claims 1 and 11. The analysis and findings for step 2A, prong 2 therefore incorporate the analysis and findings for claims 1 and 11, while considering claims 5 and 15 as a whole.

Step 2B: Claims 5 and 15 fail to recite any new additional elements relative to claims 1 and 11. The analysis and findings for step 2B therefore incorporate the analysis and findings for claims 1 and 11, while considering claims 5 and 15 as a whole.

Therefore, the claims are directed to an abstract idea without significantly more and are unpatentable.

Claims 6 – 7 and 16 are directed to an abstract idea because the following claim limitation recites an abstract idea: the method of claim 1, further comprising: analyzing…using statistical modeling algorithms stored in the one or more memories, the first collection of data, the statistical modeling algorithm including a regression model (mental process: a human mentally considering the read data in light of the remembered decision model, which includes a statistical modeling algorithm that is a regression model). Claims 6 – 7 and 16 recite no new additional elements.

Step 2A, prong 2: Claims 6 – 7 and 16 fail to recite any new additional elements relative to claims 1 and 11. The analysis and findings for step 2A, prong 2 therefore incorporate the analysis and findings for claims 1 and 11, while considering claims 6 – 7 and 16 as a whole.

Step 2B: Claims 6 – 7 and 16 fail to recite any new additional elements relative to claims 1 and 11. The analysis and findings for step 2B therefore incorporate the analysis and findings for claims 1 and 11, while considering claims 6 – 7 and 16 as a whole.

Therefore, the claims are directed to an abstract idea without significantly more and are unpatentable.

Claims 8 and 17 are directed to an abstract idea because the following claim limitation recites an abstract idea: the method of claim 1, wherein the area of interest of cybersecurity includes ransomware attacks, denial of service attacks, social engineering attacks, password attacks, cloud attacks, near misses, or threat trends (mental process: a human being trained with materials of interest to him or her, including those attack types, that help the human make decisions using a decision model). Claims 8 and 17 recite no new additional elements.

Step 2A, prong 2: Claims 8 and 17 fail to recite any new additional elements relative to claims 1 and 11. The analysis and findings for step 2A, prong 2 therefore incorporate the analysis and findings for claims 1 and 11, while considering claims 8 and 17 as a whole.

Step 2B: Claims 8 and 17 fail to recite any new additional elements relative to claims 1 and 11. The analysis and findings for step 2B therefore incorporate the analysis and findings for claims 1 and 11, while considering claims 8 and 17 as a whole.

Therefore, the claims are directed to an abstract idea without significantly more and are unpatentable.
Claims 9 and 18 are directed to an abstract idea because the following claim limitations recite an abstract idea: the method of claim 1, further comprising: training a second learning model using a second training dataset related to one area of interest of cybersecurity (mental process: a human being trained with materials that help the human make decisions using a decision model), the second training dataset comprising outcome information and one or more of: (i) academic training data, (ii) open internet training data, or (iii) corporate training data (wherein the training materials comprise outcome information from one of the examples); storing the second learning model (mental process: a human remembering the learning model); identifying, using the second learning model, a second collection of data, the second collection of data including academic data, open internet data, or corporate data and related to the area of interest of cybersecurity (mental process: a human identifying materials according to a decision model from one of these sources regarding the cybersecurity interest).

Claims 9 and 18 recite the following additional elements: the method is computer-implemented; the steps are performed "by one or more processors"; the learning model is a "machine learning model"; and the storing is "in one or more memories".

Step 2A, prong 2: The claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to integrate the abstract idea into a practical application.

Step 2B: As with step 2A, prong 2, the claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to amount to significantly more than the abstract idea itself, even when the additional elements are considered alone and in combination with the abstract idea.

Therefore, the claims are directed to an abstract idea without significantly more and are unpatentable.

Claims 10 and 19 are directed to an abstract idea because the following claim limitations recite an abstract idea: the method of claim 1, wherein training the first learning model comprises: reducing the percent rate of error of generating the resulting output by calculating (i) the ordinary least squares of the differences between the generated resulting output and the actual resulting output of the first training data set, or (ii) the ordinary mean square of an aggregation of results between the generated resulting output and the actual resulting output of the first training data set (mental process: mathematical steps that a human can do in their head or via pen and paper); and generating a confidence interval based upon (i) the generated resulting output, (ii) the actual resulting output of the first training data set, or (iii) one or more standard deviations from the aggregated result (mental process: mathematical steps that a human can do in their head or via pen and paper).

Claims 10 and 19 recite the following additional elements: the method is computer-implemented; the steps are performed "by one or more processors".

Step 2A, prong 2: The claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to integrate the abstract idea into a practical application.

Step 2B: As with step 2A, prong 2, the claims fail to achieve a technical solution to a technical problem, and thus fail to provide an improvement to the functioning of a computer or to a technology itself; the claims culminate with determining a strength, recommendation, or predicted outcome of a strategy. See MPEP 2106.04(d)(1) and 2106.05(a). The additional elements are recited at a high level of generality and amount to merely using computers as a tool to implement the abstract idea. Thus, the additional elements are considered mere instructions to apply the abstract idea. See MPEP 2106.05(f). Therefore, the examiner finds that the claims fail to amount to significantly more than the abstract idea itself, even when the additional elements are considered alone and in combination with the abstract idea.

Therefore, the claims are directed to an abstract idea without significantly more and are unpatentable.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1 – 9, 11 – 18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by MO et al. (US 2019/0034845 A1) (hereinafter "MO"). MO discloses:

Regarding claim 1, a computer-implemented method for analyzing cybersecurity data, comprising: training, by one or more processors [i.e., processor 103 (see figure 1) (page 3, para 0029)], a first machine learning model using a first training dataset [i.e., a supervised machine learning model is trained (page 2, para 0022); learning during a training period using training data (page 4, para 0034); company attribute information can be input to attribute module 105 during the training period (page 4, para 0035)] related to at least one area of interest of cybersecurity, the first training dataset comprising outcome information [i.e., data feeds of real-time reports on data breaches are analyzed, cybersecurity features most relevant to recent breach scenarios are identified, and a probability of a catastrophic breach occurring is predicted based on the prevalence of the identified cybersecurity feature (page 2, para 0023); the result of learning (page 4, para 0035)] and one or more of: (i) academic training data [i.e., technical and non-technical data (page 3, para 0032)], (ii) open internet training data [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)], or (iii) corporate training data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)]; storing, by the one or more processors [i.e., processor 103 (see figure 1) (page 3, para 0029)], the first machine learning model in one or more memories [i.e., in operation, system 100 can "learn" during a training period using training data to build a supervised training model, and the result of the learning, i.e., the supervised training model, is then used to monitor whether new data exhibits the same patterns, categories, and statistical relationships (page 4, para 0035); memory 104 stores modules (para 0029) (see figure 1); note: the supervised model must be stored in memory]; retrieving, by the one or more processors, a first collection of data [i.e., new data (page 4, para 0035); company attribute information can be input to attribute module 105 during the analysis period (page 4, para 0035) (see figure 1); data feeds of real-time reports on data breaches (page 2, para 0023)], the first collection of data including one or more of academic data [i.e., technical and non-technical data (page 3, para 0032)], open internet data [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)], or corporate data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)], and the first collection of data is related to the at least one area of interest of cybersecurity [i.e., data breaches (page 2, para 0023)]; analyzing, by the one or more processors using the first machine learning model stored in the one or more memories, the first collection of data [i.e., a machine learning model is utilized to identify the most significant cybersecurity event…(page 2, para 0022); data feeds of real-time reports on data breaches are analyzed…a probability of a catastrophic breach occurring…is predicted…(page 2, para 0023); the result of the learning is then used to monitor whether new data exhibits the same patterns, categories, and statistical relationships (page 4, para 0035)]; and generating, by the one or more processors based upon the analysis, a resulting output [i.e., output of Cybersecurity Risk Level Module 108 (page 5, para 0051) (see figure 1); Cybersecurity Risk Level Module 108 quantifies a portfolio's cybersecurity risk level based on the multiplier generated by multiplier module 107 (page 5, para 0049) (see figure 1); the multiplier is generated from data gathered using machine learning techniques discussed herein (page 7, para 0076)], the resulting output including one or more of: a strength of a cybersecurity strategy of an organization [i.e., utilizing a machine learning model to quantify a portfolio's cybersecurity risk (page 4, para 0034); correlating a portfolio's risk of experiencing an adverse cybersecurity event (page 2, para 0023); a company's cybersecurity posture (page 2, para 0024)], a recommendation of a change to a cybersecurity strategy of an organization [i.e., the output of Cybersecurity Risk Level Module 108 can be utilized by action module 109 to generate steps that, if executed, will change the portfolio's cybersecurity risk level (page 5, para 0051) (see figure 1)], or a predicted outcome given a cybersecurity strategy of an organization [i.e., output of Cybersecurity Risk Level Module 108 (page 5, para 0051) (see figure 1); correlating a portfolio's risk of experiencing an adverse cybersecurity event (page 2, para 0023)].

Regarding claim 2, the method of claim 1, wherein the first collection of data includes one or more of automatically retrieved data [i.e., data feeds of real-time reports on data breaches (page 2, para 0023) (page 3, para 0032); data paths, i.e., a real-time processing path and a batch processing path (page 4, para 0039)].

Regarding claim 3, the method of claim 1, wherein the automatically retrieved data is retrieved using one or more artificial intelligence algorithms [i.e., data feeds of real-time reports on data breaches (page 2, para 0023) (page 3, para 0032); data paths, i.e., a real-time processing path and a batch processing path (page 4, para 0039); machine learning data handling (page 4, para 0035); an automated predictive model first processes raw supervised training data (page 4, para 0038)].
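As an illustration of the claimed train / store / retrieve / analyze / generate loop recited above and mapped onto MO, here is a minimal, hypothetical scikit-learn sketch. The synthetic data, file name, and features are invented for illustration only; this is not MO's system or the applicant's implementation:

```python
# Hypothetical sketch of the claim-1 workflow: train a model on labeled
# cybersecurity training data, persist it, then score newly collected data.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# (1) Train: first training dataset = feature vectors plus outcome information.
X_train = np.random.rand(200, 8)          # e.g. breach-report features (invented)
y_train = np.random.randint(0, 2, 200)    # outcome: breach occurred or not
model = LogisticRegression().fit(X_train, y_train)

# (2) Store the learned model in one or more memories (here, on disk).
joblib.dump(model, "cyber_model.joblib")

# (3) Retrieve a first collection of data (academic / open internet / corporate).
X_new = np.random.rand(10, 8)

# (4) Analyze with the stored model and (5) generate a resulting output:
# here, a predicted outcome given an organization's cybersecurity posture.
stored = joblib.load("cyber_model.joblib")
risk = stored.predict_proba(X_new)[:, 1]
print("Predicted breach risk per record:", np.round(risk, 3))
```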
Regarding claim 4, the method of claim 1, wherein: (i) the academic data includes peer-reviewed academic research [i.e., technical and non-technical data (page 3, para 0032)]; (ii) the open internet data includes one or more of one or more news sources, one or more blogs, one or more forum posts, or one or more social media sources [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)]; and (iii) the corporate data includes one or more of anonymized corporate data or attributed corporate data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)].

Regarding claim 5, the method of claim 1, wherein the first machine learning model includes one or more of a descriptive analysis algorithm [i.e., a machine learning model is utilized to identify the most significant cybersecurity events and the most significant intercedences between companies to predict an occurrence of a cybersecurity risk event (para 0022); a probability of a catastrophic breach…is predicted (para 0023); a statistical model can be trained…to fit a Bayesian model of likelihood of multiple cybersecurity events…an estimate of the risk that multiple companies will experience (para 0076 – 0077)] or a predictive analysis algorithm [i.e., analyzing degrees of dependency between a company that experienced a cybersecurity event and companies in a portfolio (para 0021); system 100 can learn during a training period by identifying patterns, categories, and statistical relationships exhibited by training data (para 0035, 0038, and 0049 – 0051)].

Regarding claim 6, the method of claim 1, further comprising: analyzing, by the one or more processors using one or more statistical modeling algorithms stored in the one or more memories, the first collection of data [i.e., a statistical model can be trained (page 7, para 0076)].

Regarding claim 7, the method of claim 1, wherein the one or more statistical modeling algorithms include a regression model [i.e., a Bayesian model of likelihood (page 7, para 0076)].

Regarding claim 8, the method of claim 1, wherein the at least one area of interest of cybersecurity includes one or more of: ransomware attacks [i.e., catastrophic breaches or significant cybersecurity events (page 2, para 0023)], denial of service attacks [i.e., data breaches or cybersecurity events (page 2, para 0023)], social engineering attacks [i.e., shared attributes (para 0045); data breaches or cybersecurity events (page 2, para 0023)], password attacks [i.e., authentication features (para 0073)], cloud attacks [i.e., data breaches or cybersecurity events (page 2, para 0023)], near misses [i.e., data breaches or cybersecurity events (page 2, para 0023)], or threat trends [i.e., data breaches or cybersecurity events (page 2, para 0023)].

Regarding claim 9, the method of claim 1, further comprising: training, by the one or more processors, a second machine learning model using a second training dataset related to at least one area of interest of cybersecurity [i.e., a supervised machine learning model is trained (page 2, para 0022); learning during a training period using training data (page 4, para 0034); company attribute information can be input to attribute module 105 during the training period (page 4, para 0035)], the second training dataset comprising outcome information and one or more of: (i) academic training data [i.e., technical and non-technical data (page 3, para 0032)], (ii) open internet training data [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)], or (iii) corporate training data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)]; storing, by the one or more processors, the second machine learning model in the one or more memories [i.e., in operation, system 100 can "learn" during a training period using training data to build a supervised training model, and the result of the learning, i.e., the supervised training model, is then used to monitor whether new data exhibits the same patterns, categories, and statistical relationships (page 4, para 0035); memory 104 stores modules (para 0029) (see figure 1); note: the supervised model must be stored in memory]; and identifying, by the one or more processors using the second machine learning model stored in the one or more memories, a second collection of data [i.e., new data (page 4, para 0035); company attribute information can be input to attribute module 105 during the analysis period (page 4, para 0035) (see figure 1); data feeds of real-time reports on data breaches (page 2, para 0023)], the second collection of data including one or more of (i) academic data [i.e., technical and non-technical data (page 3, para 0032)], (ii) open internet training data [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)], or (iii) corporate training data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)].

Regarding claim 11, a computer system for analyzing cybersecurity data [i.e., a system 100 which quantifies a cybersecurity risk level of a portfolio of companies (page 3, para 0026) (see figure 1)], comprising: one or more processors [i.e., the system comprises a processor 103 (page 3, para 0029) (see figure 1)]; one or more non-transitory program memories coupled to the one or more processors [i.e., the system comprises a memory 104 coupled to the processor (page 3, para 0029) (see figure 1)] and storing executable instructions that, when executed by the one or more processors, cause the computer system to [i.e., memory 104 stores executable instructions to be performed by the processor to perform the following steps (para 0029) (see figure 1)]: train a first machine learning model using a first training dataset [i.e., a supervised machine learning model is trained (page 2, para 0022); learning during a training period using training data (page 4, para 0034); company attribute information can be input to attribute module 105 during the training period (page 4, para 0035)] related to at least one area of interest of cybersecurity, the first training dataset comprising outcome information [i.e., data feeds of real-time reports on data breaches are analyzed, cybersecurity features most relevant to recent breach scenarios are identified, and a probability of a catastrophic breach occurring is predicted based on the prevalence of the identified cybersecurity feature (page 2, para 0023); the result of learning (page 4, para 0035)] and one or more of: (i) academic training data [i.e., technical and non-technical data (page 3, para 0032)], (ii) open internet training data [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)], or (iii) corporate training data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)]; store the first machine learning model in one or more memories [i.e., in operation, system 100 can "learn" during a training period using training data to build a supervised training model, and the result of the learning, i.e., the supervised training model, is then used to monitor whether new data exhibits the same patterns, categories, and statistical relationships (page 4, para 0035); memory 104 stores modules (para 0029) (see figure 1); note: the supervised model must be stored in memory]; retrieve a first collection of data [i.e., new data (page 4, para 0035); company attribute information can be input to attribute module 105 during the analysis period (page 4, para 0035) (see figure 1); data feeds of real-time reports on data breaches (page 2, para 0023)], the first collection of data including one or more of academic data [i.e., technical and non-technical data (page 3, para 0032)], open internet data [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)], or corporate data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)], and the first collection of data is related to the at least one area of interest of cybersecurity [i.e., data breaches (page 2, para 0023)]; analyze, using the first machine learning model stored in the one or more memories, the first collection of data [i.e., a machine learning model is utilized to identify the most significant cybersecurity event…(page 2, para 0022); data feeds of real-time reports on data breaches are analyzed…a probability of a catastrophic breach occurring…is predicted…(page 2, para 0023); the result of the learning is then used to monitor whether new data exhibits the same patterns, categories, and statistical relationships (page 4, para 0035)]; and generate, based upon the analysis, a resulting output [i.e., output of Cybersecurity Risk Level Module 108 (page 5, para 0051) (see figure 1); Cybersecurity Risk Level Module 108 quantifies a portfolio's cybersecurity risk level based on the multiplier generated by multiplier module 107 (page 5, para 0049) (see figure 1); the multiplier is generated from data gathered using machine learning techniques discussed herein (page 7, para 0076)], the resulting output including one or more of: a strength of a cybersecurity strategy of an organization [i.e., utilizing a machine learning model to quantify a portfolio's cybersecurity risk (page 4, para 0034); correlating a portfolio's risk of experiencing an adverse cybersecurity event (page 2, para 0023)], a recommendation of a change to a cybersecurity strategy of an organization [i.e., the output of Cybersecurity Risk Level Module 108 can be utilized by action module 109 to generate steps that, if executed, will change the portfolio's cybersecurity risk level (page 5, para 0051) (see figure 1); a company's cybersecurity posture (page 2, para 0024)], or a predicted outcome given a cybersecurity strategy of an organization [i.e., output of Cybersecurity Risk Level Module 108 (page 5, para 0051) (see figure 1); correlating a portfolio's risk of experiencing an adverse cybersecurity event (page 2, para 0023)].

Regarding claim 12, the system of claim 11, wherein the first collection of data includes one or more of automatically retrieved data [i.e., data feeds of real-time reports on data breaches (page 2, para 0023) (page 3, para 0032); data paths, i.e., a real-time processing path and a batch processing path (page 4, para 0039)].

Regarding claim 13, the system of claim 11, wherein the automatically retrieved data is retrieved using one or more artificial intelligence algorithms [i.e., data feeds of real-time reports on data breaches (page 2, para 0023) (page 3, para 0032); data paths, i.e., a real-time processing path and a batch processing path (page 4, para 0039); machine learning data handling (page 4, para 0035); an automated predictive model first processes raw supervised training data (page 4, para 0038)].

Regarding claim 14, the system of claim 11, wherein: (i) the academic data includes peer-reviewed academic research [i.e., technical and non-technical data (page 3, para 0032)]; (ii) the open internet data includes one or more of one or more news sources, one or more blogs, one or more forum posts, or one or more social media sources [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)]; and (iii) the corporate data includes one or more of anonymized corporate data or attributed corporate data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)].

Regarding claim 15, the system of claim 11, wherein the first machine learning model includes one or more of a descriptive analysis algorithm [i.e., a machine learning model is utilized to identify the most significant cybersecurity events and the most significant intercedences between companies to predict an occurrence of a cybersecurity risk event (para 0022); a probability of a catastrophic breach…is predicted (para 0023); a statistical model can be trained…to fit a Bayesian model of likelihood of multiple cybersecurity events…an estimate of the risk that multiple companies will experience (para 0076 – 0077)] or a predictive analysis algorithm [i.e., analyzing degrees of dependency between a company that experienced a cybersecurity event and companies in a portfolio (para 0021); system 100 can learn during a training period by identifying patterns, categories, and statistical relationships exhibited by training data (para 0035, 0038, and 0049 – 0051)].

Regarding claim 16, the system of claim 11, wherein the executable instructions, when executed by the one or more processors, further cause the computer system to: analyze, using one or more statistical modeling algorithms stored in the one or more non-transitory program memories, the first collection of data [i.e., a statistical model can be trained (page 7, para 0076)], the one or more statistical modeling algorithms including a regression model [i.e., a Bayesian model of likelihood (page 7, para 0076)].

Regarding claim 17, the system of claim 11, wherein the at least one area of interest of cybersecurity includes one or more of: ransomware attacks [i.e., catastrophic breaches or significant cybersecurity events (page 2, para 0023)], denial of service attacks [i.e., data breaches or cybersecurity events (page 2, para 0023)], social engineering attacks [i.e., shared attributes (para 0045); data breaches or cybersecurity events (page 2, para 0023)], password attacks [i.e., authentication features (para 0073)], cloud attacks [i.e., data breaches or cybersecurity events (page 2, para 0023)], near misses [i.e., data breaches or cybersecurity events (page 2, para 0023)], or threat trends [i.e., data breaches or cybersecurity events (page 2, para 0023)].

Regarding claim 18, the system of claim 11, wherein the executable instructions, when executed by the one or more processors, further cause the computer system to: train a second machine learning model using a second training dataset related to at least one area of interest of cybersecurity [i.e., a supervised machine learning model is trained (page 2, para 0022); learning during a training period using training data (page 4, para 0034); company attribute information can be input to attribute module 105 during the training period (page 4, para 0035)], the second training dataset comprising outcome information and one or more of: (i) academic training data [i.e., technical and non-technical data (page 3, para 0032)], (ii) open internet training data [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)], or (iii) corporate training data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)]; store the second machine learning model in the one or more memories [i.e., in operation, system 100 can "learn" during a training period using training data to build a supervised training model, and the result of the learning, i.e., the supervised training model, is then used to monitor whether new data exhibits the same patterns, categories, and statistical relationships (page 4, para 0035); memory 104 stores modules (para 0029) (see figure 1); note: the supervised model must be stored in memory]; and identify, using the second machine learning model stored in the one or more memories, a second collection of data [i.e., new data (page 4, para 0035); company attribute information can be input to attribute module 105 during the analysis period (page 4, para 0035) (see figure 1); data feeds of real-time reports on data breaches (page 2, para 0023)], the second collection of data including one or more of (i) academic data [i.e., technical and non-technical data (page 3, para 0032)], (ii) open internet training data [i.e., scraping online information from websites and news sources (page 3, para 0032); data feeds of real-time reports on data breaches (page 2, para 0023)], or (iii) corporate training data [i.e., company attribute information (page 4, para 0035); attributes of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)].
Regarding claim 20, a tangible, non-transitory computer-readable medium storing executable instructions [i.e., memory 104 stores executable instructions to perform by the processor to perform the following steps (para 0029), (see figure 1)] for predicting the time to replace one or more vehicle seats, the instructions, when executed by one or more processors of a computer system, cause the computer system to: train a first machine learning model using a first training dataset [i.e., a supervised machine learning model is trained (page 2, para 0022) i.e., learn during a training period using training data (page 4, para 0034) i.e., company attribute information can be input to attribute module 105 during training period (page 4, para 0035)] related to at least one area of interest of cybersecurity, the first training dataset comprising outcome information [i.e., data feeds of real-time reports on data breaches are analyzed, cybersecurity features most relevant to recent breach scenarios are identified, and a probability of a catastrophic breach occurring is predicted based on the prevalence of the identified cybersecurity feature (page 2, para 0023) i.e., the result of learning (page 4, para 0035)] and one or more of: (i) academic training data [i.e., technical and non-technical data (page 3, para 0032)], (ii) open internet training data [i.e., scraping online information from websites and news source (page 3, para 0032) i.e., data feeds of real-time reports on data breach (page 2, para 0023)], or (iii) corporate training data [i.e., company attribute information (page 4, para 0035) i.e., attribute of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)]; store the first machine learning model in one or more memories [i.e., in operation, system 100 can “learn” during a training period using training data to build a supervised training model and the result of the learning i.e., the supervised training model is then used to monitor whether new data exhibits the same pattern, categories, statistical relationship (page 4, para 0035) i.e., memory 104 stores modules (para 0029), (see figure 1) Note; the supervised model must be stored in memory]; retrieve a first collection of data [i.e., new data (page 4, para 0035) i.e., company attribute information can be input to attribute module 105 during analysis period (page 4, para 0035), (see figure 1) i.e., data feeds of real-time reports on data breaches (page 2, para 0023)], the first collection of data including one or more of academic data [i.e., technical and non-technical data (page 3, para 0032)], open internet data [i.e., scraping online information from websites and news source (page 3, para 0032) i.e., data feeds of real-time reports on data breach (page 2, para 0023)], or corporate data i.e., company attribute information (page 4, para 0035) i.e., attribute of a company can be proprietary, technical, and non-technical data relating to a company…scraping data from corporate filings (page 3, para 0032)], and the first collection of data is related to the at least one area of interest of cybersecurity [i.e., data breaches (page 2, para 0023); analyze using the first machine learning model stored in the one or more memories, the first collection of data [i.e., a machine learning model is utilized to identify the most significant cybersecurity event…(page 2, para 0022) i.e., data feeds of real-time reports on data breaches are analyzed…a probability of a 
catastrophic breach occurring …is predicted…(page 2, para 0023) i.e., the result of the leaning is then used to monitor whether new data exhibits the same patterns, categories, statistical relationships (page 4, para 0035)]; and generate based upon the analysis, a resulting output [i.e., output of Cybersecurity Risk Level Module 108 (page 5, para 0051), (see figure 1) i.e., Cybersecurity Risk Level Module 108 quantifies a portfolio’s cybersecurity risk level based on the multiplier generated by multiplier module 107 (page 5, para 0049), (see figure 1) i.e., the multiplier is generated from data gathered using machine learning techniques discussed herein (page 7, para 0076)], the resulting output including one or more of: a strength of a cybersecurity strategy of an organization [i.e., utilize a machine learning model to quantify a portfolio’s cybersecurity risk (page 4, para 0034) i.e., correlate a portfolio’s risk of experiencing an adverse cybersecurity event (page 2, para 0023)], a recommendation of a change to a cybersecurity strategy of an organization [i.e., output of Cybersecurity Risk Level Module 108 can be utilized by action module 109 to generate steps that, if executed, will change the portfolio’s cybersecurity risk level (page 5, para 0051), (see figure 1) i.e., company’s cybersecurity posture (page 2, para 0024)], or a predicted outcome given a cybersecurity strategy of an organization [i.e., output of Cybersecurity Risk Level Module 108 (page 5, para 0051), (see figure 1) i.e., correlate a portfolio’s risk of experiencing an adverse cybersecurity event (page 2, para 0023)]. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over MO in view of Sabes et al., (US 11,817,214 B1) (hereinafter “Sabes”). Regarding claim 10, MO discloses; the method of claim 1 [i.e., (see claim 1 above)]. 
MO does not disclose; wherein: training the first machine learning model comprises: reducing, by the one or more processors, the percent rate of error of generating the resulting output by calculating one or more of: (i) the ordinary least squares of the difference between the generated resulting output and the actual resulting output of the first training data set, or (ii) the ordinary mean square of an aggregation of results between the generated resulting output and the actual resulting output of the first training data set; and generating, by the one or more processors, a confidence interval based upon one or more of: (i) the generated resulting output, (ii) the actual resulting output of the first training data set, and/or (iii) one or more standard deviations from the aggregated result. However, Sabes discloses; wherein: training the first machine learning model comprises: reducing, by the one or more processors, the percent rate of error of generating the resulting output [i.e., determining loss/error based on the difference between an estimate and a label and updating model parameters to reduce the error (col. 21, line 16 – 27)] by calculating one or more of: (i) the ordinary least squares of the difference between the generated resulting output and the actual resulting output of the first training data set [i.e., ordinary least squares regression (OLSR) (col. 20, lines 60 – 65)], or (ii) the ordinary mean square of an aggregation of results between the generated resulting output and the actual resulting output of the first training data set [i.e., mean squared error (L2 loss) as a loss function used during model training (col. 21, lines 27 – 34)]; and generating, by the one or more processors, a confidence interval based upon one or more of: (i) the generated resulting output, (ii) the actual resulting output of the first training data set, and/or (iii) one or more standard deviations from the aggregated result [i.e., ML outputs including a confidence and/or confidence interval, including embodiments with separate output nodes for prediction and confidence (col. 9, lines 8 – 35)]. Before the effective filing date of the claimed invention it would have been obvious to a person of ordinary skill in the art to modify teachings of MO by adapting the teachings of Sabes to improve the accuracy, reliability, and interpretability of the machine learning model (See Sabes; col. 1, lines 51 – 61). Regarding claim 19, MO discloses; the system of claim 11 [i.e., (see claim 11 above)]. MO does not disclose; wherein: training the first machine learning model comprises: reducing, by the one or more processors, the percent rate of error of generating the resulting output by calculating one or more of: (i) the ordinary least squares of the difference between the generated resulting output and the actual resulting output of the first training data set, or (ii) the ordinary mean square of an aggregation of results between the generated resulting output and the actual resulting output of the first training data set; and generating, by the one or more processors, a confidence interval based upon one or more of: (i) the generated resulting output, (ii) the actual resulting output of the first training data set, and/or (iii) one or more standard deviations from the aggregated result. 
However, Sabes discloses: wherein training the first machine learning model comprises: reducing, by the one or more processors, the percent rate of error of generating the resulting output [i.e., determining loss/error based on the difference between an estimate and a label and updating model parameters to reduce the error (col. 21, lines 16 – 27)] by calculating one or more of: (i) the ordinary least squares of the difference between the generated resulting output and the actual resulting output of the first training data set [i.e., ordinary least squares regression (OLSR) (col. 20, lines 60 – 65)], or (ii) the ordinary mean square of an aggregation of results between the generated resulting output and the actual resulting output of the first training data set [i.e., mean squared error (L2 loss) as a loss function used during model training (col. 21, lines 27 – 34)]; and generating, by the one or more processors, a confidence interval based upon one or more of: (i) the generated resulting output, (ii) the actual resulting output of the first training data set, and/or (iii) one or more standard deviations from the aggregated result [i.e., ML outputs including a confidence and/or confidence interval, including embodiments with separate output nodes for prediction and confidence (col. 9, lines 8 – 35)].

Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the teachings of MO by adapting the teachings of Sabes to improve the accuracy, reliability, and interpretability of the machine learning model (see Sabes; col. 1, lines 51 – 61).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SYED A RONI, whose telephone number is (571) 270-7806. The examiner can normally be reached M-F, 9:00 am - 5:00 pm (EST).

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey L Nickerson, can be reached at (469) 295-9235. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SYED A RONI/
Primary Examiner, Art Unit 2432
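The loss-function and confidence-interval language mapped against claims 10 and 19 above tracks standard regression practice. As a point of reference only, here is a minimal Python sketch of those computations; the function names and toy data are illustrative assumptions, not the applicant's claimed implementation and not Sabes's disclosure.

```python
# Illustrative sketch only: standard error measures matching the
# claim 10/19 language -- least-squares sum of differences, mean
# squared error, percent rate of error, and a confidence interval
# built from standard deviations of the model output.
import numpy as np

def error_measures(generated: np.ndarray, actual: np.ndarray):
    """Error terms recited in the claims, computed the standard way."""
    residuals = generated - actual
    ols = float(np.sum(residuals ** 2))    # (i) least squares of the differences
    mse = float(np.mean(residuals ** 2))   # (ii) mean square of the aggregated results
    pct = float(np.mean(np.abs(residuals) / np.abs(actual))) * 100  # percent rate of error
    return ols, mse, pct

def confidence_interval(generated: np.ndarray, k: float = 1.96):
    """Interval spanning k standard deviations around the mean output."""
    mu, sigma = float(np.mean(generated)), float(np.std(generated))
    return mu - k * sigma, mu + k * sigma

# Toy usage with made-up risk-level outputs
gen = np.array([0.72, 0.55, 0.90, 0.33])
act = np.array([0.70, 0.60, 0.85, 0.40])
print(error_measures(gen, act))
print(confidence_interval(gen))
```

In a gradient-based trainer, the MSE term is the quantity typically minimized; the interval function corresponds to the claims' "one or more standard deviations from the aggregated result" reading.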

Prosecution Timeline

Jun 05, 2024
Application Filed
Jan 30, 2026
Non-Final Rejection — §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this examiner in similar technology areas

Patent 12591684
CENTRALIZED SECURITY ANALYSIS AND MANAGEMENT OF SOURCE CODE IN NETWORK ENVIRONMENTS
2y 5m to grant · Granted Mar 31, 2026
Patent 12574354
CLIENT FILTER VPN
2y 5m to grant · Granted Mar 10, 2026
Patent 12572379
Static Trusted Execution Environment for Inter-Architecture Processor Program Compatibility
2y 5m to grant · Granted Mar 10, 2026
Patent 12561420
SYSTEM AND METHOD FOR AUTHENTICATING USERS VIA PATTERN BASED DIGITAL RESOURCES ON A DISTRIBUTED DEVELOPMENT PLATFORM
2y 5m to grant · Granted Feb 24, 2026
Patent 12547760
METHOD FOR EVALUATING THE RISK OF RE-IDENTIFICATION OF ANONYMISED DATA
2y 5m to grant · Granted Feb 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants above.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+22.0%)
2y 9m
Median Time to Grant
Low
PTA Risk
Based on 655 resolved cases by this examiner. Grant probability derived from career allow rate.
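For readers checking how these headline figures relate, a minimal sketch of the apparent arithmetic follows. It assumes the grant probability is the career allow rate (537 granted / 655 resolved) rounded to a whole percent, and that the +22.0% interview lift is applied multiplicatively and capped at 99%; these are reading assumptions, not a formula documented by the source.

```python
# Sketch of how the projection figures appear to be derived.
# Assumptions (not documented): grant probability = career allow rate
# rounded to a whole percent; the interview lift multiplies that rate
# and the result is capped at 99%.
granted, resolved = 537, 655
career_allow_rate = granted / resolved               # ~0.8198 -> "82%"
interview_lift = 0.22                                # +22.0% lift with interview
with_interview = min(career_allow_rate * (1 + interview_lift), 0.99)

print(f"Grant probability: {career_allow_rate:.0%}")  # 82%
print(f"With interview:    {with_interview:.0%}")     # 99%
```

Under this reading, the 82% and 99% cards are internally consistent with the examiner's 537/655 career record.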
