Prosecution Insights
Last updated: April 19, 2026
Application No. 18/116,404

SYSTEMS AND METHODS FOR CREATING AND COMMISSIONING A SECURITY AWARENESS PROGRAM

Non-Final OA — §101, §103, §DP

Filed: Mar 02, 2023
Examiner: LEE, PO HAN
Art Unit: 3623
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: KnowBe4, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 32% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 6m
Grant Probability With Interview: 74%

Examiner Intelligence

Career Allow Rate: 32% (51 granted / 158 resolved) — grants only 32% of cases, -19.7% vs TC average
Interview Lift: +41.2% — strong lift in allowance among resolved cases with an interview versus without
Typical Timeline: 3y 6m average prosecution; 50 applications currently pending
Career History: 208 total applications across all art units
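
The headline numbers above follow from simple ratios. As a sanity check, here is a minimal Python sketch of that arithmetic; the 51 granted / 158 resolved counts come from the panel, while the with/without-interview split below is hypothetical, chosen only to show how a lift of roughly +41 percentage points would be derived.

```python
# Minimal sketch of the examiner-statistics arithmetic shown above.
# The 51/158 counts come from the panel; the interview split is
# HYPOTHETICAL and only illustrates how a lift figure is derived.

def allow_rate(granted: int, resolved: int) -> float:
    """Allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(51, 158)             # ~32.3%, displayed as 32%

# Hypothetical interview split (totals still sum to 51 / 158).
with_interview = allow_rate(31, 52)      # ~59.6%
without_interview = allow_rate(20, 106)  # ~18.9%
lift = with_interview - without_interview

print(f"career allow rate: {career:.1f}%")    # 32.3%
print(f"interview lift:    {lift:+.1f} pts")  # about +41 pts
```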

Statute-Specific Performance

Statute   Allow Rate   vs TC Avg
§101      40.9%        +0.9%
§103      31.3%        -8.7%
§102      11.4%        -28.6%
§112      14.8%        -25.2%

Black line = Tech Center average estimate. Based on career data from 158 resolved cases.
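
Each row above pairs the examiner's per-statute rate with a delta against the Tech Center average (the black line). A short sketch, using only the displayed values, recovers the implied baseline; notably, every row implies the same ~40% Tech Center estimate.

```python
# Sketch recovering the Tech Center baseline implied by each row above:
# examiner rate minus the displayed "vs TC avg" delta. Values are
# transcribed from the panel, not computed from raw USPTO data.

rows = {
    "§101": (40.9, +0.9),
    "§103": (31.3, -8.7),
    "§102": (11.4, -28.6),
    "§112": (14.8, -25.2),
}

for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta  # every row implies the same ~40.0% baseline
    print(f"{statute}: examiner {rate:.1f}% vs TC avg {tc_avg:.1f}% ({delta:+.1f} pts)")
```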

Office Action

§101 §103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of the Application

The following is a non-final Office Action. In response to the Examiner's communication of 8/27/2025, Applicant responded on 11/17/2025, amending claims 1 and 11. Claims 1-20 are pending in this application and have been examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/17/2025 has been entered.

Response to Amendment

Applicant's amendments to claims 1 and 11 are not sufficient to overcome the §101 rejections set forth in the previous action. Applicant's amendments to claims 1 and 11 are also not sufficient to overcome the prior art rejections set forth in the previous action.

Response to Arguments – Double Patenting

Applicant's arguments with respect to the rejections have been fully considered, but they are not persuasive. Applicant submits, "Applicant requests the Examiner to hold this rejection in abeyance until allowable subject matter is identified in the present application." In response, the Examiner notes that MPEP § 804 states:

"A complete response to a nonstatutory double patenting (NDP) rejection is either a reply by applicant showing that the claims subject to the rejection are patentably distinct from the reference claims or the filing of a terminal disclaimer in accordance with 37 CFR 1.321 in the pending application(s) with a reply to the Office action (see MPEP § 1490 for a discussion of terminal disclaimers). Such a response is required even when the nonstatutory double patenting rejection is provisional. As filing a terminal disclaimer, or filing a showing that the claims subject to the rejection are patentably distinct from the reference application's claims, is necessary for further consideration of the rejection of the claims, such a filing should not be held in abeyance. Only objections or requirements as to form not necessary for further consideration of the claims may be held in abeyance until allowable subject matter is indicated." (Emphasis added.)

Since Applicant's response to the nonstatutory double patenting rejection has neither shown that the claims subject to the rejection are patentably distinct from the reference claims nor included the filing of a terminal disclaimer, as required by the MPEP, the nonstatutory double patenting rejection is maintained below.

Response to Arguments – 35 USC § 101

Applicant's arguments with respect to the rejections have been fully considered, but they are not persuasive. Applicant submits, "…the Examiner cannot maintain the above elements as mental processes. There are non-mental technical implementations of: (i) simulated phishing communications being sent to devices and being interacted with electronically by users and (ii) an electronic calendar with an improved graphical user interface (representation) to dynamically display the progress of (i).
Because the Claims as a whole are not directed to a mental process and thus not a judicial exception, the eligibility analysis should stop here and the Claims determined to be patent eligible. … These Claims improve the functionality of the technology of electronic calendars integrated with simulated phishing communication technology, thus improving both of these technologies themselves as well as the functioning of the computer. … The Claims effect a transformation of a particular article to a different state or thing, as the Claims transform data from execution of the simulated phishing campaigns (or the alleged observations or evaluations) to a different state or thing by a graphic representation in a user interface of an electronic calendar that provides an innovative way to visualize and represent such data from the simulated phishing technology. In view of these additional elements of the Claims, any judicial exception should be found to be integrated into a practical application, and thus the Claims should be found patent eligible, thereby concluding the eligibility analysis. … Applicant submits these Claim elements are not conventional or routine and not previously known to the industry. … Applicant submits that the Claims recite specific limitations beyond the judicial exception that are not 'well-understood, routine, conventional' in the field and thus the Claims should be found patent eligible for this reason as well…"

The Examiner respectfully disagrees. The claims, and the argued elements, are directed to the scheduling and tracking of security awareness, which is a problem of organizing human activity (i.e., humans organizing and scheduling campaigns on paper calendars in order to observe human behavior, evaluate human behavior, and mitigate human social engineering risks) and a mental process (i.e., humans observing human interactions in training scenarios, evaluating human behaviors in training scenarios, and scheduling training events on paper calendars for the humans observed to need additional training), as established in Step 2A Prong 1. This problem does not specifically arise in the realm of computer technology; rather, it existed and was addressed long before the advent of computers. Thus, the claims do not recite a technical improvement to a technical problem, nor are they necessarily rooted in computing technology.

Additionally, pursuant to the broadest reasonable interpretation, as an ordered combination, each of the additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus the additional elements are no more than applying the abstract idea with generic computer components. Further, these additional elements generally link the abstract idea to a technical environment, namely the environment of a computer and user interface, performing extra-solution activities. Therefore, as a whole, the additional elements do not integrate the abstract ideas into a practical application under Step 2A Prong 2. Furthermore, as a whole, the additional elements do not amount to significantly more under Step 2B, since the additional elements are no more than mere instructions to implement the idea using generic computer components (i.e., "apply it"), and the additional elements append the recited abstract idea to well-understood, routine, and conventional activities in the field, as individually evinced by Applicant's own disclosure, as required by the Berkheimer Memo; see at least [0075].
As stated in the MPEP, "an improvement in the abstract idea itself ... is not an improvement in technology." MPEP 2106.05(a). Mere automation of a manual process, or a business method being applied on a general purpose computer, is not sufficient to show an improvement in computers or other technology, and the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. MPEP 2106.05(a). Further, "the transformation is extra-solution activity or a field-of-use (i.e., the extent to which (or how) the transformation imposes meaningful limits on the execution of the claimed method steps). A transformation that contributes only nominally or insignificantly to the execution of the claimed method (e.g., in a data gathering step or in a field-of-use limitation) would not provide significantly more (or integrate a judicial exception into a practical application)." MPEP 2106.05(c). Thus, Applicant's claims do not recite an improvement in technology or integrate the judicial exception into a practical application, but rather recite mental processes and certain methods of organizing human activity implemented using generic computer components. The limitations are abstract elements that are part of and directed to the recited abstract idea as described above with respect to the first prong of Step 2A, i.e., a mental process and organizing human activity, generally linked to a technical environment, i.e., a computer and user interface. Even novel and newly discovered judicial exceptions are still exceptions, despite their novelty. July 2015 Update, p. 3; see SAP America, Inc. v. InvestPic, LLC, No. 2017-2081, slip op. at 2 (Fed. Cir. May 15, 2018). Simply reciting specific limitations that narrow the abstract idea does not make an abstract idea non-abstract. 79 Fed. Reg. 74631; buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355 (Fed. Cir. 2014); see SAP America at p. 12. As discussed in SAP America, no matter how much of an advance the claims recite, when "the advance lies entirely in the realm of abstract ideas, with no plausibly alleged innovation in the non-abstract application realm," "[a]n advance of that nature is ineligible for patenting." Id. at p. 3.

Response to Arguments – Prior Art

Applicant's arguments with respect to the rejections have been fully considered, but they are not persuasive. Applicant submits, "…The combination of Hawthorn and Dion fails to teach or suggest: (i) generating, in an electronic calendar, graphical representations of simulated phishing campaigns according to a schedule, each selectable to display campaign metrics and configured to be updated in real-time as each campaign progresses; (ii) automatically executing simulated phishing campaigns according to a schedule, receiving indications of users clicking on links, and identifying a percentage of users who are phish-prone; and (iii) automatically updating, in real-time, the graphical representations in the electronic calendar as each campaign progresses, showing execution status and metrics as users click on links. In contrast, Hawthorn provides dashboards, reports, and campaign summaries, but does not generate or update graphical representations of campaigns in an electronic calendar, nor does it provide selectable calendar entries that display campaign metrics or update in real-time as campaigns progress. Hawthorn's outputs are lists, charts, and scores, not calendar-based, real-time, interactive campaign visuals.
In further contrast, Dion provides an event calendar for scheduling and reviewing training events, but does not generate graphical representations of simulated phishing campaigns, does not update such representations in real-time, and does not display campaign metrics or execution status in the calendar. Dion's calendar is for static event registration and review, not for real-time campaign tracking or metric display. Thus, the combination of Hawthorn and Dion does not perform what is claimed, but instead does something else: Hawthorn provides static dashboards and reports outside of a calendar context, and Dion provides a static event calendar unrelated to simulated phishing campaigns or real-time metric updates. For at least these reasons, the combination of Hawthorn and Dion fails to teach or suggest each and every element of independent Claims 1 and 11…"

The Examiner respectfully disagrees. Respectfully, Applicant's argument requires that each of the features of the supporting references be bodily incorporated into the primary reference and that each and every element be individually taught by a single reference. However, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981). The test for obviousness is not whether the features of a secondary reference may be bodily incorporated into the structure of the primary reference; nor is it that the claimed invention must be expressly suggested in any one single reference or in all of the references. See id. Rather, the test is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See id.; In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).

Under the broadest reasonable interpretation, Hawthorn teaches "generate in an … each of one or more graphical representations of each of the one or more simulated phishing campaigns according to the schedule, each of the one or more graphical representations selectable to display metrics of a corresponding simulated phishing campaign of the one or more phishing campaigns and configured to be updated in real-time as each of the one or more simulated phishing campaigns progresses" (in at least:

[0068] The "Dashboard" widget 310 may link to and display data including summary information such as (but not limited to) a list of campaigns that have completed, a list of campaigns that are in progress, a list of campaigns that are scheduled for a future start date; performance data for one or more entities with respect to a given campaign and/or across multiple campaigns; performance data for one or more entities with respect to other entities for similar campaigns; and/or the like. Campaign similarity may be determined based on security item and/or training item sophistication scores, number of security items 112 and/or training items 124 presented to users of user device 104, 106, and/or the like.

[0117] FIG. 7, a campaign manager 206 may present campaign delivery options 722 via the interactive environment 202. Campaign delivery options 722 may be displayed within the campaign area 324, within a new window, and/or the like. The campaign delivery options 722 may allow a user of security system 102 to configure delivery parameters associated with a campaign. A first delivery option 714 may schedule a campaign for immediate delivery.
For example, as soon as the user of security system 102 finalizes and saves a campaign, the security item generator 206 may automatically generate a security item 112 and/or training item 124 to be transmitted to the designated recipients at user system 104, 106 based on a template 114 included in the campaign. A second delivery option 716 may allow a user of security system 102 to enter a starting date and/or time and/or an ending date and/or time. When the specified start date and/or time occurs, the security item generator 206 may automatically generate and transmit a security item 112 and/or training item 124 to designated recipients at user system 104, 106 based on at least a template 114 included in the campaign. A third delivery option 718 may allow a user of security system 102 to select a staggered delivery of the campaign. When a user of security system 102 specifies an end date and/or time 720 for delivery, campaign generation and delivery may occur until that date and/or time.

[0206] The user risk calculator 212 may analyze the security item interaction data 132, the training item interaction data 134, user property data 136, and/or the technical information 138 of a given user with respect to a set of risk scoring metrics 126. FIGS. 13 and 14 illustrate various example risk scoring metrics. For example, FIG. 13 shows examples of risk scoring metrics based on technical information and FIG. 14 shows examples of risk scoring metrics based on user security item interaction data 132 and user training item interaction data 134. A user risk score may be calculated at various granularities, such as for each security item 112 and/or training item 124, a campaign in progress, the most recent campaign completed, all completed campaigns, and/or the like.

[0203] The user risk calculator 212 may compare the security item interaction data 132, the training item interaction data 134, user property data 136, and/or user technical information 138 collected for a given user with the set of risk scoring metrics 126 to calculate a risk score for the user. Collected data may be from a campaign currently in progress, the most recent campaign completed, all completed campaigns, and/or the like. For example, the user risk calculator 212 may determine that a user has a vulnerable plug-in installed on user system 104, 106 during a campaign. As another example, a user risk calculator 212 may determine, based on feedback from a campaign, that a user has browser software running with outdated versions. A comparison may be done to determine whether the versions of the software contain vulnerabilities and, if so, this may increase a risk score. As another example, a user risk calculator 212 may determine, based on feedback from a campaign, that a user consistently uses a mobile device and public WiFi networks. Accordingly, the risk score calculator 212 may determine that this feedback increases a risk score. As another example, risk calculator 212 may determine, based on feedback from a campaign, that a user associated with user system 104, 106 has a large social media imprint (e.g., accesses social media platforms with a particular frequency). Accordingly, the risk calculator 212 may determine that this feedback increases a risk score. As another example, risk calculator 212 may determine, based on feedback from a campaign, that a user associated with user system 104, 106 is operating on a known malicious IP address and/or network and/or a country and/or region of origin.
Accordingly, the risk calculator 212 may determine that this feedback increases a risk score. Accordingly, in these examples, a risk score of the recipient user may be altered, such as according to the metrics in FIG. 13.)

Hawthorn further teaches "automatically execute according to the schedule the one or more simulated phishing campaigns to communicate simulated phishing communications to devices of users of the entity, receive indications of users clicking on the one or more links in the simulated phishing communications and identify from the received indications a percentage of users of the entity who are phish-prone; and" (in at least:

[0137] A security item 112 and/or training item 124 campaign may be manually started by a user of security system 102 or automatically started based on scheduling parameters. If a campaign is started automatically, the campaign manager 204 may identify the scheduling parameters associated with a campaign from the campaign profile 122 of the campaign. The campaign manager 204 may monitor for a temporal condition to occur that satisfies the scheduling parameters. For example, if a scheduling parameter states that the campaign is to start on Date_A at Time_A, when the campaign manager 204 detects that Date_A at Time_A occurs, the campaign manager 204 may automatically start the campaign.

[0208] If a user risk calculator 212 determines that the user opened a security-threat-based message in the campaign, clicked on a security-based threat in the campaign, and entered personal and/or confidential information into a simulated security-based threat, the risk score of the recipient user may be altered by seventeen (17) according to the metrics in FIG. 14. If the user risk calculator 212 determines that the user completed three (3) training sessions during a campaign based on the user training item data 134, a risk score of the user is altered by 5%.

[0212] The calculated risk scores may be used to perform various actions. For example, the risk assessment manager 110 may use calculated risk scores to influence, guide, and/or determine a frequency and/or sophistication level of future security items 112 and/or training items 124. For example, risk assessment manager 110 may increase the frequency of presenting security items 112 and/or training items 124 to a user who has a higher risk score than for a user with a lower risk score.

[0222] 1512 may display the title 1514 of the campaign; the number of times 1516 each security item 112 and/or training item 124 in the campaign was sent; the number of times 1518 each security item 112 and/or training item 124 included a predefined action (e.g., open a message, click on a link, watch a video, attempt a password generation, and/or the like); a number of detected vulnerabilities 1520 (incorrect answers, incorrect interactions, and/or the like); the number of times 1522 each message resulted in a security compromise (e.g., recipient entered personal and/or confidential information, downloaded an insecure item, clicked on an insecure link, etc.); the number of multiple security compromises 1524 in each security item 112 and/or training item 124 for the same user (e.g., a user clicks on multiple insecure links, a user downloads multiple insecure items, a user answers multiple questions incorrectly, a combination of different security compromising actions, and/or the like).

[0226] FIG. 17 shows another example of information that may be displayed to the user of security system 102 as part of the campaign summary and/or report. For example, FIG. 17 illustrates an overall risk score 1702 that has been calculated for the client when compared to other clients subscribing to the risk assessment manager 110. A client's overall risk score may be based on the risk scores associated with its employees. A client's overall risk score may be calculated based on the metrics discussed above with respect to FIG. 15 (e.g., open/interactions/vulnerable/trained/reported/compromised).

[0231] FIG. 19 illustrates another report that may be presented to the user in the interactive environment 202. In the example shown in FIG. 19, a list of groups 1902 within the client may be displayed. This list may identify each group and the number of employees in each group. When a user selects a group from this list 1902, the data presenter 218 may display the name 1904 of the group, the number of users 1906 in the group, and the risk score 1908 of the group. The data presenter 218 may also display a list 1910 of each employee within the group. The employee's communication address 1912, first name 1914, last name 1916, the date 1918 the employee was added to the campaign, and risk score 1920 may also be displayed to the user. The user may select one of the employees to view the statistics of the user for one or more campaigns or for all of the campaigns as a whole.)

Hawthorn further teaches "automatically update in real-time the one or more graphical representations of each of the one or more simulated phishing campaigns, while displayed in the … according to the schedule as each of the one or more simulated phishing campaigns progresses, a status of execution and metrics of the one or more simulated phishing campaigns as the one or more simulated phishing campaigns progresses with users clicking on the one or more links in the simulated phishing communications" (in at least:

[0068] The "Dashboard" widget 310 may link to and display data including summary information such as (but not limited to) a list of campaigns that have completed, a list of campaigns that are in progress, a list of campaigns that are scheduled for a future start date; performance data for one or more entities with respect to a given campaign and/or across multiple campaigns; performance data for one or more entities with respect to other entities for similar campaigns; and/or the like. Campaign similarity may be determined based on security item and/or training item sophistication scores, number of security items 112 and/or training items 124 presented to users of user device 104, 106.

[0203] Item adjuster 218 may dynamically determine the type of security item 112 and/or training item 124 to be sent to the recipients based on a performance history with respect to previous security items 112 and/or training items 124 in the campaign and/or previous campaigns; a risk score; a role within the company; a company group; and/or the like. For example, if a user has successfully interacted with previous security items 112 and/or training items 124 at a given sophistication level, item adjuster 218 may dynamically update the campaign such that this user starts to receive security items 112 and/or training items 124 of a higher sophistication level (e.g., more difficult security-based questions, more legitimate-looking simulated phishing messages, etc.).

[0206] FIG. 13 shows examples of risk scoring metrics based on technical information and FIG. 14 shows examples of risk scoring metrics based on user security item interaction data 132 and user training item interaction data 134.
A user risk score may be calculated at various granularities, such as for each security item 112 and/or training item 124, a campaign in progress, the most recent campaign completed, all completed campaigns, and/or the like.

[0207] The user risk calculator 212 may compare the security item interaction data 132, the training item interaction data 134, user property data 136, and/or user technical information 138 collected for a given user with the set of risk scoring metrics 126 to calculate a risk score for the user. Collected data may be from a campaign currently in progress, the most recent campaign completed, all completed campaigns, and/or the like.

[0208] If a user risk calculator 212 determines that the user opened a security-threat-based message in the campaign, clicked on a security-based threat in the campaign, and entered personal and/or confidential information into a simulated security-based threat, the risk score of the recipient user may be altered by seventeen (17) according to the metrics in FIG. 14. If the user risk calculator 212 determines that the user completed three (3) training sessions during a campaign based on the user training item data 134, a risk score of the user is altered by 5%.

[0221] The report area 1504 may present the user with a list of campaigns 1504 associated with one or more clients that the user is authorized to view. The user also may be presented with one or more options for selecting which campaigns are displayed. For example, a filtering option 1506 may allow a user of security system 102 to enter dates/times, which results in only the campaigns matching these criteria being displayed (or filtered out). Another filtering option 1508 may allow a user to select all of the campaigns that are currently pending, running, or completed to be displayed.

[0222] …the number of multiple security compromises 1524 in each security item 112 and/or training item 124 for the same user (e.g., a user clicks on multiple insecure links, a user downloads multiple insecure items, a user answers multiple questions incorrectly, a combination of different security compromising actions, and/or the like); the number of users 1526 considered to have been "trained" during the campaign; the number of times 1528 users reported an applicable security item 112 and/or training item 124 to an administrator, manager, etc.; the starting date 1530 of the campaign; the stopping date of the campaign; the status 1532 of the campaign (e.g., pending, running, completed, etc.); the user 1534 who created the campaign; and/or the like. Each campaign may have different reporting items than the reporting items listed above.

[0229] FIG. 17 illustrates a graph 1706 that may be displayed to show a client's risk score over time. In this example, the user may be able to select a temporal-based filter 1708 to see how a client's risk score changed on a minute, hourly, daily, weekly, and/or monthly basis. FIG. 17 also illustrates a time distribution 1710 of user interactions with security items 112 and/or training items 124 during the selected campaign. In this example, a time distribution 1710 may display a year's worth of data, each discrete division representing days and further months. As an example, various graphical features may be used to illustrate campaign reporting. For example, darker shading may indicate more interactions with security items 112 and/or training items 124 on a particular day.
This may be expanded to a Month/Week/Day view and allow a viewer to identify when users are more likely to interact with a security item 112 and/or training item 124, such as early morning, late at night, at home vs. at the office, etc.)

Although implied, Hawthorn does not expressly disclose the following limitations, which, however, are taught by Dion: "…based on the comparison…" (in at least:

[0006] A health care professional can take a competency test on a particular topic and input his/her responses at the remote provider system. The health care professional's responses are evaluated, and an assessment of his/her skills displayed at the provider system. The assessment particularly points out those areas, if any, where the health care professional's knowledge is deficient. If the health care professional has any areas which need improvement, a list of relevant courses is also displayed at the provider system. The health care professional may then select a desired course from the user interface.

[0060] An administrator can, by selecting, allow a student to challenge the present course by testing without completing the course, with a test-out score box for the administrator to enter the minimum score acceptable if the student is to test out, and a course view before the test?)

and "…electronic calendar…" (in at least:

[0061] FIG. 6 shows an event calendar interface screen for scheduling and reviewing events. A computer program product delivered to users and administrators is represented by a calendar/my events interface screen 199. The robustness of the education competency and compliance management method, system, and computer product is exhibited by the user choices for scheduling and tracking calendar events on a my events interface screen 199. The displayed required events are generated from the administrator grouping of an individual, individual-specific items by administration, and items desired by the individual for advancement and enrichment. The navigation bar 153 shows the most recent navigation selection to be My Events. Though not shown in this figure, the navigation area 151 is included with screen 199. Immediately below bar 153 is a screen title area 201 displaying "Registered Events for: user name here". The user name has been omitted for discussion purposes. The system will display the events selected for the user name that has logged into the system. Below area 201 is an area 203 with instructions "These are the events for which you are currently registered. To view or register for additional events, go to the Event Calendar"… To the right of area 207 is an update button 209 that allows a user to update the current calendar. Just above button 209 is the spot 157 that allows administrators to make changes to the present page instructions. The display metaphor of my events is a calendar 217; the currently selected month is displayed in an area 213, and the current example displays "May 2007". The user has a simple task to review past events by selecting a button 211 titled "<<Prior". If the user wishes to view future commitments, a button 215 titled "Next >>" is provided.

[0064] Below area 245 is an area 249 displaying "Report Contents" on the left of the screen 247. To the right of area 249 is a button 259 displaying "Submit". At the bottom of screen 247 is an area 2491 with a duplicate button 259 displaying "Submit". The report contents are selected from area 251 displaying "Default Report Fields:". Specifically, to the right of area 251 is an area 253 with a list of options to be included as defaults in the intended report.
These options include: User Name, Course, Test Pass Date, Expire Date, Taken?, Expired?, Passed?, and Score.)

Dion also teaches "generate in an electronic calendar each of one or more graphical representations of each of the one or more simulated … campaigns according to the schedule, each of the one or more graphical representations selectable to display metrics of a corresponding simulated … campaign of the one or more … campaigns and configured to be updated in real-time as each of the one or more simulated … campaigns progresses" (in at least:

[0006] If the health care professional has any areas which need improvement, a list of relevant courses is also displayed at the provider system. The health care professional may then select a desired course from the user interface. The machine readable media maintains a record of the health care professional's assessment as well as a list of completed courses. This information may then be provided to a licensing entity for credit.

[0012] g. facilitating student access scheduling by a simple-to-use calendar for real-world adjustments; h. evaluating student progress and performance with the ability to perform practice testing of course knowledge before actual testing; k. facilitating manipulation of test format and presentation by approved individuals within an organization to avoid inaccurate measurements by student answer memorization — specifically, questions can be randomized, remediated, and the administrator can select to show the student the score or not show the score depending on the specific test requirements.

[0044] My Courses in Progress — This is a list of courses (both on-line and off-line education) the user has started but has not completed and passed. Once completed and passed, they will appear in the My Transcript section.

[0061] FIG. 6 shows an event calendar interface screen for scheduling and reviewing events (quoted in full above).

[0064] Below area 245 is an area 249 displaying "Report Contents" (quoted in full above).

[0073] FIG. 3c shows the ease with which a profile of an individual within an organization can be updated by the individual or administrator. FIG. 4 shows the simplicity of accessing an individual's transcript for authorized individuals. FIG. 5 shows the breadth and flexibility of course list access for individuals and administrators. FIG. 5b shows the robustness of the education competency and compliance management method, system, and computer product as exhibited by the user choices for courses, tests, and surveys functionality provided on screen 300. FIG. 6 shows an event calendar interface screen for scheduling and reviewing events. The flexibility and ease of entry, tracking, and updating provide a user-satisfying experience.)

Dion is in the same field of invention, the scheduling of human training. At the time the invention was filed, it would have been obvious for one of ordinary skill in the art to have modified the teachings of Hawthorn, as taught by Dion above, with a reasonable expectation of success of arriving at the claimed invention. One of ordinary skill in the art would have been motivated to make this modification to the teachings of Hawthorn with the motivation of "…providing test performance feedback to students for improvement and advancement… providing test performance feedback to sponsoring organizations for assessment and improvement planning… providing test performance reports that are integral to the education competency and compliance management (avoiding third party software complications) to certifying or accreditation organizations… allowed to add to, delete, modify, and adapt course content and testing to keep an organization improving and growing through improved task performance… comprehensive approach to improve the user experience and simplify education compliance tasks…," as recited in Dion.
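
The claim loop in dispute throughout this action — execute simulated phishing campaigns on a schedule, receive indications of users clicking links, derive the percentage of phish-prone users, and update the campaign's calendar representation in real time — reduces to a small amount of state and arithmetic. The following Python sketch is purely illustrative of that reading of the claim language; the class, field, and method names are hypothetical and are not drawn from the application or the cited references.

```python
# Illustrative sketch of the disputed claim loop: scheduled campaigns,
# click indications, a phish-prone percentage, and a calendar entry whose
# displayed status refreshes as clicks arrive. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class CampaignEntry:
    """A selectable calendar representation of one simulated campaign."""
    name: str
    scheduled_day: str            # e.g. "2026-05-04"
    targeted_users: int
    clicked_users: set[str] = field(default_factory=set)
    status: str = "scheduled"     # scheduled -> running -> completed

    def record_click(self, user_id: str) -> None:
        # An indication that a user clicked a link in a simulated phish.
        self.clicked_users.add(user_id)
        self.status = "running"

    def phish_prone_pct(self) -> float:
        # Percentage of targeted users who clicked at least one link.
        return 100.0 * len(self.clicked_users) / self.targeted_users

    def calendar_label(self) -> str:
        # What the calendar entry would display, updated as clicks arrive.
        return (f"{self.scheduled_day} {self.name}: {self.status}, "
                f"{self.phish_prone_pct():.1f}% phish-prone")

entry = CampaignEntry("Q2 baseline", "2026-05-04", targeted_users=200)
for user in ("u17", "u42", "u99"):
    entry.record_click(user)
print(entry.calendar_label())
# -> 2026-05-04 Q2 baseline: running, 1.5% phish-prone
```
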
Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA, as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 11-23 of U.S. Pat. No. 11,599,838 (Application No. 16/013,486). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims recite the same inventive concept with the same features being used in the same field of endeavor. For example:

Instant Application No. 18/116,404, Claim 1: compare one or more attributes of an entity to one or more attributes of other entities;
Reference Claim 11: a tool executable on the one or more processors and configured to compare the attributes for the entity of an organization to attributes of other entities of other organizations, responsive to receiving the attributes,

Instant Claim 1: determine, based on the comparison, a configuration and schedule of one or more simulated phishing campaigns to be executed to communicate simulated phishing communications to users of the entity to get users to click on one or more links in the simulated phishing communications;
Reference Claim 11: determine, based at least on the comparison, a configuration for each of a baseline simulated phishing campaign to be executed to communicate electronic simulated phishing communications to users of the entity to get users to click on a link in the electronic simulated phishing communications,

Instant Claim 1: generate in an electronic calendar each of one or more graphical representations of each of the one or more simulated phishing campaigns according to the schedule, each of the one or more graphical representations selectable to display metrics of a corresponding simulated phishing campaign of the one or more phishing campaigns and configured to be updated in real-time as each of the one or more simulated phishing campaigns progresses;
Reference Claim 21 (Previously Presented): The system of claim 11, wherein the one or more graphical representations are organized into one or more metrics for the corresponding campaign and one or more metrics for each user.

Instant Claim 1: automatically execute according to the schedule the one or more simulated phishing campaigns to communicate simulated phishing communications to devices of users of the entity, receive indications of users clicking on the one or more links in the simulated phishing communications and identify from the received indications a percentage of users of the entity who are phish-prone; and
Reference Claim 11: automatically determine a schedule of each of the baseline simulated phishing campaigns, the electronic based training and the one or more simulated phishing campaigns subsequent to the baseline simulated phishing campaign; and wherein the one or more simulated phishing campaigns communicate electronic simulated phishing communications to devices of users, receive indications of users clicking on the link of the electronic phishing communications and identify from the received indications the percentage of users of the entity who are phish-prone,

Instant Claim 1: automatically update in real-time the one or more graphical representations of each of the one or more simulated phishing campaigns, while displayed in the electronic calendar according to the schedule as each of the one or more simulated phishing campaigns progresses, a status of execution and metrics of the one or more simulated phishing campaigns as the one or more simulated phishing campaigns progresses with users clicking on the one or more links in the simulated phishing communications
Reference Claim 11: automatically generate in an electronic calendar according to the schedule, one or more graphical representations of each of the baseline simulated phishing campaigns, the electronic based training and the one or more simulated phishing campaigns subsequent to the baseline simulated phishing campaign, wherein the server is further configured to update, in a display of the one or more graphical representations in the electronic calendar, a status of execution of a corresponding campaign as the corresponding campaign progresses with users clicking on the link of the electronic phishing communications and, responsive to selecting the one or more graphical representations, display in a user interface a percentage of users who are phish-prone of the corresponding campaign in comparison with a percentage of phish-prone users of the other entities. See also Reference Claim 21, quoted above.

Claims 1-20 of the instant application are substantially similar to claims 11-23 of U.S. Pat. No. 11,599,838. The independent claims recite substantially the same method. Therefore, it would have been obvious to one of ordinary skill in the art to have modified the claims of the cited U.S. patent, and accordingly the independent claims of the instant application are obvious variants of those recited in that patent. The respective corresponding dependent claims recite substantially similar limitations and are therefore also obvious between the U.S. patent and the instant application.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claim 1 (and similarly claim 11) recites: "compare one or more attributes of an entity to one or more attributes of other entities; determine, based on the comparison, a configuration and schedule of one or more simulated phishing campaigns to be executed to communicate simulated phishing communications to users of the entity to get users to … in the simulated phishing communications; generate in an … calendar each of one or more graphical representations of each of the one or more simulated phishing campaigns according to the schedule, each of the one or more graphical representations … to display metrics of a corresponding simulated phishing campaign of the one or more phishing campaigns and configured to be updated in real-time as each of the one or more simulated phishing campaigns progresses; automatically execute according to the schedule the one or more simulated phishing campaigns to communicate simulated phishing communications to devices of users of the entity, receive indications of users … in the simulated phishing communications and identify from the received indications a percentage of users of the entity who are phish-prone; and automatically update in real-time the one or more graphical representations of each of the one or more simulated phishing campaigns, while displayed in the … calendar according to the schedule as each of the one or more simulated phishing campaigns progresses, a status of execution and metrics of the one or more simulated phishing campaigns as the one or more simulated phishing campaigns progresses with users … in the simulated phishing communications."

Analyzing under Step 2A, Prong 1: The limitations recited above (comparing attributes, determining a configuration and schedule of campaigns, generating calendar representations of the campaigns, executing the campaigns and identifying phish-prone users, and updating the representations), under the broadest reasonable interpretation, can be performed by a human using their mind, or with pen and paper; therefore, the claims are directed to a mental process.
Further, the same limitations, under the broadest reasonable interpretation, describe humans organizing and scheduling campaigns on paper calendars in order to observe and evaluate human behavior; they therefore amount to managing interactions between people, and the claims are directed to certain methods of organizing human activity. Accordingly, the claims are directed to a mental process and to certain methods of organizing human activity, and thus the claims are directed to an abstract idea under the first prong of Step 2A.

Analyzing under Step 2A, Prong 2: This judicial exception is not integrated into a practical application under the second prong of Step 2A. In particular, the claims recite additional elements beyond the recited abstract idea identified under Step 2A, Prong 1, such as:

Claims 1, 11: a system comprising one or more processors coupled to memory; devices of users; "electronic"; "clicking on the one or more links"; an "electronic calendar" with "one or more graphical representations"; "selectable".
Claims 2, 7, 12, 17: a user interface.

Pursuant to the broadest reasonable interpretation, as an ordered combination, each of these additional elements is a computing element recited at a high level of generality implementing the abstract idea, and thus they amount to no more than applying the abstract idea with generic computer components. Further, these additional elements generally link the abstract idea to a technical environment, namely the environment of a computer. Additionally, with respect to "compare…", "receive…", "clicking on the one or more links", "update…", "display…", and "generate…", these elements do not add a meaningful limitation to integrate the abstract idea into a practical application because they are extra-solution activity, i.e., pre- and post-solution activity: data gathering ("compare…", "receive…", "clicking on the one or more links") and data output ("update…", "display…", "generate…").

Analyzing under Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception under Step 2B. As noted above, the aforementioned additional elements beyond the recited abstract idea are not sufficient to amount to significantly more than the recited abstract idea because, as an ordered combination, the additional elements are no more than mere instructions to implement the idea using generic computer components (i.e., "apply it"). Additionally, as an ordered combination, the additional elements append the recited abstract idea to well-understood, routine, and conventional activities in the field, as individually evinced by Applicant's own disclosure, as required by the Berkheimer Memo, in at least:

[0075] The client 102 and server 106 may be deployed as and/or executed on any type and form of computing device, e.g., a computer, network device or appliance capable of communicating on any type and form of network and performing the operations described herein. FIGs. 1C and 1D depict block diagrams of a computing device 100 useful for practicing an embodiment of the client 102 or a server 106. As shown in FIGS. 1C and 1D, each computing device 100 includes a central processing unit (CPU) 121 and a main memory unit 122. As shown in FIG. 1C, a computing device 100 may include a storage device 128, an installation device 116, a network interface 118, an I/O controller 123, display devices 124a-124n, a keyboard 126, and a pointing device 127, e.g., a mouse. The storage device 128 may include, without limitation, an operating system 129, a software 131, and a software of a simulated phishing attack system 120. As shown in FIG. 1D, each computing device 100 may also include additional optional elements, e.g., a memory port 103, a bridge 170, one or more input/output devices 130a-130n (generally referred to using reference numeral 130), I/O ports 142a-142b, and a cache memory 140 in communication with the central processing unit 121.

[0085] Computing device 100 (e.g., client device 102) may also install software or applications from an application distribution platform. Examples of application distribution platforms include the App Store for iOS provided by Apple, Inc., the Mac App Store provided by Apple, Inc., GOOGLE PLAY for Android OS provided by Google Inc., Chrome Webstore for CHROME OS provided by Google Inc., and Amazon Appstore for Android OS and KINDLE FIRE provided by Amazon.com, Inc. An application distribution platform may facilitate installation of software on a client device 102. An application distribution platform may include a repository of applications on a server 106 or a cloud 108, which the clients 102a-102n may access over a network 104. An application distribution platform may include applications developed and provided by various developers. A user of a client device 102 may select, purchase, and/or download an application via the application distribution platform.

[0087] A computing device 100 of the sort depicted in FIGS. 1B and 1C may operate under the control of an operating system, which controls scheduling of tasks and access to system resources.
The computing device 100 can be running any operating system, such as any of the versions of the MICROSOFT WINDOWS operating systems, the different releases of the Unix and Linux operating systems, any version of the MAC OS for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating system for mobile computing devices, or any other operating system capable of running on the computing device and performing the operations described herein. Typical operating systems include, but are not limited to: WINDOWS 2000, WINDOWS Server 2012, WINDOWS CE, WINDOWS Phone, WINDOWS XP, WINDOWS VISTA, WINDOWS 7, WINDOWS RT, and WINDOWS 8, all of which are manufactured by Microsoft Corporation of Redmond, Washington; MAC OS and iOS, manufactured by Apple, Inc. of Cupertino, California; Linux, a freely-available operating system, e.g., the Linux Mint distribution ("distro") or Ubuntu, distributed by Canonical Ltd. of London, United Kingdom; Unix or other Unix-like derivative operating systems; and Android, designed by Google of Mountain View, California, among others. Some operating systems, including, e.g., the CHROME OS by Google, may be used on zero clients or thin clients, including, e.g., CHROMEBOOKS.

[0088] The computing device 100 (i.e., computer system) can be any workstation, telephone, desktop computer, laptop or notebook computer, netbook, ULTRABOOK, tablet, server, handheld computer, mobile telephone, smartphone or other portable telecommunications device, media playing device, gaming system, mobile computing device, or any other type and/or form of computing, telecommunications, or media device that is capable of communication. The computing device 100 has sufficient processor power and memory capacity to perform the operations described herein. In some embodiments, the computing device 100 may have different processors, operating systems, and input devices consistent with the device. The Samsung GALAXY smartphones, e.g., operate under the control of the Android operating system developed by Google, Inc. GALAXY smartphones receive input via a touch interface.

[0152] The server 106 includes a user interface 291 and a display 293. The user interface 291 enables a security awareness program system administrator to interact with the simulated phishing campaign manager 250, the security awareness program manager 280, the security awareness program creator 270, and the query module 271.

[0153] The system 200 also includes client 102. A client 102 may be a target of any simulated phishing attack or actual phishing attack. For example, the client may be an employee, member, or independent contractor working for a company that is performing a security checkup or conducts ongoing simulated phishing attacks to maintain security. The client 102 may be any device used by the client. The client need not own the device for it to be considered a client device 102. The client 102 may be any computing device, such as a desktop computer, a laptop, a mobile device, or any other computing device. In some embodiments, the client 102 may be a server or set of servers accessed by the client. For example, the client may be an employee or a member of a company. The client may access a server that is, e.g., owned or managed or otherwise associated with the company. Such a server may be a client 102.
[0154] In some embodiments, the client 102 may further include a user interface 266 such as a keyboard, a mouse, a touch screen, or any other appropriate user interface. This may be a user interface that is e.g. connected directly to a client 102, such as, for example, a keyboard connected to a mobile device, or may be connected indirectly to a client 102, such as, for example, a user interface of a client device 102 used to access a server client 102. The client 102 may include a display 268, such as a screen, a monitor connected to the device in any manner, or any other appropriate display. [0199] While various embodiments of the methods and systems have been described, these embodiments are exemplary and in no way do they limit the scope of the described methods or systems. Those having skill in the relevant art can effect changes to form and details of the described methods and systems without departing from the broadest scope of the described methods and systems. Thus, the scope of the methods and systems described herein should not be limited by any of the exemplary embodiments and should be defined in accordance with the accompanying claims and their equivalents. Furthermore, as an ordered combination, these elements amount to generic computer components receiving or transmitting data over a network, performing repetitive calculations, electronic record keeping, and storing and retrieving information in memory, which, as held by the courts, are well-understood, routine, and conventional. See MPEP 2106.05(d). Moreover, the remaining elements of dependent claims do not transform the recited abstract idea into a patent eligible invention because these remaining elements merely recite further abstract limitations that provide nothing more than simply a narrowing of the abstract idea recited in the independent claims. Looking at these limitations as an ordered combination adds nothing additional that is sufficient to amount to significantly more than the recited abstract idea because they simply provide instructions to use a generic arrangement of generic computer components to “apply” the recited abstract idea, perform insignificant extra-solution activity, and generally link the abstract idea to a technical environment. Thus, the elements of the claims, considered both individually and as an ordered combination, are not sufficient to ensure that the claim as a whole amounts to significantly more than the abstract idea itself. Since there are no limitations in these claims that transform the exception into a patent eligible application such that these claims amount to significantly more than the exception itself, claims 1-20 are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter. Claim Rejections – 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: (1) determining the scope and contents of the prior art; (2) ascertaining the differences between the prior art and the claims at issue; (3) resolving the level of ordinary skill in the pertinent art; and (4) considering objective evidence present in the application indicating obviousness or nonobviousness. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication No. US 2017/0244746 A1 to Hawthorn et al. (hereinafter “Hawthorn”) in view of U.S. Patent Publication No. US 2008/0318197 A1 to Dion et al. (hereinafter “Dion”). As per Claim 1, Hawthorn teaches: (Currently Amended) A system comprising: one or more processors, coupled to memory and configured to: ([0044]) compare one or more attributes of an entity to one or more attributes of other entities; (in at least [0036] performing an initial risk assessment by transmitting a security item and/or a training item from a security system to a user system to obtain response data associated with the transmitted security item and/or training item. Response data may be used to calculate an initial risk score associated with a specific user. Subsequent security item and/or training item may be transmitted to a user system, where the subsequent security item and/or training item is determined based on the risk score associated with a user. Interactions via a user system with subsequent security items and/or training items may result in subsequent response data that may be transmitted to security system where a user's risk score may be updated and/or recalculated based on the subsequent response data. 
[0227] A weighted score may be applied to each interaction between a user and a security item 112 and/or training item 124; whether that user is a repeat offender; whether that user interacts with security items 112 and/or training items 124 from different devices (laptop/tablet/phone) or multiple source IP addresses (work/home); whether that user interacted with security items 112 and/or training items 124 from vulnerable devices (out of date browser/plugins); whether that user completes training or reports applicable security items 112 and/or training items 124; and/or the like. Each interaction may be scored, and the aggregated scores may be normalized. The normalized scores may compared using a standard deviation calculation to arrive at a “ThreatScore”. This ThreatScore may be compared against industry vertical or overall, and may be used to see trending data for users/groups/company (improving/declining) over time. [0228] A client's risk score 1702 may be calculated in a real-time manner and/or according to various scripts that execute on a minute, hourly, daily, monthly, and/or a yearly basis. FIG. 17 illustrates a user may be displayed a list/graph of the risk scores 1704 for each group/department of a client. A group's risk score 1702 may be calculated based on the risk scores of the individuals within that group. For example, all of the risk scores of the individuals within a group may be added and/or the averaged to obtain the group's overall risk score 1702. A user of security system 102 may be able to select one or more of these groups to see a performance and/or technical information with respect to a given campaign, multiple campaigns, and/or all campaigns based on the group's employees. [0232] Statistics also may calculated on the data collected during one or more campaigns. For example, collected data may be analyzed and compared to data available from one or more data brokers. Accordingly, risk assessment manager 110 may predict if someone is more susceptible to security-threat-based messages based on their demographic data. For example, if an analysis of the data shows that people who shop at given store and drive a red van are more likely to interact with a security item 112 and/or training item 124 in a compromising manner, risk assessment manager 110 may score a user as more risky before they are sent a security-threat-based message.) determine, …, a configuration and schedule of one or more simulated phishing campaigns to be executed to communicate simulated phishing communications to users of the entity to get users to click on one or more links in the simulated phishing communications; (in at least [0117] FIG. 7, a campaign manager 206 may present campaign delivery options 722 via the interactive environment 202. Campaign delivery options 722 may be displayed within the campaign area 324, within a new window, and/or the like. The campaign delivery options 722 may allow a user of security system 102 to configure delivery parameters associated with a campaign. A first delivery option 714 may schedule a campaign for immediate delivery. For example, as soon as the user of security system 102 finalizes and saves a campaign the security item generator 206 may automatically generate a security item 112 and/or training item 124 to be transmitted to the designated recipients at user system 104, 106 based on a template included 114 in the campaign. 
A second delivery option 716 may allow a user of security system 102 to enter a starting date and/or time and/or an ending date and/or time. [0136] The risk assessment manager 110 may use these inputs to calculate a user risk score. This user risk score may provide an organization with a quantified indication as to the level of risk a given user exposes the organization to with respect to the security of its computing networks. The user risk score may be used to influence, guide, and /or determine the frequency and sophistication level of future campaigns, security items and/or training items 124. [0208] a user risk calculator 212 determines that the user opened a security-threat-based message in the campaign; clicked on a security-based threat in the campaign; and entered personal and/or confidential information into a simulated security-base threat, the risk score of the recipient user may altered by seventeen (17) according to the metrics in FIG. 14. If the user risk calculator 212 determines that the user completed three (3) training sessions during a campaign based on the user training item data 134, a risk score of the user is altered by 5%. [0222] 1512 may display the title 1514 of the campaign, the number of times 1516 each security item 112 and/or training item 124 in the campaign was sent; the number of times 1518 each of each security item 112 and/or training item 124 included a predefined action (e.g., open a message, click on a link, watch a video, attempt a password generation, and/or the like), a number of detected vulnerabilities 1520 (incorrect answers, incorrect interactions, and/or the like); the number of times 1522 each message resulted in a security compromise (e.g., recipient entered personal and/or confidential information, downloaded an insecure item, clicked on an insecure link, etc.); the number of multiple security compromises 1524 in each security item 112 and/or training item 124 for the same user (e.g., a user clicks on multiple insecure links, a user downloads multiple insecure items, a user answers multiple questions incorrectly, a combination of different security compromising actions, and/or the like)) generate in an … each of one or more graphical representations of each of the one or more simulated phishing campaigns according to the schedule, each of the one or more graphical representations selectable to display metrics of a corresponding simulated phishing campaign of the one or more phishing campaigns and configured to be updated in real-time as each of the one or more simulated phishing campaigns progresses; (in at least [0068] The “Dashboard” widget 310 may link to and display data including summary information such as (but not limited to) a list of campaigns that have completed, a list of campaigns that are in progress, a list of campaigns that are scheduled for a future start date; performance data for one or more entities with respect to a given campaign and/or across multiple campaigns; performance data for one or more entities with respect to other entities for similar campaigns; and/or the like. Campaign similarity may be determined based on security item and/or training item sophistication scores, number of security items 112 and/or training items 124 presented to users of user device 104, 106, and/or the like. [0117] FIG. 7, a campaign manager 206 may present campaign delivery options 722 via the interactive environment 202. Campaign delivery options 722 may be displayed within the campaign area 324, within a new window, and/or the like. 
The campaign delivery options 722 may allow a user of security system 102 to configure delivery parameters associated with a campaign. A first delivery option 714 may schedule a campaign for immediate delivery. For example, as soon as the user of security system 102 finalizes and saves a campaign the security item generator 206 may automatically generate a security item 112 and/or training item 124 to be transmitted to the designated recipients at user system 104, 106 based on a template included 114 in the campaign. A second delivery option 716 may allow a user of security system 102 to enter a starting date and/or time and/or an ending date and/or time. When the specified start date and/or time occurs, the security item generator 206 may automatically generate and transmit a security item 112 and/or training item 124 to be transmitted to designated recipients at user system 104, 106 based on at least a template included 114 in the campaign. A third delivery option 718 may allow a user of security system 102 to select a staggered delivery of the campaign. When a user of security system 102 specifies an end date and/or time 720 for delivery, campaign generation and delivery may occur until that date and/or time. [0206] the user risk calculator 212 may analyze the security item interaction data 132, the training item interaction data 134, user property data 136, and/or the technical information 138 of a given user with respect to a set of risk scoring metrics 126. FIGS. 13 and 14 illustrate various examples risk scoring metrics. For example, FIG. 13 shows examples of risk scoring metrics based on technical information and FIG. 14 shows examples of risk scoring metrics based on user security item interaction data 132 and user training item interaction data 134. A user risk score may be calculated at various granularities such as for each security item 112 and/or training item 124, a campaign in progress, for the most recent campaign completed, for all completed campaigns, and/or the like. [0203] The user risk calculator 212 may compare the security item interaction data 132, the training item interaction data 134, user property data 136, and/or user technical information 138 collected for a given user with the set of risk scoring metrics 126 to calculate a risk score for the user. Collected data may be from a campaign currently in progress, the most recent campaign completed, for all completed campaigns, and/or the like. For example, if the user risk calculator 212 may determine that a user has a vulnerable plug-in installed on user system 104, 106 during a campaign. As another example, a user risk calculator 212 may determine, based on feedback from a campaign, that a user has browser software running with outdated versions. A comparison may be done to determine if the versions of the software contain vulnerabilities and if so, this may increase a risk score. As another example, a user risk calculator 212 may determine, based on feedback from a campaign, that a user consistently uses a mobile device and public WiFi networks. Accordingly, the risk score calculator 212 may determine that this feedback increases a risk score. As another example, risk calculator 212 may determine based on feedback from a campaign, that a user associated with user system 104, 106 has a large social media imprint (e.g., accesses social media platforms with a particular frequency). Accordingly, the risk calculator 212 may determine that this feedback increases a risk score. 
As another example, risk calculator 212 may determine based on feedback from a campaign that a user associated with user system 104, 106 is operating on a known malicious IP address and/or network and/or a country and/or region of origin. Accordingly, the risk calculator 212 may determine that this feedback increases a risk score. Accordingly, in these examples, a risk score of the recipient user may altered, such as those according to the metrics in FIG. 13.) automatically execute according to the schedule the one or more simulated phishing campaigns to communicate simulated phishing communications to devices of users of the entity, receive indications of users clicking on the one or more links in the simulated phishing communications and identify from the received indications a percentage of users of the entity who are phish-prone; and (in at least [0137] A security item 112 and/or training item 124 campaign may be manually started by a user of security system 102 or automatically started based on scheduling parameters. If a campaign is started automatically, the campaign manager 204 may identify the scheduling parameters associated with a campaign from the campaign profile 122 of the campaign. The campaign manager 204 may monitor for a temporal condition to occur that satisfies the scheduling parameters. For example, if a scheduling parameter states that the campaign is to start on Date_A at Time_A, when the campaign manager 204 detects Date_A at Time_A occurs the campaign manager 204 may automatically start the campaign. [0208] a user risk calculator 212 determines that the user opened a security-threat-based message in the campaign; clicked on a security-based threat in the campaign; and entered personal and/or confidential information into a simulated security-base threat, the risk score of the recipient user may altered by seventeen (17) according to the metrics in FIG. 14. If the user risk calculator 212 determines that the user completed three (3) training sessions during a campaign based on the user training item data 134, a risk score of the user is altered by 5%. [0212] The calculated risk scores may be used to perform various actions. For example, the risk assessment manager 110 may use a calculated risk scores to influence, guide, and/or determine a frequency and/or sophistication level of future security items 112 and/or training items 124. For example, risk assessment manager 110 may increase the frequency of presenting security item 112 and/or training item 124 to a user who has a higher risk score than for a user with a lower risk score. 
[0222] 1512 may display the title 1514 of the campaign, the number of times 1516 each security item 112 and/or training item 124 in the campaign was sent; the number of times 1518 each of each security item 112 and/or training item 124 included a predefined action (e.g., open a message, click on a link, watch a video, attempt a password generation, and/or the like), a number of detected vulnerabilities 1520 (incorrect answers, incorrect interactions, and/or the like); the number of times 1522 each message resulted in a security compromise (e.g., recipient entered personal and/or confidential information, downloaded an insecure item, clicked on an insecure link, etc.); the number of multiple security compromises 1524 in each security item 112 and/or training item 124 for the same user (e.g., a user clicks on multiple insecure links, a user downloads multiple insecure items, a user answers multiple questions incorrectly, a combination of different security compromising actions, and/or the like) [0226] FIG. 17 shows another example of information that may be displayed to the user of security system 102 as part of the campaign summary and/or report. For example, FIG. 17 illustrates an overall risk score 1702 has been calculated for the client when compared to other clients subscribing to the risk assessment manager 110. A client's overall risk score may be based on the risk score associated with its employees. A client's overall risk score may be calculated based on the metrics discussed above with respect to FIG. 15 (e.g., open/interactions/vulnerable/trained/reported/compromised). [0231] FIG. 19 illustrates another report that may be presented to the user in the interactive environment 202. In the example shown in FIG. 19, a list of groups 1902 within the client may be displayed. This list may identify each group and the number of employees in each group. When a user selects a group from this list 1902, the data presenter 218 may display the name 1904 of the group; the number of users 1906 in the group; and the risk score 1908 of the group. The data presenter 218 may also display a list 1910 of each employee within the group. The employee's communication address 1912, first name 1914, last name 1916, the date 1918 the employee was added to the campaign, and risk score 1920 may also displayed to the user. The user may select one of the employees to view the statistics of the user for one or more campaigns or for all of the campaigns as a whole.) automatically update in real-time the one or more graphical representations of each of the one or more simulated phishing campaigns, while displayed in the … according to the schedule as each of the one or more simulated phishing campaigns progresses, a status of execution and metrics of the one or more simulated phishing campaigns as the one or more simulated phishing campaigns progresses with users clicking on the one or more links in the simulated phishing communications. (in at least [0068] The “Dashboard” widget 310 may link to and display data including summary information such as (but not limited to) a list of campaigns that have completed, a list of campaigns that are in progress, a list of campaigns that are scheduled for a future start date; performance data for one or more entities with respect to a given campaign and/or across multiple campaigns; performance data for one or more entities with respect to other entities for similar campaigns; and/or the like. 
Campaign similarity may be determined based on security item and/or training item sophistication scores, number of security items 112 and/or training items 124 presented to users of user device 104, 106 [0203] Item adjuster 218 may dynamically determine the type of security item 112 and/or training item 124 to be sent to the recipients based on a performance history with respect to previous security items 112 and/or training items 124 in the campaign and/or previous campaigns; a risk score; a role within the company; a company group; and/or the like. For example, if a user has successfully interacted with previous security items 112 and/or training items 124 at a given sophistication level, item adjuster 218 may dynamically update the campaign such that this user starts to receive security items 112 and/or training items 124 of a higher sophistication level (e.g., more difficult security-based questions, more legitimate looking simulated phishing messages, etc.). [0206] FIG. 13 shows examples of risk scoring metrics based on technical information and FIG. 14 shows examples of risk scoring metrics based on user security item interaction data 132 and user training item interaction data 134. A user risk score may be calculated at various granularities such as for each security item 112 and/or training item 124, a campaign in progress, for the most recent campaign completed, for all completed campaigns, and/or the like. [0207] The user risk calculator 212 may compare the security item interaction data 132, the training item interaction data 134, user property data 136, and/or user technical information 138 collected for a given user with the set of risk scoring metrics 126 to calculate a risk score for the user. Collected data may be from a campaign currently in progress, the most recent campaign completed, for all completed campaigns, and/or the like. [0208] if a user risk calculator 212 determines that the user opened a security-threat-based message in the campaign; clicked on a security-based threat in the campaign; and entered personal and/or confidential information into a simulated security-base threat, the risk score of the recipient user may altered by seventeen (17) according to the metrics in FIG. 14. If the user risk calculator 212 determines that the user completed three (3) training sessions during a campaign based on the user training item data 134, a risk score of the user is altered by 5%. [0221] The report area 1504 may present the user with a list of campaigns 1504 associated with one or more clients for which the user is authorized to view. The user also may be presented with one or more options for selecting which campaigns are displayed. For example, a filtering option 1506 may allow a user of security system 102 to enter dates/times, which results in only the campaigns matching these criteria to be displayed (or filtered out). Another filtering option 1508 may allow a user to select all of the campaigns that are currently pending, running, or completed to be displayed. 
[0222] the number of multiple security compromises 1524 in each security item 112 and/or training item 124 for the same user (e.g., a user clicks on multiple insecure links, a user downloads multiple insecure items, a user answers multiple questions incorrectly, a combination of different security compromising actions, and/or the like); the number of users 1526 considered to have been “trained” during the campaign; the number of times 1528 users reported an applicable security item 112 and/or training item 124 to an administrator, manager, etc.; the starting date 1530 of the campaign; the stopping date of the campaign; the status 1532 of the campaign (e.g., pending, running, completed, etc.); the user 1534 who created the campaign; and/or the like. Each campaign may have different reporting items than the reporting items listed above. [0229] FIG. 17 illustrates a graph 1706 that may be displayed to show a client's risk score over time. In this example, the user may be able to select a temporal-based filter 1708 to see how a client's risk score changed on a minute, hourly, daily, weekly basis, and/or monthly basis. FIG. 17 also illustrates a time distribution 1710 of user interactions with security items 112 and/or training items 124 during the selected campaign. In this example, a time distribution 1710 may display a year's worth of data, each discrete division representing days and further months. As an example, various graphical features may be used to illustrate campaign reporting. For example, the darker the shading may indicate more interactions with security items 112 and/or training items 124 on a particular day. This may be expanded to view a Month/Week/Day view and allow a viewer to identify when users are more likely to interact with a security item 112 and/or training item 124 such as early morning, late at night, at home vs. at office, etc.) Although implied, Hawthorn does not expressly disclose the following limitations, which, however, are taught by Dion: …based on the comparison… (in at least [0006] A health care professional can take a competency test on a particular topic and input his/her responses at the remote provider system. The health care professional's responses are evaluated, and an assessment of his/her skills displayed at the provider system. The assessment particularly points out those areas, if any, where the health care professional's knowledge is deficient. If the health care professional has any areas which need improvement, a list of relevant courses is also displayed at the provider system. The health care professional may then select a desired course from the user interface. [0060] administrator can by selecting allow student to challenge the present course by testing without completing the course, test out score with a box for the administrator to enter the minimum score acceptable if the student is to test out, course view before test?) …electronic calendar… (in at least [0061] FIG. 6 shows an event calendar interface screen for scheduling and reviewing events. A computer program product delivered to users and administrators is represented by a calendar/my events interface screen 199. The robustness of the education competency and compliance management method, system, and computer product is exhibited by the user choices for scheduling and tracking calendar events on a my events interface screen 199. 
The displayed required events are generated from the administrator grouping of an individual, individual specific items by administration, and items desired by the individual for advancement and enrichment. The navigation bar 153 shows the most recent navigation selection to be My Events. Though not shown in this figure the navigation area 151 is included with screen 199. Immediately below bar 153 is a screen title area 201 displaying “Registered Events for: user name here”. The user name has been omitted for discussion purposes. The system will display the events selected for the user name that has logged into the system. Below area 201 is an area 203 with instructions “These are the events for which you are currently registered. To view or register for additional events, go to the Event Calendar”…To the right of area 207 is an update button 209 that allows a user to update the current calendar. Just above button 209 is the spot 157 allows administrators to make changes to the present page instructions. The display metaphor of my events is a calendar 217 the currently selected month is displayed in an area 213, the current example displays “May 2007”. The user has a simple task to review past events by selecting a button 211 titled “<<Prior”. If the user wishes to view future commitments a button 215 titled “Next >>” [0064] Below area 245 is an area 249 displaying “Report Contents” on the left of the screen 247. To the right of area 249 is a button 259 displaying “Submit”. At the bottom of screen 247 is an area 2491 with a duplicate button 259 displaying “Submit”. The report contents are selected from area 251 displaying “Default Report Fields:”. Specifically to the right of area 251 is an area 253 with a list of options to be included as defaults in the intended report. These options include: User Name, Course, Test Pass Date, Expire Date, Taken?, Expired?, Passed?, and Score.) generate in an electronic calendar each of one or more graphical representations of each of the one or more simulated … campaigns according to the schedule, each of the one or more graphical representations selectable to display metrics of a corresponding simulated … campaign of the one or more … campaigns and configured to be updated in real-time as each of the one or more simulated … campaigns progresses (in at least [0006] If the health care professional has any areas which need improvement, a list of relevant courses is also displayed at the provider system. The health care professional may then select a desired course from the user interface. The machine readable media maintains a record of the health care professional's assessment as well as a list of completed courses. This information may then be provided to a licensing entity for credit. [0012] g. facilitating student access scheduling by a simple to use calendar for real world adjustments, h. evaluating student progress and performance with the ability to perform practice testing of course knowledge before actual testing, k. facilitating manipulating of test format and presentation by approved individuals within an organization to avoid inaccurate measurements by student answer memorization, specifically, questions can be randomized, remediated, and the administrator can select to show the student the score or not show the score depending on the specific test requirements, [0044] My Courses in Progress—This is a list of courses (both on-line and off-line education) the user has started but has not completed and passed. 
Once completed and passed, they will appear in the My Transcript section. [0061] FIG. 6 shows an event calendar interface screen for scheduling and reviewing events. A computer program product delivered to users and administrators is represented by a calendar/my events interface screen 199. The robustness of the education competency and compliance management method, system, and computer product is exhibited by the user choices for scheduling and tracking calendar events on a my events interface screen 199. The displayed required events are generated from the administrator grouping of an individual, individual specific items by administration, and items desired by the individual for advancement and enrichment. The navigation bar 153 shows the most recent navigation selection to be My Events. Though not shown in this figure the navigation area 151 is included with screen 199. Immediately below bar 153 is a screen title area 201 displaying “Registered Events for: user name here”. The user name has been omitted for discussion purposes. The system will display the events selected for the user name that has logged into the system. Below area 201 is an area 203 with instructions “These are the events for which you are currently registered. To view or register for additional events, go to the Event Calendar”…To the right of area 207 is an update button 209 that allows a user to update the current calendar. Just above button 209 is the spot 157 allows administrators to make changes to the present page instructions. The display metaphor of my events is a calendar 217 the currently selected month is displayed in an area 213, the current example displays “May 2007”. The user has a simple task to review past events by selecting a button 211 titled “<<Prior”. If the user wishes to view future commitments a button 215 titled “Next >>” [0064] Below area 245 is an area 249 displaying “Report Contents” on the left of the screen 247. To the right of area 249 is a button 259 displaying “Submit”. At the bottom of screen 247 is an area 2491 with a duplicate button 259 displaying “Submit”. The report contents are selected from area 251 displaying “Default Report Fields:”. Specifically to the right of area 251 is an area 253 with a list of options to be included as defaults in the intended report. These options include: User Name, Course, Test Pass Date, Expire Date, Taken?, Expired?, Passed?, and Score. [0073] FIG. 3 c shows the ease that a profile of an individual within an organization can be updated by the individual or administrator. FIG. 4 shows the simplicity of accessing an individual's transcript for authorized individuals. FIG. 5 shows the breath and flexibility of course list access for individuals and administrators. FIG. 5 b shows the robustness of the education competency and compliance management method, system, and computer product is exhibited by the user choices for courses, tests, and surveys functionality provided on the screen 300. FIG. 6 shows an event calendar interface screen for scheduling and reviewing events. The flexibility and ease of entry, tracking, and updating provide a user satisfying experience.) Hawthorn and Dion are in the same field of invention, namely scheduling training for users. At the time the invention was filed, it would have been obvious to one of ordinary skill in the art to have modified the teachings of Hawthorn with the teachings of Dion, as set forth above, with a reasonable expectation of success in arriving at the claimed invention. 
One of ordinary skill in the art would have been motivated to make this modification to the teachings of Hawthorn with the motivation of, …providing test performance feedback to students for improvement and advancement….providing test performance feedback to sponsoring organizations for assessment and improvement planning…providing test performance reports that are integral to the education competency and compliance management (avoiding third party software complications) to certifying or accreditation organizations…allowed to add to, delete, modify, and adapt course content and testing to keep an organization improving and growing through improved task performance… comprehensive approach to improve the user experience and simplify education compliance tasks…., as recited in Dion. As per Claim 2, Hawthorn teaches: The system of claim 1, wherein the one or more processors are further configured to display, responsive to selection of the one or more graphical representations, in a user interface a percentage of users of the entity who are phish-prone of the one or more simulated phishing campaign in comparison with a percentage of users of the other entities who are phish-prone. (in at least [0228] A client's risk score 1702 may be calculated in a real-time manner and/or according to various scripts that execute on a minute, hourly, daily, monthly, and/or a yearly basis. FIG. 17 illustrates a user may be displayed a list/graph of the risk scores 1704 for each group/department of a client. A group's risk score 1702 may be calculated based on the risk scores of the individuals within that group. For example, all of the risk scores of the individuals within a group may be added and/or the averaged to obtain the group's overall risk score 1702. A user of security system 102 may be able to select one or more of these groups to see a performance and/or technical information with respect to a given campaign, multiple campaigns, and/or all campaigns based on the group's employees. [0230] FIG. 18 illustrates a list/graph 1802 of risk scores for each employees, which may identify a company's riskiest and least risky employees. For example, a user may be able to select one or more of employees to see employee performance, property, and/or technical data with respect to a given campaign, multiple campaigns, and/or all campaigns participated in by the employee. FIG. 18 illustrates a graph 1804 that may be displayed to a user of security system 102 showing the client's risk score compare to other clients within a specific industry selected by the user. Graph 1804 may present the statistics displayed in the table 1514 discussed above for the client and for other clients in the selected industry. A user of security system 102 may be able to select the industry via one or more displayed options 1806 for which these metrics are displayed. [0231] FIG. 19 illustrates another report that may be presented to the user in the interactive environment 202. In the example shown in FIG. 19, a list of groups 1902 within the client may be displayed. This list may identify each group and the number of employees in each group. When a user selects a group from this list 1902, the data presenter 218 may display the name 1904 of the group; the number of users 1906 in the group; and the risk score 1908 of the group. The data presenter 218 may also display a list 1910 of each employee within the group. 
The employee's communication address 1912, first name 1914, last name 1916, the date 1918 the employee was added to the campaign, and risk score 1920 may also displayed to the user. The user may select one of the employees to view the statistics of the user for one or more campaigns or for all of the campaigns as a whole.) As per Claim 3, Hawthorn teaches: The system of claim 1, wherein the percentage of users of the entity who are phish-prone comprises a first number of users of the entity that clicked on the one or more links of the simulated phishing communications in relation to a second number of users of the entity that received the simulated phishing communications. (in at least [0208] if a user risk calculator 212 determines that the user opened a security-threat-based message in the campaign; clicked on a security-based threat in the campaign; and entered personal and/or confidential information into a simulated security-base threat, the risk score of the recipient user may altered by seventeen (17) according to the metrics in FIG. 14. If the user risk calculator 212 determines that the user completed three (3) training sessions during a campaign based on the user training item data 134, a risk score of the user is altered by 5%. [0229] FIG. 17 illustrates a graph 1706 that may be displayed to show a client's risk score over time. In this example, the user may be able to select a temporal-based filter 1708 to see how a client's risk score changed on a minute, hourly, daily, weekly basis, and/or monthly basis. FIG. 17 also illustrates a time distribution 1710 of user interactions with security items 112 and/or training items 124 during the selected campaign. In this example, a time distribution 1710 may display a year's worth of data, each discrete division representing days and further months. As an example, various graphical features may be used to illustrate campaign reporting. For example, the darker the shading may indicate more interactions with security items 112 and/or training items 124 on a particular day. This may be expanded to view a Month/Week/Day view and allow a viewer to identify when users are more likely to interact with a security item 112 and/or training item 124 such as early morning, late at night, at home vs. at office, etc. [0230] FIG. 18 illustrates a list/graph 1802 of risk scores for each employees, which may identify a company's riskiest and least risky employees. For example, a user may be able to select one or more of employees to see employee performance, property, and/or technical data with respect to a given campaign, multiple campaigns, and/or all campaigns participated in by the employee. FIG. 18 illustrates a graph 1804 that may be displayed to a user of security system 102 showing the client's risk score compare to other clients within a specific industry selected by the user. Graph 1804 may present the statistics displayed in the table 1514 discussed above for the client and for other clients in the selected industry. A user of security system 102 may be able to select the industry via one or more displayed options 1806 for which these metrics are displayed. [0231] FIG. 19 illustrates another report that may be presented to the user in the interactive environment 202. In the example shown in FIG. 19, a list of groups 1902 within the client may be displayed. This list may identify each group and the number of employees in each group. 
When a user selects a group from this list 1902, the data presenter 218 may display the name 1904 of the group; the number of users 1906 in the group; and the risk score 1908 of the group. The data presenter 218 may also display a list 1910 of each employee within the group. The employee's communication address 1912, first name 1914, last name 1916, the date 1918 the employee was added to the campaign, and risk score 1920 may also displayed to the user. The user may select one of the employees to view the statistics of the user for one or more campaigns or for all of the campaigns as a whole.) As per Claim 4, Hawthorn teaches: The system of claim 1, wherein the one or more processors are further configured to execute a second one or more simulated phishing campaigns subsequent to the one or more simulated phishing campaigns based at least on the results of the one or more simulated phishing campaigns. (in at least [0036] performing an initial risk assessment by transmitting a security item and/or a training item from a security system to a user system to obtain response data associated with the transmitted security item and/or training item. Response data may be used to calculate an initial risk score associated with a specific user. Subsequent security item and/or training item may be transmitted to a user system, where the subsequent security item and/or training item is determined based on the risk score associated with a user. Interactions via a user system with subsequent security items and/or training items may result in subsequent response data that may be transmitted to security system where a user's risk score may be updated and/or recalculated based on the subsequent response data. [0124] A campaign may be configured to send different security items 112 and/or training items 124 to different recipients based on a performance history, role, associated group, risk score, and/or the like. For example, a rule may be defined by a user of security system 102 such that if a recipient at user system 104, 106 performs in a previous campaign such that the user is proficient/trained in a particular security item 112 and/or training item 124, security items 112 and/or training items 124 for a subsequent campaign may selected based on the sophistication level of the previous campaign and/or a current risk score of a user of user system 104, 106. [0136] The risk assessment manager 110 may use these inputs to calculate a user risk score. This user risk score may provide an organization with a quantified indication as to the level of risk a given user exposes the organization to with respect to the security of its computing networks. The user risk score may be used to influence, guide, and /or determine the frequency and sophistication level of future campaigns, security items and/or training items 124. [0212] The calculated risk scores may be used to perform various actions. For example, the risk assessment manager 110 may use a calculated risk scores to influence, guide, and/or determine a frequency and/or sophistication level of future security items 112 and/or training items 124. For example, risk assessment manager 110 may increase the frequency of presenting security item 112 and/or training item 124 to a user who has a higher risk score than for a user with a lower risk score.) As per Claim 5, Hawthorn teaches: The system of claim 1, wherein the one or more processors are further configured to determine, based on the comparison, electronic based training for users of the entity. 
(in at least [0036] performing an initial risk assessment by transmitting a security item and/or a training item from a security system to a user system to obtain response data associated with the transmitted security item and/or training item. Response data may be used to calculate an initial risk score associated with a specific user. Subsequent security item and/or training item may be transmitted to a user system, where the subsequent security item and/or training item is determined based on the risk score associated with a user. Interactions via a user system with subsequent security items and/or training items may result in subsequent response data that may be transmitted to security system where a user's risk score may be updated and/or recalculated based on the subsequent response data. [0212] The calculated risk scores may be used to perform various actions. For example, the risk assessment manager 110 may use a calculated risk scores to influence, guide, and/or determine a frequency and/or sophistication level of future security items 112 and/or training items 124. For example, risk assessment manager 110 may increase the frequency of presenting security item 112 and/or training item 124 to a user who has a higher risk score than for a user with a lower risk score. In another example, the security items 112 and/or training items 124 with a higher sophistication level may be presented to a user with a lower risk score than to a user with a higher risk score. In another example, a more in-depth and detailed training item 124 or additional training items 124 may be presented to a user with a higher risk score than to a user with a lower risk score. As the user completes additional training sessions a risk score may be reduced. [0223] A user of security system 102 may be able to select one or more of the campaigns displayed in the table 1512 to view 1536 their details, delete 1538 the selected campaigns, clone 1540 the selected campaigns, compare 1542 multiple selected campaigns, and/or the like. In one example, campaigns may be compared based on any metrics discussed above. In addition, the risk scores of all users within an organization may be combined to calculate an overall risk score of the company. Trending data may then be displayed across multiple campaigns, against industry vertical, and/or across all clients of the risk assessment manager 110. [0124] A campaign may be configured to send different security items 112 and/or training items 124 to different recipients based on a performance history, role, associated group, risk score, and/or the like. For example, a rule may be defined by a user of security system 102 such that if a recipient at user system 104, 106 performs in a previous campaign such that the user is proficient/trained in a particular security item 112 and/or training item 124, security items 112 and/or training items 124 for a subsequent campaign may selected based on the sophistication level of the previous campaign and/or a current risk score of a user of user system 104, 106. 
[0227] A weighted score may be applied to each interaction between a user and a security item 112 and/or training item 124; whether that user is a repeat offender; whether that user interacts with security items 112 and/or training items 124 from different devices (laptop/tablet/phone) or multiple source IP addresses (work/home); whether that user interacted with security items 112 and/or training items 124 from vulnerable devices (out of date browser/plugins); whether that user completes training or reports applicable security items 112 and/or training items 124; and/or the like. Each interaction may be scored, and the aggregated scores may be normalized. The normalized scores may compared using a standard deviation calculation to arrive at a “ThreatScore”. This ThreatScore may be compared against industry vertical or overall, and may be used to see trending data for users/groups/company (improving/declining) over time. [0228] A client's risk score 1702 may be calculated in a real-time manner and/or according to various scripts that execute on a minute, hourly, daily, monthly, and/or a yearly basis. FIG. 17 illustrates a user may be displayed a list/graph of the risk scores 1704 for each group/department of a client. A group's risk score 1702 may be calculated based on the risk scores of the individuals within that group. For example, all of the risk scores of the individuals within a group may be added and/or the averaged to obtain the group's overall risk score 1702. A user of security system 102 may be able to select one or more of these groups to see a performance and/or technical information with respect to a given campaign, multiple campaigns, and/or all campaigns based on the group's employees.) As per Claim 6, Hawthorn teaches: The system of claim 5, wherein the one or more processors are further configured to execute the electronic based training to those users identified as phish-prone. (in at least [0124] A campaign may be configured to send different security items 112 and/or training items 124 to different recipients based on a performance history, role, associated group, risk score, and/or the like. For example, a rule may be defined by a user of security system 102 such that if a recipient at user system 104, 106 performs in a previous campaign such that the user is proficient/trained in a particular security item 112 and/or training item 124, security items 112 and/or training items 124 for a subsequent campaign may selected based on the sophistication level of the previous campaign and/or a current risk score of a user of user system 104, 106. [0212] The calculated risk scores may be used to perform various actions. For example, the risk assessment manager 110 may use a calculated risk scores to influence, guide, and/or determine a frequency and/or sophistication level of future security items 112 and/or training items 124. For example, risk assessment manager 110 may increase the frequency of presenting security item 112 and/or training item 124 to a user who has a higher risk score than for a user with a lower risk score. In another example, the security items 112 and/or training items 124 with a higher sophistication level may be presented to a user with a lower risk score than to a user with a higher risk score. In another example, a more in-depth and detailed training item 124 or additional training items 124 may be presented to a user with a higher risk score than to a user with a lower risk score. 
As the user completes additional training sessions a risk score may be reduced. [0214] a context-aware cybersecurity training system may determine that a user is at risk. The system may include a dataset of associations between threat scenarios and associated actions that may indicate that a person who performs the actions may be at risk for the threat scenario. These associations may provide a basis for a training needs model (See FIG. 24). For example, threat scenarios and associated actions may include: (i) downloading malware from a malicious USB memory device; (ii) downloading a malicious app one one's smartphone; (iii) connecting to a rogue Wi-Fi access point; (iv) falling for a malicious SMS message by providing sensitive information in response to such a message or by performing, in response to the SMS message, some other action that puts one or one's organization at risk (e.g., changing a password as instructed in a malicious SMS message); and/or (iv) falling prey to a bluesnarfing attack resulting in the theft of sensitive information. [0217] a partial training needs model based on simple threshold levels is illustrated in FIG. 24. The model may associate various threat scenarios 2020 with various user actions 2030 that may be detected. When the system determines that a user action 2030 has been taken at least a threshold number of times 3010 in response to the threat scenario, the model will identify one or more training needs 3020 that should be provided to the user, optionally with priorities for the training needs. For instance, a user who replies to a mock malicious SMS message from his smartphone is identified as being at a high risk of falling for such an attack. The training needs model associated with this particular threat scenario based on this particular combination of contextual attributes (in this case simply the fact that the user replied to an SMS message from an unknown sender) indicates that the user is in a high need for being trained in the area of messaging and smart phone security, the identified training needs 3020 associated with this particular threat scenario as identified for this particular user in this particular context. [0239] At block 2106, the risk assessment manager 110 may transmit the security item 112 and/or training item 124 to at least one target user. At block 2108, the risk assessment manager 110 and/or risk assessment agent 142 may determine if the target user performs a predefined security item interaction and/or training item interaction that indicates a security vulnerability of the user. If this determination is positive, the risk assessment manager 110, at block 2110, may assess the security of the user's device 104 and/or user's properties and add the details of this assessment to the user's technical details and/or profile details in a profile 118. The risk assessment manager 110, at block 2112, may also record the user's security item interaction data and/or training item interaction data and add this action/behavior to the user's behavior details in a profile 118.) As per Claim 7, Hawthorn teaches: The system of claim 1, wherein the one or more processors are further configured to receive attributes of the entity via a user interface presented via a display. (in at least [0111] Campaign manager 204 may populate and/or save the target user options 722 with group and employee information based on client and employee profiles. 
The campaign manager 204 may analyze the client profiles 120 to identify the various groups associated with the client and also analyze the employee profiles 122 to identify the employees of the client and the groups of the client associated with the employees. FIG. 8 illustrates examples of client (e.g., an entity utilizing the risk assessment manager 110) profiles and FIG. 9 shows examples of employee profiles. In the example shown in FIG. 8, each row 802, 804, 806 in the table 800 may correspond to a client profile. Each profile 802, 804, 806 also may store separate from one another [0115] FIG. 9 illustrates an employee profile 118. Table 900 may include a first column 908 having the employee ID of the employee associated with the profile 118; a second column 910 entitled “Communication Address;” a third column 912 entitled “Client ID;” a fourth column 914 entitled “Campaign;” a fifth column 916 entitled “Security Item;” a sixth column 918 entitled “Action;” a seventh column 920 entitled “System Attributes;” and/or an eighth column 922 entitled “Statistics.” These columns of table 900 are exemplary. Table 900 may include additional columns with various data. The “Employee ID” column 908 may include entries 919 identifying an employee associated with the employee profile. This column 908 may also include an entry 923 identifying the role of the employee within the client/company, and an entry 925 identifying the group within the client/company that the employee is a part of. The “Client ID” column 910 may include entries 924 identifying a client that the employee works for. [0138] Once a campaign has been started, the security item generator 206 may analyze the profile 122 of the campaign to identify users and/or user systems 104, 106 who are to be presented with security items 112 and/or training items 124 as part of the campaign. For example, the item generator 206 may analyze the “Target Users” entry 1016 of the profile 122 and identify a user group (finance, marketing, legal, etc.), individual user IDs, individual communication addresses (email addresses, instant messaging addresses, phone number, etc.), and/or the like. If a user group is provided, the security item generator 206 may analyze employee profiles 118 to identify employees associated with the campaign belonging to the identified group. [0220] FIG. 1, users of security system 102 may be able to view various types of reports 128 for a client subscribing to the risk assessment manager 110. A user of security system 102 may access an interactive environment 202 and selects a “Reports” widget 314 as indicated by the dashed box 1502 in FIG. 15. When a user selects this widget 314, a report area 1502 of the interactive environment 202 may be displayed within a portion of the interactive environment 202. [0232] Statistics also may calculated on the data collected during one or more campaigns. For example, collected data may be analyzed and compared to data available from one or more data brokers. Accordingly, risk assessment manager 110 may predict if someone is more susceptible to security-threat-based messages based on their demographic data. For example, if an analysis of the data shows that people who shop at given store and drive a red van are more likely to interact with a security item 112 and/or training item 124 in a compromising manner, risk assessment manager 110 may score a user as more risky before they are sent a security-threat-based message.) 
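[Editor's illustration, not part of the Office Action record or the cited art: the phish-prone metric recited in claims 1 and 3, and the comparison against other entities sharing an attribute recited in claims 2, 8, and 9, can be sketched roughly as follows. All names, attribute labels, and values are hypothetical.]

```python
# Hypothetical sketch only -- names and numbers are invented, not from the
# application or the cited references.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    attributes: set        # e.g., {"industry:finance", "size:1000+"}
    delivered: int = 0     # simulated phishing communications delivered
    clicked: int = 0       # recipients who clicked a simulated link

def phish_prone_pct(e: Entity) -> float:
    # Claim-3-style metric: users who clicked in relation to users who received.
    return 100.0 * e.clicked / e.delivered if e.delivered else 0.0

def peer_benchmark(entity: Entity, others: list) -> float:
    # Claim-8/9-style comparison: restrict to entities sharing at least one
    # attribute with the given entity, then average their percentages.
    peers = [o for o in others if entity.attributes & o.attributes]
    return (sum(phish_prone_pct(p) for p in peers) / len(peers)) if peers else 0.0

acme = Entity("Acme", {"industry:finance"}, delivered=200, clicked=30)
others = [Entity("Peer1", {"industry:finance"}, 100, 20),
          Entity("Peer2", {"industry:retail"}, 100, 5)]
print(phish_prone_pct(acme))         # 15.0
print(peer_benchmark(acme, others))  # 20.0 -- only Peer1 shares an attribute
```

[The division of clicking users by receiving users mirrors claim 3's "first number … in relation to a second number" language, and the shared-attribute filter mirrors claim 8's "entities that share at least one of the attributes."]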
As per Claim 8, Hawthorn teaches: The system of claim 1, wherein the one or more processors are further configured to compare the attributes for the entity to attributes of other entities that share at least one of the attributes. (in at least [0226] FIG. 17 shows another example of information that may be displayed to the user of security system 102 as part of the campaign summary and/or report. For example, FIG. 17 illustrates that an overall risk score 1702 has been calculated for the client when compared to other clients subscribing to the risk assessment manager 110. A client's overall risk score may be based on the risk scores associated with its employees. A client's overall risk score may be calculated based on the metrics discussed above with respect to FIG. 15 (e.g., open/interactions/vulnerable/trained/reported/compromised). [0228] A client's risk score 1702 may be calculated in a real-time manner and/or according to various scripts that execute on a minute, hourly, daily, monthly, and/or yearly basis. FIG. 17 illustrates that a user may be shown a list/graph of the risk scores 1704 for each group/department of a client. A group's risk score 1702 may be calculated based on the risk scores of the individuals within that group. For example, all of the risk scores of the individuals within a group may be added and/or averaged to obtain the group's overall risk score 1702. A user of security system 102 may be able to select one or more of these groups to see performance and/or technical information with respect to a given campaign, multiple campaigns, and/or all campaigns based on the group's employees. [0230] FIG. 18 illustrates a list/graph 1802 of risk scores for each employee, which may identify a company's riskiest and least risky employees. For example, a user may be able to select one or more employees to see employee performance, property, and/or technical data with respect to a given campaign, multiple campaigns, and/or all campaigns participated in by the employee. FIG. 18 illustrates a graph 1804 that may be displayed to a user of security system 102 showing the client's risk score compared to those of other clients within a specific industry selected by the user. Graph 1804 may present the statistics displayed in the table 1514 discussed above for the client and for other clients in the selected industry. A user of security system 102 may be able to select, via one or more displayed options 1806, the industry for which these metrics are displayed. [0232] Statistics also may be calculated on the data collected during one or more campaigns. For example, collected data may be analyzed and compared to data available from one or more data brokers. Accordingly, risk assessment manager 110 may predict if someone is more susceptible to security-threat-based messages based on their demographic data. For example, if an analysis of the data shows that people who shop at a given store and drive a red van are more likely to interact with a security item 112 and/or training item 124 in a compromising manner, risk assessment manager 110 may score a user as more risky before they are sent a security-threat-based message.)

As per Claim 9, Hawthorn teaches: The system of claim 1, wherein the one or more processors are further configured to determine, based on a comparison of the percentage of users of the entity who are phish-prone to users of one or more other entities that share at least one of the attributes, the configuration of the one or more simulated phishing campaigns.
(in at least [0108] A first target user option 714 may allow a user to select users of user systems 104, 106, groups (e.g., finance group, marketing group, information technology group, legal group, intern group, etc.) within the entity associated with the campaign, and/or an entire entity. Each group may include one or more users of user systems 104, 106 to receive a security item 112 and/or training item 124 generated based on the template(s) of the campaign and/or manually generated. [0124] A campaign may be configured to send different security items 112 and/or training items 124 to different recipients based on a performance history, role, associated group, risk score, and/or the like. [0214] The system may include a dataset of possible threat scenarios for which a context-aware cybersecurity training system may determine that a user is at risk. The system may include a dataset of associations between threat scenarios and associated actions that may indicate that a person who performs the actions may be at risk for the threat scenario. These associations may provide a basis for a training needs model (See FIG. 24). [0216] The system may receive sensed action data for multiple users and store that data in correlation with relevant attributes of the data, such as identifying information for the multiple users (e.g., unique device IDs for different users) and a date of the action or actions taken by the users, in a data set such as a user profile, user behavior data set, or historical user training data set, where the data may be organized by user. [0217] A partial training needs model based on simple threshold levels is illustrated in FIG. 24. The model may associate various threat scenarios 2020 with various user actions 2030 that may be detected. When the system determines that a user action 2030 has been taken at least a threshold number of times 3010 in response to the threat scenario, the model will identify one or more training needs 3020 that should be provided to the user, optionally with priorities for the training needs. [0228] A client's risk score 1702 may be calculated in a real-time manner and/or according to various scripts that execute on a minute, hourly, daily, monthly, and/or yearly basis. FIG. 17 illustrates that a user may be shown a list/graph of the risk scores 1704 for each group/department of a client. A group's risk score 1702 may be calculated based on the risk scores of the individuals within that group. For example, all of the risk scores of the individuals within a group may be added and/or averaged to obtain the group's overall risk score 1702. A user of security system 102 may be able to select one or more of these groups to see performance and/or technical information with respect to a given campaign, multiple campaigns, and/or all campaigns based on the group's employees. [0232] Statistics also may be calculated on the data collected during one or more campaigns. For example, collected data may be analyzed and compared to data available from one or more data brokers. Accordingly, risk assessment manager 110 may predict if someone is more susceptible to security-threat-based messages based on their demographic data. For example, if an analysis of the data shows that people who shop at a given store and drive a red van are more likely to interact with a security item 112 and/or training item 124 in a compromising manner, risk assessment manager 110 may score a user as more risky before they are sent a security-threat-based message.)
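The training needs model quoted in [0214] and [0217] reduces to a table lookup plus a threshold check: count how often a monitored action occurs for a given threat scenario and, once the count reaches the threshold 3010, emit the associated training needs 3020 with optional priorities. The sketch below is one plausible reading of FIG. 24 as described, not Hawthorn's actual implementation; the rule set, thresholds, and priority labels are invented for illustration.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    # One row of the FIG. 24 model: threat scenario (2020), detectable user
    # action (2030), trigger threshold (3010), training needs (3020).
    scenario: str
    action: str
    threshold: int
    training_needs: tuple
    priority: str  # priorities are optional per [0217]; labels assumed here

# Illustrative rules only; thresholds and priorities are assumptions.
RULES = [
    Rule("malicious SMS", "replied to SMS from unknown sender", 1,
         ("messaging security", "smartphone security"), "high"),
    Rule("rogue Wi-Fi access point", "connected to untrusted access point", 2,
         ("wireless security",), "medium"),
]

def identify_training_needs(action_counts):
    """Return (training need, priority) pairs for every rule whose action
    has occurred at least the threshold number of times 3010."""
    needs = []
    for rule in RULES:
        if action_counts[rule.action] >= rule.threshold:
            needs.extend((need, rule.priority) for need in rule.training_needs)
    return needs

# Example mirroring [0217]: a single reply to a mock malicious SMS is enough
# to flag a high-priority need for messaging and smartphone security training.
counts = Counter({"replied to SMS from unknown sender": 1})
print(identify_training_needs(counts))
# [('messaging security', 'high'), ('smartphone security', 'high')]
```

A threshold of 1 reproduces the quoted SMS example, where one reply is treated as high risk; a higher threshold would tolerate occasional lapses before scheduling training.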
As per Claim 10, Hawthorn teaches: The system of claim 1, wherein the one or more processors are further configured to generate in the …, according to the schedule, the one or more graphical representations of each of the one or more simulated phishing campaigns and an electronic based training. (in at least [0117] FIG. 7, a campaign manager 206 may present campaign delivery options 722 via the interactive environment 202. Campaign delivery options 722 may be displayed within the campaign area 324, within a new window, and/or the like. The campaign delivery options 722 may allow a user of security system 102 to configure delivery parameters associated with a campaign. A first delivery option 714 may schedule a campaign for immediate delivery. For example, as soon as the user of security system 102 finalizes and saves a campaign, the security item generator 206 may automatically generate a security item 112 and/or training item 124 to be transmitted to the designated recipients at user systems 104, 106 based on a template 114 included in the campaign. A second delivery option 716 may allow a user of security system 102 to enter a starting date and/or time and/or an ending date and/or time. When the specified start date and/or time occurs, the security item generator 206 may automatically generate and transmit a security item 112 and/or training item 124 to designated recipients at user systems 104, 106 based on at least a template 114 included in the campaign. A third delivery option 718 may allow a user of security system 102 to select a staggered delivery of the campaign. When a user of security system 102 specifies an end date and/or time 720 for delivery, campaign generation and delivery may occur until that date and/or time. [0225] The campaign summary 1602 may also provide campaign statistics to the user in one or more different formats. For example, a campaign summary 1602 may include a graph 1618 displaying the statistics displayed in the table 1514 discussed above with respect to FIG. 15. It should be noted that the campaign statistics are not limited to those shown in FIG. 16. [0226] FIG. 17 shows another example of information that may be displayed to the user of security system 102 as part of the campaign summary and/or report. For example, FIG. 17 illustrates that an overall risk score 1702 has been calculated for the client when compared to other clients subscribing to the risk assessment manager 110. A client's overall risk score may be based on the risk scores associated with its employees. A client's overall risk score may be calculated based on the metrics discussed above with respect to FIG. 15 (e.g., open/interactions/vulnerable/trained/reported/compromised). [0229] FIG. 17 illustrates a graph 1706 that may be displayed to show a client's risk score over time. In this example, the user may be able to select a temporal-based filter 1708 to see how a client's risk score changed on a minute, hourly, daily, weekly, and/or monthly basis. FIG. 17 also illustrates a time distribution 1710 of user interactions with security items 112 and/or training items 124 during the selected campaign. In this example, a time distribution 1710 may display a year's worth of data, with each discrete division representing days, grouped further into months. As an example, various graphical features may be used to illustrate campaign reporting. For example, darker shading may indicate more interactions with security items 112 and/or training items 124 on a particular day.
This may be expanded to a Month/Week/Day view and allow a viewer to identify when users are more likely to interact with a security item 112 and/or training item 124, such as early morning, late at night, at home vs. at the office, etc.)

Although implied, Hawthorn does not expressly disclose the following limitation, which, however, is taught by Dion: …electronic calendar… (in at least [0061] FIG. 6 shows an event calendar interface screen for scheduling and reviewing events. A computer program product delivered to users and administrators is represented by a calendar/my events interface screen 199. The robustness of the education competency and compliance management method, system, and computer product is exhibited by the user choices for scheduling and tracking calendar events on a my events interface screen 199. The displayed required events are generated from the administrator grouping of an individual, individual-specific items by administration, and items desired by the individual for advancement and enrichment. The navigation bar 153 shows the most recent navigation selection to be My Events. Though not shown in this figure, the navigation area 151 is included with screen 199. Immediately below bar 153 is a screen title area 201 displaying "Registered Events for: user name here". The user name has been omitted for discussion purposes. The system will display the events selected for the user name that has logged into the system. Below area 201 is an area 203 with the instructions "These are the events for which you are currently registered. To view or register for additional events, go to the Event Calendar".) The reason and rationale to combine Hawthorn and Dion are the same as recited above.

As per Claims 11-20, which are directed to a method (see at least Hawthorn [0036]), they substantially recite the subject matter of Claims 1-10 and are rejected based on the same reasoning and rationale.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to PO HAN MAX LEE, whose telephone number is (571) 272-3821. The examiner can normally be reached Mon-Thurs, 8:00 am - 7:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Rutao Wu, can be reached at (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PO HAN LEE/ Primary Examiner, Art Unit 3623
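Since several of the quoted paragraphs ([0226], [0228]) lean on the same risk-score roll-up, a minimal sketch of that arithmetic may help: individual scores are added and/or averaged into a group score, and employee scores roll up into the client's overall score. Hawthorn leaves the exact aggregation open, so averaging is assumed here (it keeps groups of different sizes comparable), and all function names are illustrative.

```python
from statistics import mean

def group_risk_score(member_scores):
    """Per [0228], a group's score may be the sum and/or average of its
    members' individual risk scores; the average is assumed here."""
    return mean(member_scores) if member_scores else 0.0

def client_risk_score(groups):
    """Per [0226], a client's overall score is based on its employees'
    scores; a flat average over all employees is assumed."""
    all_scores = [s for scores in groups.values() for s in scores]
    return mean(all_scores) if all_scores else 0.0

# Example roll-up for a client with two departments.
groups = {"finance": [72.0, 64.0], "legal": [31.0]}
print({name: group_risk_score(s) for name, s in groups.items()})
# {'finance': 68.0, 'legal': 31.0}
print(client_risk_score(groups))  # ~55.67 across all three employees
```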

Prosecution Timeline

Mar 02, 2023
Application Filed
Mar 21, 2025
Non-Final Rejection — §101, §103, §DP
Jun 26, 2025
Response Filed
Aug 23, 2025
Final Rejection — §101, §103, §DP
Oct 27, 2025
Response after Non-Final Action
Nov 17, 2025
Request for Continued Examination
Nov 25, 2025
Response after Non-Final Action
Jan 26, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602629
USING MACHINE LEARNING TO PREDICT FLEET MOVES IN HYDRAULIC FRACTURING OPERATIONS
2y 5m to grant · Granted Apr 14, 2026
Patent 12548089
OPTIMIZATION OF HYBRID GROWING INFRASTRUCTURE FOR DIFFERENT WEATHER PROFILES AND MARKET CONDITIONS
2y 5m to grant · Granted Feb 10, 2026
Patent 12548046
SYSTEM FOR ACCURATE PREDICTIONS USING A PREDICTIVE MODEL
2y 5m to grant · Granted Feb 10, 2026
Patent 12547241
SYSTEMS AND METHODS FOR COMPUTER-IMPLEMENTED SURVEYS
2y 5m to grant · Granted Feb 10, 2026
Patent 12361363
METHOD AND SYSTEM FOR PROFICIENCY IDENTIFICATION
2y 5m to grant · Granted Jul 15, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

3-4
Expected OA Rounds
32%
Grant Probability
74%
With Interview (+41.2%)
3y 6m
Median Time to Grant
High
PTA Risk
Based on 158 resolved cases by this examiner. Grant probability derived from career allow rate.
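
As a rough check on how the interview figure relates to the baseline, a minimal calculation, assuming the lift is additive in percentage points (the page does not state its formula, so this reconstruction is a guess):

```python
# Assumed reconstruction of the projection arithmetic; the additive lift is
# a guess, and the displayed 32% baseline is itself a rounded figure.
grant_probability = 32.0   # % baseline, from the career allow rate
interview_lift = 41.2      # percentage points, per "+41.2%"
print(grant_probability + interview_lift)  # 73.2, consistent with the
# displayed 74% once rounding of the 32% baseline is accounted for
```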
