Prosecution Insights
Last updated: April 19, 2026
Application No. 18/791,731

METHOD AND SYSTEM FOR CYBERSECURITY INCIDENT RESOLUTION

Non-Final OA: §101, §102, §103
Filed: Aug 01, 2024
Examiner: PUJOLS-CRUZ, MARJORIE
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Cognitive Security Inc.
OA Round: 1 (Non-Final)
Grant Probability: 18% (At Risk)
Projected OA Rounds: 1-2
Time to Grant: 3y 2m
With Interview: 46%

Examiner Intelligence

Grants only 18% of cases.

Career Allow Rate: 18% (25 granted / 136 resolved; -33.6% vs TC avg)
Interview Lift: +27.9% for resolved cases with an interview vs without
Avg Prosecution: 3y 2m (typical timeline)
Currently Pending: 50
Total Applications: 186 (career history, across all art units)
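The headline figures above follow from simple arithmetic on the career data. A minimal sketch, assuming the "With Interview" number is the career allow rate plus the interview lift in percentage points (the page does not state that relationship explicitly):

```python
# Hedged sketch: reproduces the dashboard arithmetic from the career stats shown
# above. The granted/resolved counts and the 27.9-point lift come from the page;
# treating "With Interview" as base rate + lift is our assumption.

granted, resolved = 25, 136
career_allow_rate = granted / resolved * 100        # career allowance rate, %
interview_lift = 27.9                               # percentage points
with_interview = career_allow_rate + interview_lift # projected rate with interview

print(f"Career allow rate: {career_allow_rate:.1f}%")  # ~18.4%, displayed as 18%
print(f"With interview:    {with_interview:.1f}%")     # ~46.3%, displayed as 46%
```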

Statute-Specific Performance

§101: 38.7% (-1.3% vs TC avg)
§103: 43.3% (+3.3% vs TC avg)
§102: 9.4% (-30.6% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Deltas are vs a Tech Center average estimate • Based on career data from 136 resolved cases

Office Action

§101 §102 §103
DETAILED ACTION

This communication is a Non-Final Office Action rejection on the merits. Claims 1-16 are currently pending and have been addressed below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without reciting significantly more.

Independent Claim 1

Step One - First, pursuant to Step 1 of the January 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), 84 Fed. Reg. 53, claim 1 is directed to a method, which is a statutory category.

Step 2A, Prong One - Claim 1 recites: A method comprising: communicating training data to in situ train a user on how to identify potential security incidents; tracking performance of the user based on the training data; and storing a profile of the user, the profile indicating a level of performance of the user. These claim elements are considered to be abstract ideas because they are directed to “certain methods of organizing human activity,” which include “managing personal behavior.” In this case, using historical data to create a user profile indicating a level of performance is considered a managing-personal-behavior activity (e.g., following rules or instructions).
If a claim limitation, under its broadest reasonable interpretation, covers managing personal behavior, then it falls within the “certain methods of organizing human activity” grouping of abstract ideas. Accordingly, the claim recites an abstract idea.

Step 2A, Prong Two - The judicial exception is not integrated into a practical application. Claim 1 includes additional elements: a computing device; and a database. The computing device is merely used to communicate training data to in situ train a user on how to identify potential security incidents (Paragraph 0026). The database is merely used to store a profile of the user (Paragraph 0026). Merely stating that a step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). The elements “computing device” and “database” are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer element. Also, the computing device is considered a “field of use” limitation since it is just used to provide information for evaluating a user, but the technology is not improved (MPEP 2106.05(h)). Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Therefore, the claim is directed to an abstract idea.

Step 2B - The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the claims describe how to generally “apply” the concept of tracking performance of the user based on the training data. The specification shows that the computing device is merely used to communicate training data to in situ train a user on how to identify potential security incidents (Paragraph 0026).
The database is merely used to store a profile of the user (Paragraph 0026). Merely stating that a step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)). Also, the functions of “communicating” and “storing” are considered well-understood, routine, and conventional functions since they are just “receiving or transmitting data over a network” and “storing information in a memory” (MPEP 2106.05(d)). Thus, nothing in the claim adds significantly more to the abstract idea. The claim is ineligible.

Independent claim 8 is directed to a system at Step 1, which is a statutory category. Claim 8 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Claim 8 further recites: a processor – which is treated as just an explicit “processor/computer” for executing the operations and is treated under MPEP 2106.05(f) in the same manner as claim 1. Claim 8 further recites: a communications module; and a computer network – which are merely used to receive data from all of the computing devices (Paragraph 0109). Accordingly, these limitations are viewed as “apply it on a computer” at Step 2A, Prong Two and Step 2B, and also as “field of use” to the extent it is stated that the system receives information from a plurality of users. Thus, the claim is ineligible.

Independent claim 15 is directed to an article of manufacture at Step 1, which is a statutory category. Claim 15 recites similar limitations as claim 1 and is rejected for the same reasons at Step 2A, Prong One; Step 2A, Prong Two; and Step 2B. Claim 15 further recites: a processor; and a non-transitory computer-readable storage medium – which are treated as just an explicit “processor/computer” for storing and executing the operations and are treated under MPEP 2106.05(f) in the same manner as claim 1. Accordingly, these limitations are viewed as “apply it on a computer” at Step 2A, Prong Two and Step 2B. The claim is not patent eligible.
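For readers tracing the eligibility analysis, the three steps recited in claim 1, as quoted in this Office Action, can be sketched in generic code. Every name below is a hypothetical illustration, not the applicant's disclosed implementation:

```python
# Hedged sketch of the three steps recited in independent claim 1, as the Office
# Action quotes them. All names (train_user, UserProfile, etc.) are illustrative
# assumptions, not the applicant's implementation.

from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    performance_level: float  # the "level of performance" the profile indicates

def train_user(user_id: str, training_data: list) -> None:
    """Step 1: communicate training data to in situ train the user."""
    ...  # delivery mechanism is unspecified at this level of generality

def track_performance(user_id: str, responses: list) -> float:
    """Step 2: track performance of the user based on the training data."""
    return sum(responses) / len(responses) if responses else 0.0

def store_profile(db: dict, user_id: str, level: float) -> UserProfile:
    """Step 3: store a profile indicating the user's level of performance."""
    profile = UserProfile(user_id, level)
    db[user_id] = profile
    return profile
```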
Dependent claims 2-3, 7, 9-10, 14, and 16 are not directed to any additional claim elements. Rather, these claims offer further descriptive functions of elements found in the independent claims - such as wherein the computing device is used to: receive an indication of a potential security incident; identify a user; retrieve the profile of the user; perform an action on the potential security incident; receive an indication of a potential security incident from another user; and take immediate action on the potential security incident. Merely stating that a step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)), which is applicable at both Step 2A, Prong Two and Step 2B. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. In this case, the functions of “receiving an incident” and “retrieving a profile” are considered well-understood, routine, and conventional functions since they are just “receiving or transmitting data over a network” (MPEP 2106.05(d)). Also, merely “halting communication or automatic quarantine, etc.” does not impose any meaningful limits on the judicial exception and amounts to insignificant extra-solution activity (MPEP 2106.05(g)). Thus, nothing in the claims adds significantly more to the abstract idea. The claims are ineligible.

Dependent claims 4-6 and 11-13 are not directed to any additional claim elements. Rather, these claims offer further descriptive functions of elements found in the independent claims - such as wherein the computing device is used to: place the security incident in a position of the queue based on the profile of the user; place the security incident at the top position of the queue when the profile of the user indicates high performance of the user; and place the security incident at the top position of the queue when the profile of the user indicates low performance of the user.
Merely stating that a step is performed by a computer component results in “apply it” on a computer (MPEP 2106.05(f)), which is applicable at both Step 2A, Prong Two and Step 2B. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. In this case, “receiving information from a user profile” is just “mere data gathering” used for a prioritization analysis (MPEP 2106.05(g)). Thus, nothing in the claims adds significantly more to the abstract idea. The claims are ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless - (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 8-10, and 15-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hawthorn et al. (US 2017/0244746 A1).

Regarding claim 1, Hawthorn et al.
discloses a method comprising (Paragraph 0007, Various example embodiments include systems and methods for assessing security risks of users in computing networks. Additionally, a system and method in accordance with example embodiments may generate an interaction item for training or security, and it may send the interaction item to a user of an end user electronic device): communicating training data to a computing device to in situ train a user on how to identify potential security incidents (Paragraph 0054, Training items 124 may present an end user associated with user system 104, 106 with training data associated with user and/or network-based security scenarios. Training items 124 may include audio/video data, tests, quizzes, questionnaires, interactive applications, scenario-based challenge/response applications, and/or the like to obtain feedback from an end user using user system 104, 106 regarding knowledge and/or proficiency associate with user and/or network-based security issues. Feedback and/or responses to security items 112 and/or training items 124 may be received and stored as security item interaction data 132 and/or training item interaction data 134, respectively; Paragraph 0056, Input/output module 140 may include for example, I/O devices, which may be configured to provide input and/or output to user system); tracking performance of the user based on the training data (Paragraph 0054, Security item interaction data 132 and/or training item interaction data 134 may be used to generate an initial risk score for an end user, a group of end users, and/or an organization. Security item interaction data 132 and/or training item interaction data 134 may be used to update a risk score for an end user, a group of end users, and/or an organization. 
Security item interaction data 132 and/or training item interaction data 134 may be used to determine a sophistication level associated with subsequently transmitted security items 112 and/or training items 124 as well as the frequency of future occurrence for each end user based on the end user's score; Examiner interprets the “risk score” as the “performance of the user”); and storing a profile of the user in a database, the profile indicating a level of performance of the user (Paragraph 0115, As discussed above, FIG. 9 illustrates an employee profile 118; Paragraph 0116, The “Score” column 921 may include entries 938 identifying the employee's risk score; Paragraph 0211, Once a risk score has been determined for a given recipient user, a risk score may be saved and/or stored within the user profile 118 associated with the recipient user. Risk scores may be stored in other profiles as well. If a previous risk score is already associated with the user, this previous score may be updated with the new score; Paragraph 0136, This user risk score may provide an organization with a quantified indication as to the level of risk a given user exposes the organization to with respect to the security of its computing networks).

Regarding claim 8, Hawthorn et al. discloses a system comprising (Paragraph 0007, Various example embodiments include systems and methods for assessing security risks of users in computing networks.
Additionally, a system and method in accordance with example embodiments may generate an interaction item for training or security, and it may send the interaction item to a user of an end user electronic device): at least one processor; a communications module, coupled to the at least one processor, for communicating with one or more computer networks; and a memory coupled to the at least one processor and storing instructions that, when executed by the at least one processor, cause the at least one processor to (Paragraph 0045, he user system 104, 106 and/or security system 102 may further include, for example, a processor, which may be several processors, a single processor, or a single device having multiple processors. The user system 104, 106 and/or security system 102 may access and be communicatively coupled to the network 108. The user system 104, 106 and/or security system 102 may store information in various electronic storage media, such as, for example, a database (not shown); Paragraph 0248, The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention): communicate training data to a computing device to in situ train a user on how to identify potential security incidents (Paragraph 0054, Training items 124 may present an end user associated with user system 104, 106 with training data associated with user and/or network-based security scenarios. Training items 124 may include audio/video data, tests, quizzes, questionnaires, interactive applications, scenario-based challenge/response applications, and/or the like to obtain feedback from an end user using user system 104, 106 regarding knowledge and/or proficiency associate with user and/or network-based security issues. 
Feedback and/or responses to security items 112 and/or training items 124 may be received and stored as security item interaction data 132 and/or training item interaction data 134, respectively; Paragraph 0056, Input/output module 140 may include for example, I/O devices, which may be configured to provide input and/or output to user system); track performance of the user based on the training data (Paragraph 0054, Security item interaction data 132 and/or training item interaction data 134 may be used to generate an initial risk score for an end user, a group of end users, and/or an organization. Security item interaction data 132 and/or training item interaction data 134 may be used to update a risk score for an end user, a group of end users, and/or an organization. Security item interaction data 132 and/or training item interaction data 134 may be used to determine a sophistication level associated with subsequently transmitted security items 112 and/or training items 124 as well as the frequency of future occurrence for each end user based on the end user's score; Examiner interprets the “risk score” as the “performance of the user”); and store a profile of the user in a database, the profile indicating a level of performance of the user (Paragraph 0115, As discussed above, FIG. 9 illustrates an employee profile 118; Paragraph 0116, The “Score” column 921 may include entries 938 identifying the employee's risk score; Paragraph 0211, Once a risk score has been determined for a given recipient user, a risk score may be saved and/or stored within the user profile 118 associated with the recipient user. Risk scores may be stored in other profiles as well. 
If a previous risk score is already associated with the user, this previous score may be updated with the new score; Paragraph 0136, This user risk score may provide an organization with a quantified indication as to the level of risk a given user exposes the organization to with respect to the security of its computing networks). Regarding claim 15, Hawthorn et al. discloses a non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor of a computer system, cause the computer system to (Paragraph 0007, Various example embodiments include systems and methods for assessing security risks of users in computing networks. Additionally, a system and method in accordance with example embodiments may generate an interaction item for training or security, and it may send the interaction item to a user of an end user electronic device; Paragraph 0248, The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention; Paragraph 0249, A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire): communicate training data to a computing device to in situ train a user on how to identify potential security incidents (Paragraph 0054, Training items 124 may present an end user associated with user system 104, 106 with training data associated with user and/or network-based security scenarios. 
Training items 124 may include audio/video data, tests, quizzes, questionnaires, interactive applications, scenario-based challenge/response applications, and/or the like to obtain feedback from an end user using user system 104, 106 regarding knowledge and/or proficiency associate with user and/or network-based security issues. Feedback and/or responses to security items 112 and/or training items 124 may be received and stored as security item interaction data 132 and/or training item interaction data 134, respectively; Paragraph 0056, Input/output module 140 may include for example, I/O devices, which may be configured to provide input and/or output to user system); track performance of the user based on the training data (Paragraph 0054, Security item interaction data 132 and/or training item interaction data 134 may be used to generate an initial risk score for an end user, a group of end users, and/or an organization. Security item interaction data 132 and/or training item interaction data 134 may be used to update a risk score for an end user, a group of end users, and/or an organization. Security item interaction data 132 and/or training item interaction data 134 may be used to determine a sophistication level associated with subsequently transmitted security items 112 and/or training items 124 as well as the frequency of future occurrence for each end user based on the end user's score; Examiner interprets the “risk score” as the “performance of the user”); and store a profile of the user in a database, the profile indicating a level of performance of the user (Paragraph 0115, As discussed above, FIG. 9 illustrates an employee profile 118; Paragraph 0116, The “Score” column 921 may include entries 938 identifying the employee's risk score; Paragraph 0211, Once a risk score has been determined for a given recipient user, a risk score may be saved and/or stored within the user profile 118 associated with the recipient user. 
Risk scores may be stored in other profiles as well. If a previous risk score is already associated with the user, this previous score may be updated with the new score; Paragraph 0136, This user risk score may provide an organization with a quantified indication as to the level of risk a given user exposes the organization to with respect to the security of its computing networks).

Regarding claims 2, 9, and 16, which depend from claims 1, 8, and 15, Hawthorn et al. discloses all the limitations in claims 1, 8, and 15. Hawthorn et al. further discloses receiving, from a computing device, an indication of a potential security incident; identifying a user of the computing device; retrieving the profile of the user from the database; and performing an action on the potential security incident based on the profile of the user (Paragraph 0213, In another embodiment, user risk scores may be used within technical security controls to determine how a user is treated at the technical level (e.g. firewall, proxy, or email restrictions, more detailed logging over user's activities, etc.). Users with higher risk scores may have more restrictions placed on them within the computing network than users with lower risk scores. As a user positively interacts (performs actions that do not compromise the security of the computing network) with security items 112 and/or training items 124, a risk score may reduce and less network restrictions may be imposed on the user; Examiner notes that the system can control which information is provided to the user based on the updated risk score, wherein the updated score provides an indication of a potential security incident).

Regarding claims 3 and 10, which depend from claims 2 and 9, Hawthorn et al. discloses all the limitations in claims 2 and 9. Hawthorn et al.
further discloses wherein the action includes at least one of automatic quarantine, placing the potential security incident in a queue, degrading software associated with the potential security incident, halting communication with a server associated with the potential security incident, or blacklisting an IP address associated with the potential security incident (Paragraph 0213, In another embodiment, user risk scores may be used within technical security controls to determine how a user is treated at the technical level (e.g. firewall, proxy, or email restrictions, more detailed logging over user's activities, etc.). Users with higher risk scores may have more restrictions placed on them within the computing network than users with lower risk scores. As a user positively interacts (performs actions that do not compromise the security of the computing network) with security items 112 and/or training items 124, a risk score may reduce and less network restrictions may be imposed on the user; It can be noted that the claim language is written in alternative form. The limitation taught by Hawthorn et al. is based on “halting communication with a server associated with the potential security incident” since it is known by one of ordinary skill in the art that a firewall is used to halt/block potential security incidents).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made. The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 4-7 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Hawthorn et al. (US 2017/0244746 A1), in view of Kras (US 2022/0345485 A1).

Regarding claims 4 and 11, which depend from claims 3 and 10, Hawthorn et al. discloses all the limitations in claims 3 and 10. Although Hawthorn et al. discloses assessing how a user identifies security incidents, wherein the assessment is used to generate a user profile and determine restrictions of the user based on the profile (Paragraph 0213, In another embodiment, user risk scores may be used within technical security controls to determine how a user is treated at the technical level), Hawthorn et al. does not specifically disclose wherein the profile of the user is used to place the security incident in a position of the queue (e.g., prioritize the incident based on the user who reported the incident). However, Kras discloses wherein the security incident is placed in a position of the queue based on the profile of the user (Paragraph 0005, Currently, some security awareness systems calculate and provide a phish identification score for a user based on a percentage of reported suspected phishing messages that turn out to be actual phishing messages with malicious intent.
Based on the user's phish identification score, the threat detection platform and/or the security authority may prioritize analysis of messages reported by the user when there are plethora of reports on a daily basis; Paragraph 0028, In some implementations, the initial reporter's impact score may also be a function of the historic accuracy of the reporting of malicious messages by the initial reporter, i.e., the percentage of messages over time reported by the initial reporter as malicious messages that the threat detection platform confirmed are indeed malicious messages). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method used for assessing a user based on how the user identifies security incidents (e.g., based on user interactions with security items and/or training items) of the invention of Hawthorn et al. to further specify wherein the security incident is placed in a position of the queue based on the user's assessment (e.g., performance/score) of the invention of Kras because doing so would allow the method to prioritize analysis of messages reported by the user when there is a plethora of reports on a daily basis, based at least on a score that reflects the accuracy of the reported messages (see Kras, Paragraphs 0005 & 0028). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claims 5 and 12, which depend from claims 3 and 10, Hawthorn et al. discloses all the limitations in claims 3 and 10. Although Hawthorn et al.
discloses assessing how a user identifies security incidents, wherein the assessment is used to generate a user profile and determine restrictions of the user based on the profile (Paragraph 0213, In another embodiment, user risk scores may be used within technical security controls to determine how a user is treated at the technical level), Hawthorn et al. does not specifically disclose wherein the profile of the user is used to place the security incident in a position of the queue (e.g., prioritize the incident based on the user who reported the incident). However, Kras discloses wherein the security incident is placed at a top position of the queue when the profile of the user indicates high performance of the user (Paragraph 0005, Currently, some security awareness systems calculate and provide a phish identification score for a user based on a percentage of reported suspected phishing messages that turn out to be actual phishing messages with malicious intent. Based on the user's phish identification score, the threat detection platform and/or the security authority may prioritize analysis of messages reported by the user when there are plethora of reports on a daily basis; Paragraph 0028, In some implementations, the initial reporter's impact score may also be a function of the historic accuracy of the reporting of malicious messages by the initial reporter, i.e., the percentage of messages over time reported by the initial reporter as malicious messages that the threat detection platform confirmed are indeed malicious messages). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify the method used for assessing a user based on how the user identifies security incidents (e.g., based on user interactions with security items and/or training items) of the invention of Hawthorn et al.
to further specify wherein the security incident is placed in a position of the queue based on the user's assessment (e.g., performance/score) of the invention of Kras because doing so would allow the method to prioritize analysis of messages reported by the user when there is a plethora of reports on a daily basis, based at least on a score that reflects the accuracy of the reported messages (see Kras, Paragraphs 0005 & 0028). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claims 6 and 13, which depend from claims 3 and 10, Hawthorn et al. discloses all the limitations in claims 3 and 10. Although Hawthorn et al. discloses assessing how a user identifies security incidents, wherein the assessment is used to generate a user profile and determine restrictions of the user based on the profile (Paragraph 0213, In another embodiment, user risk scores may be used within technical security controls to determine how a user is treated at the technical level), Hawthorn et al. does not specifically disclose wherein the profile of the user is used to place the security incident in a position of the queue (e.g., prioritize the incident based on the user who reported the incident). However, Kras discloses wherein the security incident is placed at a top position of the queue when the profile of the user indicates low performance of the user (Paragraph 0005, Currently, some security awareness systems calculate and provide a phish identification score for a user based on a percentage of reported suspected phishing messages that turn out to be actual phishing messages with malicious intent.
Based on the user's phish identification score, the threat detection platform and/or the security authority may prioritize analysis of messages reported by the user when there are plethora of reports on a daily basis; Paragraph 0028, In some implementations, the initial reporter's impact score may also be a function of the historic accuracy of the reporting of malicious messages by the initial reporter, i.e., the percentage of messages over time reported by the initial reporter as malicious messages that the threat detection platform confirmed are indeed malicious messages). It would have been obvious to one ordinary skill in the art before the effective filing date to modify the method used for assessing a user based on how the user identifies security incidents (e.g., based on user interactions with security items and/or training items) of the invention of Hawthorn et al. to further specify wherein the security incident is placed in a position of the queue based on the user’s assessment (e.g., performance/score) of the invention of Kras because doing so would allow the method to prioritize analysis of messages reported by the user when there are plethora of reports on a daily basis based at least on a score that reflects accuracy of the reported messages (see Kras, Paragraphs 0005 & 0028). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claims 7 and 14, which are dependent of claims 2 and 9, Hawthorn et al. discloses all the limitations in claims 2 and 9. Although Hawthorn et al. discloses receiving, from a computing device, an indication of a potential security incident (Paragraph 0213, risk score based on how a user interacts with security items and/or training items), Hawthorn et al. 
does not specifically disclose receiving, from another computing device, another indication of the potential security incident. However, Kras discloses receiving, from another computing device, another indication of the potential security incident; and taking immediate action on the potential security incident (Paragraph 0179, In an example, threat detection platform 208 may process two messages reported at the same time by two different users in an order that is based on the impact scores of the two users. Threat detection platform 208 may triage the reported messages and perform analysis of reported messages within the threat detection platform, and may examine individual portions of the reported messages in a sandboxed environment to detect security threats; Paragraph 0182, Using the described solution, the initial reporter is given credit (through recognition or gamification) for being the first user to detect a malicious message that is damaging to the security of an organization or difficult for other users to detect, providing motivation for users to improve their security awareness. In some examples, other reporters who are not the first to report are also given credit for reporting messages that have not yet been defanged and sent to other users. The organization benefits from the methods and system described as it enables the organization to prioritize the triage and analysis of reported messages according to the impact scores of the users that reported them, allowing the organization to address reported messages that are most likely to be the most dangerous or impactful to the organization ahead of reported messages that are less likely to be dangerous or impactful, lowering the overall security risk of the organization). 
It would have been obvious to one ordinary skill in the art before the effective filing date to modify the method used for receiving an indication of a potential security incident (e.g., based on user interactions with security items and/or training items) of the invention of Hawthorn et al. to further specify receiving another indication of the potential security incident (e.g., from another user) of the invention of Kras because doing so would allow the method to prioritize the triage and analysis of reported messages according to the impact scores of the users that reported them, allowing the organization to address reported messages that are most likely to be the most dangerous or impactful to the organization ahead of reported messages that are less likely to be dangerous or impactful (see Kras, Paragraph 0182). Further, the claimed invention is merely a combination of old elements, and in combination each element would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Kras et al. (US 2019/0005428 A1) – discloses performing the simulated phishing attack can help expose a lack of vigilance and/or know-how in a user or set of users of a device. In a security awareness program, the information learned from the simulated phishing attack can be used to provide targeted training or remedial actions to minimize risk associated with such attacks. For example, user know-how can be improved by providing targeted, real-time training to the user at the time of failing a test provided by the simulated phishing attack (see at least Paragraph 0096). 
Fritzson (WO 2012/068255 A2) – discloses focused phishing awareness training wherein "teachable moments" are exploited so as to provide focused training for users that have demonstrated susceptibility to phishing. The systems and methods also adapt to evolving threats by including live exercises that are performed regularly with escalated complexity based on the level of user awareness demonstrated in previously completed exercises (see at least Paragraph 0021).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARJORIE PUJOLS-CRUZ, whose telephone number is (571) 272-4668. The examiner can normally be reached Mon-Thu, 7:30 AM - 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patricia H. Munson, can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.P./
Examiner, Art Unit 3624

/PATRICIA H MUNSON/
Supervisory Patent Examiner, Art Unit 3624
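The queue-placement mechanism the rejection attributes to Kras (Paragraphs 0005, 0028, 0179) can be sketched as a priority queue keyed on the reporting user's historic accuracy. This is an illustrative reconstruction under that reading, not code from either application; every class, field, and file name below is hypothetical.

```python
import heapq
from dataclasses import dataclass

@dataclass
class UserProfile:
    reports_confirmed: int = 0  # reports later confirmed malicious
    reports_total: int = 0      # all reports submitted by the user

    @property
    def accuracy(self) -> float:
        # Historic accuracy: fraction of the user's reported messages
        # that were confirmed malicious (cf. Kras para. 0028).
        if self.reports_total == 0:
            return 0.0
        return self.reports_confirmed / self.reports_total

class IncidentQueue:
    """Triage queue: reports from higher-accuracy users are analyzed first."""

    def __init__(self) -> None:
        self._heap: list[tuple[float, int, str]] = []
        self._counter = 0  # tie-breaker that preserves report order

    def report(self, incident: str, reporter: UserProfile) -> None:
        # Negate the score so Python's min-heap pops the highest
        # accuracy first (higher score -> earlier position in queue).
        heapq.heappush(self._heap, (-reporter.accuracy, self._counter, incident))
        self._counter += 1

    def next_incident(self) -> str:
        return heapq.heappop(self._heap)[2]

# Two messages reported at the same time by two different users are
# processed in order of the reporters' scores (cf. Kras para. 0179).
veteran = UserProfile(reports_confirmed=18, reports_total=20)  # accuracy 0.90
novice = UserProfile(reports_confirmed=1, reports_total=10)    # accuracy 0.10

q = IncidentQueue()
q.report("suspicious-invoice.eml", novice)
q.report("ceo-wire-request.eml", veteran)
print(q.next_incident())  # the veteran's report is triaged first
```

The negated-score trick is just the standard way to get max-priority behavior from `heapq`; the claims' "low performance at top" variant (claims 6 and 13) would simply drop the negation.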

Prosecution Timeline

Aug 01, 2024
Application Filed
Oct 01, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12106240
SYSTEMS AND METHODS FOR ANALYZING USER PROJECTS
2y 5m to grant · Granted Oct 01, 2024
Patent 12014298
AUTOMATICALLY SCHEDULING AND ROUTE PLANNING FOR SERVICE PROVIDERS
2y 5m to grant · Granted Jun 18, 2024
Patent 11966927
Multi-Task Deep Learning of Client Demand
2y 5m to grant · Granted Apr 23, 2024
Patent 11941651
LCP Pricing Tool
2y 5m to grant · Granted Mar 26, 2024
Patent 11847602
SYSTEM AND METHOD FOR DETERMINING AND UTILIZING REPEATED CONVERSATIONS IN CONTACT CENTER QUALITY PROCESSES
2y 5m to grant · Granted Dec 19, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
18%
Grant Probability
46%
With Interview (+27.9%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
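The "With Interview" figure above appears to be the career allow rate plus the examiner's interview lift, treated as additive percentage points. A one-line sketch of that arithmetic, assuming the additive model; the function name is illustrative:

```python
def with_interview(base_rate: float, interview_lift: float) -> float:
    """Add the interview lift (in percentage points) to the base grant rate."""
    return round(base_rate + interview_lift, 1)

base = 18.4  # career allow rate: 25 / 136 resolved ≈ 18.4%, displayed as 18%
lift = 27.9  # interview lift in percentage points
print(with_interview(base, lift))  # ≈ 46.3, displayed as 46%
```

This matches the panel's 18% → 46% jump, consistent with the lift being applied as a flat percentage-point increase rather than a multiplier.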
