Prosecution Insights
Last updated: April 19, 2026
Application No. 17/324,742

SYSTEMS AND METHODS FOR USE OF EMPLOYEE MESSAGE EXCHANGES FOR A SIMULATED PHISHING CAMPAIGN

Final Rejection (§101 and §103)
Filed: May 19, 2021
Examiner: SANTIAGO-MERCED, FRANCIS Z
Art Unit: 3625
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: KnowBe4, Inc.
OA Round: 10 (Final)
Grant Probability: 29% (At Risk)
OA Rounds: 11-12
To Grant: 3y 7m
With Interview: 70%

Examiner Intelligence

Career Allow Rate: 29% (37 granted / 126 resolved; -22.6% vs TC avg)
Interview Lift: +41.1% (resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline); 49 applications currently pending
Total Applications: 175 across all art units (career history)

Statute-Specific Performance

§101: 46.3% (+6.3% vs TC avg)
§103: 35.0% (-5.0% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 6.9% (-33.1% vs TC avg)

Tech Center average is an estimate. Based on career data from 126 resolved cases.
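For reference, the deltas above can be recomputed directly. Note that the stated figures imply a uniform Tech Center 3600 average of 40.0% for every statute; that 40.0% is back-derived here from the listed deltas, not independently sourced.

```python
# Recompute the "vs TC avg" deltas from the statute-specific table above.
examiner_rates = {"101": 46.3, "103": 35.0, "102": 10.9, "112": 6.9}
tc_avg = 40.0  # implied TC 3600 average (e.g., 46.3 - 6.3), identical for all four statutes

for statute, rate in examiner_rates.items():
    delta = rate - tc_avg
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```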

Office Action

§101 §103
DETAILED ACTION

This is a Final Office Action in response to the amendment filed 01/02/2026.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-22 are currently pending in the application and have been examined.

Response to Amendment

The amendment filed 01/02/2026 has been entered.

Response to Arguments

Claim Rejections 35 U.S.C. § 101: Applicant submits on page 8 of the remarks that the claims are patent eligible. Examiner respectfully disagrees and notes that, according to the 2019 Revised Patent Subject Matter Eligibility Guidance (PEG), if a claim limitation covers observations or evaluations then it falls within the "mental process" grouping of abstract ideas. Under the 2019 PEG, the "mental processes" grouping is defined as concepts performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions. Per the October 2019 Updated Guidance, examples of claims that recite mental processes include a claim directed to "collecting information, analyzing it, and displaying certain results of the collection and analysis" where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind. Claims can recite a mental process even if they are claimed as being performed on a computer. As the Federal Circuit has explained, "Courts have examined claims that required the use of a computer and still found that the underlying, patent-ineligible invention could be performed via pen and paper or in a person's mind." Versata Dev. Group v. SAP Am., Inc., 793 F.3d 1306, 1335, 115 USPQ2d 1681, 1702 (Fed. Cir. 2015). See also Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1318, 120 USPQ2d 1353, 1360 (Fed. Cir.
2016) (‘‘[W]ith the exception of generic computer-implemented steps, there is nothing in the claims themselves that foreclose them from being performed by a human, mentally or with pen and paper.’’); Mortgage Grader, Inc. v. First Choice Loan Servs. Inc., 811 F.3d 1314, 1324, 117 USPQ2d 1693, 1699 (Fed. Cir. 2016). See MPEP 2106.04(a)(2). Further, Applicant submits that the steps recited in the claims reflect a particular improvement to a technical field. However, per the Revised October 2019 guidance: in order to determine if an invention improves the functioning of a computer or other technology (i.e. technical field) and integrate the judicial exception into a practical application, while the courts have not provided an explicit test for this consideration, MPEP 2106.04(a) and 2106.05(a) provide guidance, first the specification should be evaluated to determine if the disclosure provides sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement; second, if the specification sets forth an improvement in technology, the claim must be evaluated to ensure that the claim itself reflects the disclosed improvement. When looking at the specification, in view of the claims the disclosure supports that the provided improvement is not to the technology but to the abstract idea. Applicant submits on page 12 of the remarks that Even if some high-level "evaluation" is implicated, the claim integrates any such exception into a practical application by requiring computer interactions (API scanning, message generator outputs, click-responsive signal handling) and model-based contextual selection using determines response rates-none of which is practically performable mentally and each of which is functionally tied to improved system operation in the security simulation domain. 
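The user-specific response-rate computation referenced in the Applicant's argument above can be sketched as follows. This is a minimal illustration only; the function name, telemetry values, and feature dictionary are hypothetical and are not drawn from the application or the cited references.

```python
def response_rate(interactions: int, delivered: int) -> float:
    """Fraction of delivered phishing communications the user interacted with."""
    return interactions / delivered if delivered else 0.0

# Hypothetical per-user telemetry: clicks on simulated phishing that
# included contextual information, and interactions with real phishing.
sim_rate = response_rate(interactions=3, delivered=10)   # 0.3
real_rate = response_rate(interactions=1, delivered=4)   # 0.25

# Both rates, plus the captured click response, would feed the model as
# input when selecting contextual information for the next communication.
model_input = {"sim_rate": sim_rate, "real_rate": real_rate, "clicked": True}
print(model_input)
```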
Examiner respectfully disagrees and notes that the present claims do not integrate the judicial exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The claims as presented merely link the use of the judicial exception to a computer system. The additional elements recited in the claims do not provide an improvement to computer technology and do not provide a meaningful link of the abstract idea to a practical application.

Applicant submits that the claim also satisfies Step 2B and that the ordered combination of API-based scanning for format/structure/status, ML model configuration/use to output contextual information, click-responsive capture, determination of user-specific response rates to both contextualized simulated and real phishing communications, and using those rates with the captured response to select contextual information for iterative generation/transmission is not well-understood, routine, or conventional. Further, Applicant submits: "The Office Action cites generic computing as 'routine,' but does not provide evidence establishing that this particular integration of telemetry and response-rate computation and ML-driven contextual selection for adaptive simulated phishing generation was conventional. Under the Memo's reminder that §101 rejections must be supported by a preponderance of evidence, Step 2B further supports eligibility."

Examiner respectfully disagrees and notes that these limitations do not impose meaningful limits on the judicial exception, as they are directed to receiving or transmitting data over a network, and the courts have recognized these computer functions as well-understood, routine, and conventional activity, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir.
2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network). See MPEP 2106.05(d)(II)(i).

Claim Rejections 35 U.S.C. § 103: Applicant submits that the combination of references fails to teach the amended limitations of the claims. Examiner respectfully disagrees and notes that the amended features are disclosed by Higbee and Covell as detailed in the instant office action. Applicant submits that Covell does not teach determining a response rate of the user to simulated phishing communications that include contextual information, nor does it compare this to the user's response rate to real phishing communications; further, Applicant submits that "…Covell does not teach using the response rate to simulated phishing communications that include contextual information and to real phishing communications as input to the machine learning model to determine a selected contextual information that is more effective for engaging the user…" Examiner respectfully disagrees; Covell discloses time and delays for responses (i.e., response rate). See at least Covell [0018]. Further, Covell discloses that the attack personalization model uses a neural network to train and predict effective attack communications and content. See at least [0031-0032].

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim(s) 1-22 is/are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-patentable subject matter.
The claims are directed to an abstract idea without significantly more. With respect to claims 1-22, the independent claims (claims 1, 12) are directed, in part, to a method and a system for generating a simulated phishing communication. Step 1 – First pursuant to step 1 in the January 2019 Guidance, claims 1-12 are directed to a method comprising a series of steps which falls under the statutory category of a process and claims 12-22 are directed to a system which falls under the category of a machine. However, these claim elements are considered to be abstract ideas because they are directed to a mental process which includes observations or evaluations. As per Step 2A - Prong 1 of the subject matter eligibility analysis, the claims are directed, in part, to generating a simulated phishing communication, the method comprising: scan and analyze, by one or more processors using an application programming interface (API), messages one of sent to or exchanged with other users in a message store of one or more users; identifying, by the one or more processors from content of messages in the message store, one or more message characteristics of one or more messages of a user of the one or more users, the one or more message characteristics comprising one of a message format, a message structure and a message status; establishing, by the one or more processors, a machine learning model configured to receive input comprising one or more message characteristics of the user and output contextual information to be provided to a message generator to tailor one or more simulated phishing communications of increased relevance to the user; providing, by the one or more processors, the one or more message characteristics of the user from messages that the user one of sent to or exchanged with other users as an input to the machine learning model; determining, by the one or more processors using an output of the machine learning model generated responsive to the input comprising 
the one or more message characteristics of a message format, a message structure and a message status of the user from messages that the user one of sent or exchanged with other users, contextual information from the one or more message characteristics of one or more messages of the user to generate a simulated phishing communication relevant to the user; generating, by the message generator of the one or more processors using the output of contextual information from the machine learning model, the simulated phishing communication based at least on the contextual information relevant to the user to increase the likelihood that the user will interact with an element of the simulated phishing communication; communicating, by the one or more processors, the simulated phishing communication to a device of the user, the user clicking on the element of the simulated phishing communication; receiving, by the one or more processors responsive to the user clicking on the element of the simulated phishing communication, a response the user interacted with the simulated phishing communication that included the contextual information; determining, by the one or more processors, a response rate of the user to simulated phishing communications that include the contextual information and to real phishing communications and; determining, by the machine learning model based at least on the input to the machine learning model of the response that the user interacted with the simulated phishing communication that included the contextual information and the response rate of user to simulated phishing communications that include the contextual information and to real phishing communications, a selected contextual information, as output from the machine learning model, that was more effective for engaging the user to interact with one or more elements of the simulated phishing communication; and generating, by the message generator of the one or more processors based at least on the 
selected contextual information determined from the machine learning model responsive to the user interacting with the simulated phishing communication, a subsequent simulated phishing communication to be communicated to the device of the user, the subsequent simulated phishing communication comprising the selected contextual information having increased relevance to the user than the contextual information of the simulated phishing communication and a higher likelihood that the user will interact with the subsequent simulated phishing communication than the simulated phishing communication; and communicating, by the one or more processors, the subsequent simulated phishing communication to the device of the user to determine whether or not the user will interact with the subsequent simulated phishing communication. If a claim limitation, under its broadest reasonable interpretation covers an observation or evaluation, then it falls under the “mental process” grouping of abstract ideas. Accordingly, the claim recites an abstract idea. As per Step 2A - Prong 2 of the subject matter eligibility analysis, this judicial exception is not integrated into a practical application. In particular, the claim recites additional elements – “one or more processors”; “a message store”; “a machine learning model”; “a system”; “memory”. These additional elements in both steps are recited at a high-level of generality (i.e., as a generic device performing a generic computer function of receiving and storing data) such that these elements amount no more than mere instructions to apply the exception using a generic computer component. Examiner looks to Applicant’s specification in at least figures 1A-1C and related text and [0051-0052] to understand that the invention may be implemented in a generic environment that “Central processing unit 121 is any logic circuity that responds to and processes instructions fetched from main memory unit 122. 
In many embodiments, central processing unit 121 is provided by a microprocessor unit, e.g.: those manufactured by Intel Corporation of Mountain View, California; those manufactured by Motorola Corporation of Schaumburg, Illinois; the ARM processor and TEGRA system on a chip (SoC) manufactured by Nvidia of Santa Clara, California; the POWER7 processor, those manufactured by International Business Machines of White Plains, New York; or those manufactured by Advanced Micro Devices of Sunnyvale, California. Computing device 100 may be based on any of these processors, or any other processor capable of operating as described herein. Central processing unit 121 may utilize instruction level parallelism, thread level parallelism, different levels of cache, and multi-core processors. A multi- core processor may include two or more processing units on a single computing component. Examples of multi-core processors include the AMD PHENOM IIX2, INTEL CORE i5 and INTEL CORE i7. Main memory unit 122 may include one or more memory chips capable of storing data and allowing any storage location to be directly accessed by microprocessor 121. Main memory unit 122 may be volatile and faster than storage 128 memory. Main memory units 122 may be Dynamic Random-Access Memory (DRAM) or any variants, including static Random-Access Memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), Single Data Rate Synchronous DRAM (SDR SDRAM), Double Data Rate SDRAM (DDR SDRAM), Direct Rambus DRAM (DRDRAM), or Extreme Data Rate DRAM (XDR DRAM). 
In some embodiments, main memory 122 or storage 128 may be non-volatile; e.g., non-volatile read access memory (NVRAM), flash memory non-volatile static RAM (nvSRAM), Ferroelectric RAM (FeRAM), Magnetoresistive RAM (MRAM), Phase-change memory (PRAM), conductive-bridging RAM (CBRAM), Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive RAM (RRAM), Racetrack, Nano-RAM (NRAM), or Millipede memory. Main memory 122 may be based on any of the above described memory chips, or any other available memory chips capable of operating as described herein."

Accordingly, these additional elements do not integrate the abstract idea into a practical application because they are mere instructions to implement the abstract idea on a computer. The use of machine learning, as recited in the claims, would not account for additional elements that integrate the judicial exception (e.g., the abstract idea) into a practical application because it is being used as mere instructions to implement the abstract idea on a computer (See PEG 2019 and MPEP 2106.05).

As per Step 2B of the subject matter eligibility analysis, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The additional elements are mere instructions to apply the abstract idea on a computer. When considered individually, these claim elements only contribute generic recitations of technical elements to the claims. It is readily apparent, for example, that the claim is not directed to any specific improvements of these elements and the invention is not directed to a technical improvement. When the claims are considered individually and as a whole, the additional elements noted above appear to merely apply the abstract concept to a technical environment in a very general sense – i.e., a generic computer receives information from another generic computer, processes the information and then sends information back.
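For orientation, the claimed sequence that the rejection characterizes as generic data processing can be sketched at a high level. Everything below is illustrative: the function names and sample data are hypothetical, and the claimed machine learning model is stubbed with a trivial frequency heuristic rather than any actual model.

```python
from collections import Counter

def extract_characteristics(messages):
    # Stand-ins for the claimed message characteristics (format / status).
    return [{"format": m["format"], "status": m["status"]} for m in messages]

def select_context(characteristics):
    # Trivial stub for the claimed ML model: treat the user's most common
    # message format as the "contextual information" used to tailor the lure.
    return Counter(c["format"] for c in characteristics).most_common(1)[0][0]

def generate_simulated_phish(context):
    return f"[simulated phishing communication styled as {context}]"

# One pass over a made-up message store for a single user.
messages = [
    {"format": "html", "status": "read"},
    {"format": "html", "status": "unread"},
    {"format": "plain", "status": "read"},
]
context = select_context(extract_characteristics(messages))
print(generate_simulated_phish(context))  # styled as "html"
```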
In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. Their collective functions merely provide generic computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a practical application of the abstract idea or that amount to significantly more than the abstract idea itself. The most significant elements of the claims, that is the elements that really outline the inventive elements of the claims, are set forth in the elements identified as an abstract idea. The fact that the generic computing devices are facilitating the abstract concept is not enough to confer statutory subject matter eligibility.

Lastly, when the "machine learning" is evaluated as an additional element, this feature is recited at a high level of generality and encompasses well-understood, routine, and conventional prior art activity. See, e.g., Balsiger et al., US 2012/0054642, noting in paragraph [0077] that "Machine learning is well known to those skilled in the art." See also Djordjevic et al., US 2013/0018651, noting in paragraph [0019] that "As known in the art, a generative model can be used in machine learning to model observed data directly." See also Bauer et al., US 2017/0147941, noting at paragraph [0002] that "Problems of understanding the behavior or decisions made by machine learning models have been recognized in the conventional art and various techniques have been developed to provide solutions." Accordingly, the use of machine learning to generate a learning model does not add significantly more to the claim.

Dependent claims 2-11 and 13-22 further refine the abstract idea. These claims do not provide a meaningful linking to the judicial exception.
Rather, these claims offer further descriptive limitations of elements found in the independent claims and addressed above – such as by describing the nature and content of the data that is received/sent. While these descriptive elements may provide further helpful context for the claimed invention, these elements do not serve to confer subject matter eligibility to the invention since their individual and combined significance is still not significantly more than the abstract concepts at the core of the claimed invention.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over US Pub. No. 2021/0281596 (hereinafter Covell) in view of US Pub. No. 2021/0021612 (hereinafter Higbee).

Regarding claims 1/12, Covell discloses: A method; a system for using a message store for generating a simulated phishing communication, the method comprising: the message store storing the messages that the one or more users one of sent or exchanged with other users; (Covell [0024], in which accessing user history to retrieve context, content, and communication style implies accessing stored messages associated with at least one user.) identifying, by the one or more processors from content of messages in the message store, one or more message characteristics of one or more messages of a user of the one or more users; (Covell [0024]: the context, content, and communication style retrieved from user message history represent message characteristics identified from the accessed messages.)
establishing, by the one or more processors, a machine learning model configured to receive input comprising one or more message characteristics of the user and output contextual information to be provided to a message generator to tailor one or more simulated phishing communications of increased relevance to the user; (Covell [0032] In some embodiments, the attack personalization model uses a neural network to train and predict effective attack communications and content. The model component 150 may use supervised learning method (e.g., a supervised learning loop using machine learning techniques). In some instances, the supervised learning methods include neural networking, convolutional neural networking, or other suitable machine learning methodology to train the attack personalization model.) providing, by the one or more processors, the one or more message characteristics of the user from messages that the user one of sent to or exchanged with other users as an input to the machine learning model; (Covell [0031] discloses passing message characteristics as input for the model.) determining, by the one or more processors using an output of the machine learning model generated (Covell [0031-0032] combination of attack personalization model and model component 150) configured to output contextual information from the one or more message characteristics of one or more messages of the user to generate a simulated phishing communication relevant to the user; (Covell [0025] For example, the message component 120 may analyze aspects of communications of the user and a specified correspondent to identify a colloquialisms or informalities used by the correspondent (i.e. exchanged with other user); adds elements, colloquialisms, content, and other information to the simulated attack communication; [0034-0037] for generating personalized simulated attack communication.) 
generating, by the message generator of the one or more processors using the output of contextual information from the machine learning model, (Covell [0032] the attack personalization model uses a neural network to train and predict effective attack communications and content. The model component 150 may use supervised learning method (e.g., a supervised learning loop using machine learning techniques). In some instances, the supervised learning methods include neural networking, convolutional neural networking, or other suitable machine learning methodology to train the attack personalization model.) the simulated phishing communication based at least on the contextual information (Covell at least step 220 of Fig. 2; [0024]- [0025]); relevant to the user to increase the likelihood that the user will interact with an element of the simulated phishing communication; (Covell [0021] In some embodiments, communication trends include indications of trusted types for the user. Trusted types may be indicated by subjects, attachment types, correspondent characteristics, time characteristics, or other similar characteristics of communications most likely to be interacted with by the user, and with which the user takes less time to interact.) communicating, by the one or more processors, the simulated phishing communication to a device of the user, (Covell step 230 of Fig. 2; [0026] in which communicating via email or SMS implies communication to the user’s mobile device; see also [0051] which includes transmission of communications to at least PDA/mobile telephone 54A or computer(s) 54B/C).) a response indicating the user interacted with the simulated phishing communication that included the contextual information; (Covell at least [0027-0029], in which a user response to the simulated attack communication is monitored; step 240 of Fig. 
2) determining, by the one or more processors, a response rate of the user to simulated phishing communications that include the contextual information and to real phishing communications; (Covell [0018] discloses time and delays for responses (i.e., response rate).) determining, by the machine learning model based at least on the input to the machine learning model of the response that the user interacted with the simulated phishing communication that included the contextual information, and the response rate of the user to simulated phishing communications that include the contextual information and to real phishing communications, a selected contextual information, as output from the machine learning model, that was more effective for engaging the user to interact with one or more elements of the simulated phishing communication; (Covell at least [0031-0032], in which the selected contextual information corresponds to at least the "content and characteristics" of a successful simulated attack; the attack personalization model uses a neural network to train and predict effective attack communications and content.) and generating, by the message generator of the one or more processors based at least on the selected contextual information determined from the machine learning model responsive to the user interacting with the simulated phishing communication, a subsequent simulated phishing communication to be communicated to the device of the user, (Covell at least step 250 of Fig. 2; [0031-0032]: the attack personalization model uses a neural network to train and predict effective attack communications and content. The model component 150 may use supervised learning method (e.g., a supervised learning loop using machine learning techniques).
In some instances, the supervised learning methods include neural networking, convolutional neural networking, or other suitable machine learning methodology to train the attack personalization model; [0039] in which “[a] second simulated attack communication may be generated as a follow-up attack based on users’ responses or failures in user responses to the initial simulated attack communication”.) the subsequent simulated phishing communication comprising the selected contextual information having increased relevance to the user than the contextual information of the simulated phishing communication and a higher likelihood that the user will interact with the subsequent simulated phishing communication than the simulated phishing communication and communicating, by the one or more processors, the subsequent simulated phishing communication to the device of the user to determine whether or not the user will interact with the subsequent simulated phishing communication. (Covell [0021] In some embodiments, communication trends include indications of trusted types for the user. Trusted types may be indicated by subjects, attachment types, correspondent characteristics, time characteristics, or other similar characteristics of communications most likely to be interacted with by the user, and with which the user takes less time to interact.) Although Covell discloses systems and methods for simulating phishing communications, Covell does not specifically disclose type of message characteristic, an application programming interface or the user clicking on the message. However, Higbee discloses the following limitations: scan and analyze, by one or more processors, using an application programming interface (API), messages one of sent to or exchanged with other users, in a message store of one or more users, (Higbee [0062] discloses The portal can offer an interface for executing queries over the messages stored in the shadow message store. 
In some embodiments, the commands can be issued through an API to the shadow message store; [0078] discloses: In some embodiments interfacing with Microsoft Exchange, the Exchange server may expose an application programming interface (API) which allows an external service to issue commands to insert messages.) contextual information responsive to input comprising one or more message characteristics of a message format, a message structure and a message status of the user from messages that the user one of sent or exchanged with other users, (See Higbee [0054] message format; [0169] message infrastructure; [0098] message status.) the one or more message characteristics comprising one of a message format, a message structure and a message status; (See Higbee [0054] message format; [0169] message infrastructure; [0098] message status.) the user clicking on the element of the simulated phishing communication; (Higbee [0067]; [0147] disclose a user clicking on a simulated phishing email.) receiving, by the one or more processors responsive to the user clicking on the element of the simulated phishing communication, (Higbee [0287] discloses reporting when a user clicked on the message.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system for generating personalized security testing simulations of Covell with the message platform of Higbee in order to remediate suspicious threats and messages (Higbee abstract), because the references are analogous since they both fall within Applicant's field of endeavor and are reasonably pertinent to the problem with which Applicant is concerned.

Regarding claims 2/13, Covell discloses: The method of claim 1; the system of claim 12, further comprising accessing, by the one or more processors, a subset of messages in the message store of the one or more users that are within a predetermined time period.
(Covell [0020] the communication engagement statistics comprise metadata indicating preferences, communication times, communication durations; [0024] The message component 120 may also access user history (i.e. message store) and user profile information to identify and retrieve the context, content, and communication style; [0039] discloses the communications component selects and transmits communications.)

Regarding claims 3/14, Covell discloses: The method of claim 1; the system of claim 12, wherein the one or more messages are forwarded to the message store from one of a second message store or a messaging application of the one or more users. (Covell [0039] discloses In some embodiments, the communications component 130 cooperates with one or more of the message component 120, the response component 140, and the model component 150 to select and transmit a second simulated attack communication at a second transmission time. The second simulated attack communication may be generated as a follow-up attack based on users' responses or failures in user responses to the initial simulated attack communication.)

Regarding claims 4/15, Although Covell discloses systems and methods for simulating phishing communications, Covell does not specifically disclose scanning the message. However, Higbee discloses the following limitations: The method of claim 1; the system of claim 12, further comprises scanning, by the one or more processors, content of the one or more messages. (Covell [0151-0152] disclose scanning messages.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system for generating personalized security testing simulations of Covell with the message platform of Higbee in order to remediate suspicious threats and messages (Higbee abstract) because the references are analogous since they both fall within Applicant's field of endeavor and are reasonably pertinent to the problem with which Applicant is concerned.

Regarding claims 5/16, Covell discloses: The method of claim 1; the system of claim 12, wherein the one or more message characteristics comprises one or more of the following: one or more keywords, one or more links and one or more phone numbers. (Covell [0019] discloses the identification component 110 may use topic modeling techniques to analyze the content and context of communications to determine … common keywords or phrases, and prioritized topics associated with communications of the user.)

Regarding claims 6/17, Covell discloses: The method of claim 1; the system of claim 12, wherein the one or more message characteristics comprises one or more of the following: a date and a time of transmission or receipt, a frequency of the one or more messages, message participants and a message status. (Covell [0020] discloses the communication engagement statistics comprise metadata indicating preferences… communication times.)

Regarding claim 7, Covell discloses: The method of claim 1, wherein the one or more message characteristics comprises one or more of the following: an image or a logo, one or more attachments and one or more software tools used by the one or more users. (Covell [0021] discloses identifying context trends including attachment types.)
Regarding claim 8, Covell discloses: The method of claim 1, further comprises determining the selected contextual information that was more effective for engaging the user to interact with one or more elements of the simulated phishing communication based at least on a response rate of the user to both real phishing messages and simulated phishing communications that include this contextual information. (Covell [0032] In some embodiments, the attack personalization model uses a neural network to train and predict effective attack communications and content. The model component 150 may use supervised learning method (e.g., a supervised learning loop using machine learning techniques). In some instances, the supervised learning methods include neural networking, convolutional neural networking, or other suitable machine learning methodology to train the attack personalization model.)

Regarding claims 9/20, Covell discloses: The method of claim 1; the system of claim 12, further comprises generating, by the one or more processors, the simulated phishing communication based at least on the contextual information identifying one or more dates or times relevant to the user. (Covell [0020] discloses the communication engagement statistics comprise metadata indicating preferences… communication times.)

Regarding claims 10/21, Covell discloses: The method of claim 1; the system of claim 12, further comprises generating, by the one or more processors, the simulated phishing communication based at least on the contextual information identifying one or more message types relevant to the user. (Covell [0025] discloses In some embodiments, the message component 120 uses the context, content, and communication style as input into the attack personalization model.)

Regarding claims 11/22, Covell discloses: The method of claim 1; the system of claim 12, further comprising receiving, by the one or more processors, the response from an email client plug-in of the device of the user.
(Covell [0026] discloses The communications component 130 may transmit the simulated attack communication via email.)

Regarding claim 18, Covell discloses: The system of claim 12, wherein the one or more message characteristics comprises one or more of the following: a message structure, a message format, an image or a logo, one or more attachments and one or more software tools used by the one or more users. (Covell [0021] discloses identifying context trends including attachment types.)

Regarding claim 19, Although Covell discloses systems and methods for simulating phishing communications, Covell does not specifically disclose a specific messaging application. However, Higbee discloses the following limitations: The system of claim 12, wherein the one or more message characteristics comprises a message format generated by a specific messaging application. (Higbee [0062] discloses The portal can offer an interface for executing queries over the messages stored in the shadow message store. In some embodiments, the commands can be issued through an API to the shadow message store; [0078] discloses In some embodiments interfacing with Microsoft Exchange, the Exchange server may expose an application programming interface (API) which allows an external service to issue commands to insert messages.)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system for generating personalized security testing simulations of Covell with the message platform of Higbee in order to remediate suspicious threats and messages (Higbee abstract) because the references are analogous since they both fall within Applicant's field of endeavor and are reasonably pertinent to the problem with which Applicant is concerned.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to FRANCIS Z SANTIAGO-MERCED whose telephone number is (571)270-5562. The examiner can normally be reached M-F 7am-4:30pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN EPSTEIN, can be reached at 571-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/FRANCIS Z. SANTIAGO MERCED/
Examiner, Art Unit 3625

Prosecution Timeline

May 19, 2021 — Application Filed
Jul 23, 2021 — Non-Final Rejection (§101, §103)
Oct 27, 2021 — Response Filed
Nov 03, 2021 — Final Rejection (§101, §103)
Jan 14, 2022 — Response after Non-Final Action
Feb 04, 2022 — Response after Non-Final Action
Mar 03, 2022 — Applicant Interview (Telephonic)
Mar 04, 2022 — Examiner Interview Summary
Mar 08, 2022 — Request for Continued Examination
Mar 10, 2022 — Response after Non-Final Action
Jun 03, 2022 — Non-Final Rejection (§101, §103)
Sep 16, 2022 — Response Filed
Oct 07, 2022 — Final Rejection (§101, §103)
Dec 19, 2022 — Response after Non-Final Action
Jan 05, 2023 — Response after Non-Final Action
Jan 18, 2023 — Request for Continued Examination
Jan 20, 2023 — Response after Non-Final Action
May 19, 2023 — Non-Final Rejection (§101, §103)
Aug 17, 2023 — Applicant Interview (Telephonic)
Aug 18, 2023 — Examiner Interview Summary
Aug 30, 2023 — Response Filed
Nov 30, 2023 — Final Rejection (§101, §103)
Feb 15, 2024 — Interview Requested
Feb 22, 2024 — Examiner Interview Summary
Feb 22, 2024 — Applicant Interview (Telephonic)
Mar 05, 2024 — Response after Non-Final Action
Mar 23, 2024 — Response after Non-Final Action
Apr 10, 2024 — Request for Continued Examination
Apr 11, 2024 — Response after Non-Final Action
Jul 18, 2024 — Non-Final Rejection (§101, §103)
Oct 03, 2024 — Examiner Interview Summary
Oct 03, 2024 — Applicant Interview (Telephonic)
Oct 25, 2024 — Response Filed
Jan 29, 2025 — Final Rejection (§101, §103)
Apr 11, 2025 — Response after Non-Final Action
Jun 02, 2025 — Request for Continued Examination
Jun 05, 2025 — Response after Non-Final Action
Sep 30, 2025 — Non-Final Rejection (§101, §103)
Jan 02, 2026 — Response Filed
Feb 19, 2026 — Final Rejection (§101, §103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547958 — SWAPPING TASK ASSIGNMENTS TO DETERMINE TASK SELECTION — Granted Feb 10, 2026 (2y 5m to grant)
Patent 12524719 — SYSTEMS AND METHODS FOR PREDICTING AND MANAGING TOOL ASSETS — Granted Jan 13, 2026 (2y 5m to grant)
Patent 12493845 — SYSTEMS AND METHODS FOR MULTI-CHANNEL CUSTOMER COMMUNICATIONS CONTENT RECOMMENDER — Granted Dec 09, 2025 (2y 5m to grant)
Patent 12348826 — HOTSPOT LIST DISPLAY METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM — Granted Jul 01, 2025 (2y 5m to grant)
Patent 12271852 — SYSTEMS AND METHODS FOR MULTI-CHANNEL CUSTOMER COMMUNICATIONS CONTENT RECOMMENDER — Granted Apr 08, 2025 (2y 5m to grant)
Study what changed in these applications to get past this examiner. Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 11-12
Grant Probability: 29%
With Interview (+41.1%): 70%
Median Time to Grant: 3y 7m
PTA Risk: High
Based on 126 resolved cases by this examiner. Grant probability derived from career allow rate.
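The figures above can be reproduced from the career statistics reported earlier (37 granted of 126 resolved cases, with a +41.1-point interview lift). A minimal sketch, assuming the grant probability is the simple career allow rate and the interview lift is additive in percentage points; the variable names are illustrative, not part of any analytics API:

```python
# Reproduce the dashboard's headline projections from the examiner's
# career data (37 granted / 126 resolved, +41.1 pp interview lift).
granted = 37
resolved = 126
interview_lift = 0.411  # assumed additive, in fractional percentage points

allow_rate = granted / resolved                   # career allow rate ~0.2937
base_probability = round(allow_rate * 100)        # grant probability: 29 (%)
with_interview = round((allow_rate + interview_lift) * 100)  # 70 (%)

print(f"Grant probability: {base_probability}%")
print(f"With interview:    {with_interview}%")
```

Running this yields 29% and 70%, matching the projection panel, which suggests the tool derives both numbers directly from the career allow rate rather than from a case-specific model.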
