Prosecution Insights
Last updated: April 19, 2026
Application No. 17/536,281

SYSTEM AND METHOD FOR IDENTIFYING A PHISHING EMAIL

Status: Final Rejection (§103)
Filed: Nov 29, 2021
Examiner: RASHID, HARUNUR
Art Unit: 2497
Tech Center: 2400 — Computer Networks
Assignee: AO Kaspersky Lab
OA Round: 6 (Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 7-8
Median Time to Grant: 3y 4m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 473 granted / 620 resolved; +18.3% vs TC average)
Interview Lift: +36.9% (strong; grant rate with vs. without an interview, among resolved cases)
Typical Timeline: 3y 4m average prosecution; 25 applications currently pending
Career History: 645 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 59.2% (+19.2% vs TC avg)
§102: 5.0% (-35.0% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 620 resolved cases.

Office Action (§103)
DETAILED ACTION

1. Claims 1-4, 6-13, 15-22, and 24-27 are pending in this examination.

Notice of Pre-AIA or AIA Status

2.1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

2.2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Response to Arguments

3.1. Applicant's arguments (Non-Compliant or Non-Responsive Amendment) filed 12/10/2025 have been fully considered but they are not persuasive.

3.2. Applicant argues, in substance, that the claims of Applicant's 8/11/25 Amendment do not introduce a separate invention but rather further define the invention originally claimed. Per MPEP 802.01, two inventions are considered distinct only if they are patentably unrelated, meaning there is no substantial overlap in their structure, function, or inventive concept. Applicant contends that the claims of the 8/11/25 Amendment maintain the same core inventive concept as the invention originally claimed, and that they: do not introduce a new statutory category of invention (e.g., changing from an apparatus to a method claim); do not introduce an entirely new embodiment or field of use that was not already disclosed; and do not claim subject matter that is patentably distinct under the two-way test required for distinctness in restriction practice (MPEP 806), which requires that (1) the amended claims be patentably distinct from the original claims, and (2) vice versa. Applicant further contends that MPEP 802.01 states that inventions are not distinct if they are linked by a common inventive feature, meaning that: (1) the same technical problem is being addressed; (2) the solution to the problem is based on the same underlying principle; and (3) the amendments merely narrow, specify, or further elaborate on the originally claimed subject matter rather than defining a new invention. A claimed invention dated 3/14/2025 would not infringe a second claimed invention dated 8/11/2025; similarly, the second claimed invention dated 8/11/2025 would not infringe the claimed invention dated 3/14/2025.

3.3. The Examiner respectfully disagrees with Applicant's arguments. Restriction for examination purposes as indicated is proper because all of the inventions listed in this action are independent or distinct for the reasons given above, and there would be a serious search and examination burden if restriction were not required because one or more of the following reasons apply: (a) the inventions have acquired a separate status in the art in view of their different classification; (b) the inventions have acquired a separate status in the art due to their recognized divergent subject matter; (c) the inventions require a different field of search (for example, searching different classes/subclasses or electronic resources, or employing different search queries); (d) the prior art applicable to one invention would not likely be applicable to another invention; (e) the inventions are likely to raise different non-prior art issues under 35 U.S.C. 101 and/or 35 U.S.C. 112, first paragraph.

3.4. Furthermore, "Applicant ignores the real world conditions under which examiners work." Rohm & Haas Co. v. Crystal Chemical Co., 722 F.2d 1556, 1573 [220 USPQ 289] (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984) (emphasis in original).

3.5. The Non-Compliant or Non-Responsive Amendment is hereby withdrawn.

Response to Arguments

4. Applicant's arguments have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

5.1. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

5.2. Claims 1-4, 8-13, 17-22, and 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application No. 20200389486 to Jeyakumar et al. ("Jeyakumar"), in view of US Patent Application No. 20200374313 to Manoselvam et al. ("Manoselvam"), and further in view of US Patent Application No. 20220038903 to Fu et al. ("Fu").

As per claim 1, Jeyakumar discloses a method for identifying a phishing email message, the method comprising a first model pre-trained on first attributes of email messages ([0069], the threat detection platform 214 may apply a first model 204 to the email to produce a first output indicative of whether the email is representative of a malicious email; [0211]-[0217], the threat detection platform can build a personalized ML model for the employee based on the first portion of the first data (step 2403). For example, the threat detection platform may parse each email included in the first data to discover one or more attributes, and then the threat detection platform can provide these attributes to the ML model as input for training. Examples of attributes include the sender name, sender email address, subject, etc. Because the personalized ML model is trained using past emails received by the employee, normal communication habits can be established immediately upon deployment; also see [0088], figs. 6, 24-25 and associated texts. See also [0070]-[0071]: each model in the ensemble may be associated with a different type of security threat. For example, the ensemble may include separate models for determining whether the email includes a query for data/funds, a link to a Hypertext Markup Language (HTML) resource, an attachment, etc. As further discussed below, the second model 208 may be designed to establish different facets of the security threat responsive to a determination that the email is likely malicious. For instance, the second model 208 may discover facets of the security threat such as the strategy, goal, impersonated party, vector, and attacked party, and then upload this information to a profile associated with the intended recipient and/or the enterprise. Then, the threat detection platform 214 may apply a third model 210 designed to convert the output produced by the second model 208 into a comprehensible visualization component 212 ([0071]); see also [0212]-[0217], [0088], figs. 6, 24-25 and associated texts).

Jeyakumar further discloses taking an action to provide information security against the identified phishing message ([0104]-[0105], the remediation engine 314 optionally operates to perform one or more remediation processes. The remediation engine 314 is preferably implemented in response to communication classification as an attack (e.g., by one or more analysis modules 312, by the master detector, etc.), but can alternatively or additionally be implemented at any other suitable time. In some embodiments, the remediation steps are based on or correlate to a customer remediation policy).
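For readers tracking the technology rather than the procedure, here is a minimal Python sketch of the first-stage scoring the cited Jeyakumar paragraphs describe: parsing an email's sender name, sender address, and subject and handing those attributes to a pre-trained model. The sample message, helper names, and toy scoring rules are all invented for illustration; they are not the reference's implementation.

```python
# Illustrative sketch only: extract the "first attributes" a pre-trained
# model would consume (sender name, sender address, subject).
from email import message_from_string
from email.utils import parseaddr

RAW_EMAIL = """\
From: "IT Support" <helpdesk@examp1e-corp.test>
To: employee@example.com
Subject: Urgent: verify your password
Message-ID: <20260403.1@examp1e-corp.test>

Click the link to keep your account active.
"""

def extract_first_attributes(raw: str) -> dict:
    """Parse the attributes mentioned in the quoted paragraphs."""
    msg = message_from_string(raw)
    sender_name, sender_addr = parseaddr(msg.get("From", ""))
    return {
        "sender_name": sender_name,
        "sender_address": sender_addr,
        "subject": msg.get("Subject", ""),
    }

def first_model_score(attrs: dict) -> float:
    """Stand-in for a model pre-trained on past emails; a real system
    would apply learned weights, not these toy rules."""
    score = 0.0
    if "password" in attrs["subject"].lower():
        score += 0.5
    known_domains = {"example.com"}  # "normal communication habits"
    if attrs["sender_address"].rsplit("@", 1)[-1] not in known_domains:
        score += 0.4
    return min(score, 1.0)

attrs = extract_first_attributes(RAW_EMAIL)
print(attrs, "suspicion:", first_model_score(attrs))
```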
Jeyakumar does not explicitly disclose the following; however, in the same field of endeavor, Manoselvam discloses identifying an email message as a suspicious email message by applying a first machine learning model, where each of the first attributes is encoded by a fixed-length vector of numbers ([0042]-[0043]: in some implementations, the set of trusted URLs (e.g., 1,400,000 trusted URLs) is provided as training data to train the ML model. FIG. 4 depicts an example conceptual architecture 400 in accordance with implementations of the present disclosure. In the example of FIG. 4, an encoder 402, a ML model 404, and an error evaluation module 406 are provided. In some examples, the encoder 402 processes a set of redirected URLs 410 to provide a set of encoded URLs 412 that is processed to provide an output 414. The output 414 is provided to the error evaluation module 406, which provides an error value. For example, the output 414 is compared to the input to the ML model 404 (e.g., the encoded URLs 412) and the error value is determined based on a difference therebetween. In some examples, the error value can be determined as a mean-square-error (MSE) value. In some examples, a higher error value indicates a larger difference between the output 414 and the input to the ML model 404, and a lower error value a smaller difference between the output 414 and the input to the ML model 404. In some implementations, the encoder 402 independently encodes each redirected URL in the set of redirected URLs 410 to provide encoded redirected URLs. In some examples, the encoder 402 sequences the encoded redirected URLs together to provide the set of encoded URLs. Example encoding includes, without limitation, one-hot encoding, which converts raw (categorical) data into a matrix for efficient computation. [0044]: in some implementations, the ML model 404 is provided as an autoencoder having multiple layers. In some examples, the autoencoder can be described as a neural network that is trained using unsupervised learning by applying backpropagation, where output values are to be equal to input values. In short, during training, the autoencoder learns a function that enables the input to be recreated as the output. In the example of FIG. 4, the ML model 404 includes an embedding layer, an encoding layer, an encoded URL layer, and a decoding layer. In some examples, the embedding layer embeds the encoded URLs 412 in a multi-dimensional vector space. In some examples, the encoding layer is provided as a bidirectional long short-term memory (LSTM) encoder, and the decoding layer is provided as a bidirectional LSTM decoder. In general, the encoder-decoder layers can be collectively described as a recurrent neural network (RNN) that provides sequence-to-sequence prediction (e.g., forecasting next values in a sequence of values). In general, the encoding layer reads an input sequence from the embedding layer, and encodes the input sequence into a fixed-length vector. The decoding layer decodes the fixed-length vector and outputs a predicted sequence (e.g., as the output 414); also see [0034]).

Manoselvam further discloses that the encoded input is transformed using a neural network configured to calculate a degree of similarity of the first attributes with attributes of suspicious messages ([0043], as quoted above; also see [0016]-[0017]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Jeyakumar with the teaching of Manoselvam by including the feature of identifying suspicious messages, in order for Jeyakumar's system to provide an indicator assigned to the URL based on the error value, the indicator indicating whether the URL is determined to be potentially malicious. Implementations include receiving, by a redirection resolver, a URL identifying a location of a network resource; processing, by the redirection resolver, the URL to provide a set of results including a set of redirection URLs, the set of redirection URLs including one or more redirections between the URL and an end URL; processing the set of redirection URLs to provide input to a machine learning (ML) model that generates an output based on the set of redirection URLs; determining an error value associated with the URL; and providing an indicator assigned to the URL based on the error value, the indicator indicating whether the URL is determined to be potentially malicious (Manoselvam, abstract).
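A runnable sketch of the Manoselvam-style pipeline quoted above: one-hot encode URLs into fixed-length vectors, fit a model on trusted URLs, and score new URLs by reconstruction MSE. As a simplification, a linear autoencoder (computed here via PCA, to which a linear autoencoder is equivalent) stands in for the reference's embedding plus bidirectional-LSTM encoder/decoder; the character set, lengths, and URLs are assumptions.

```python
import numpy as np

CHARS = "abcdefghijklmnopqrstuvwxyz0123456789:/.-_"
MAX_LEN = 40

def one_hot(url: str) -> np.ndarray:
    """One-hot encode a URL into a fixed-length vector (MAX_LEN x charset)."""
    mat = np.zeros((MAX_LEN, len(CHARS)))
    for i, ch in enumerate(url.lower()[:MAX_LEN]):
        if ch in CHARS:
            mat[i, CHARS.index(ch)] = 1.0
    return mat.ravel()

# "trusted URLs" used as training data (toy stand-ins)
trusted = [
    "https://example.com/login",
    "https://example.com/mail/inbox",
    "https://example.com/docs/help",
]
X = np.stack([one_hot(u) for u in trusted])
mean = X.mean(axis=0)

# top principal components of the trusted set act as the trained
# encoder/decoder pair of a linear autoencoder
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:2]  # the "fixed-length vector" has 2 dimensions here

def reconstruction_mse(url: str) -> float:
    """Encode, reconstruct, and return the MSE error value; a higher
    value means the URL deviates more from the trusted set."""
    v = one_hot(url) - mean
    reconstructed = (v @ components.T) @ components
    return float(np.mean((reconstructed - v) ** 2))

for u in ["https://example.com/login", "http://examp1e.evil/redir?x=1"]:
    print(u, "->", round(reconstruction_mse(u), 6))
```

Because three centered training vectors span at most two dimensions, the trusted URLs reconstruct almost exactly, so a materially different URL shows a visibly larger error value, which is the thresholding signal the reference describes.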
Jeyakumar and Manoselvam do not explicitly disclose the following; however, in the same field of endeavor, Fu discloses identifying the suspicious email message as a phishing message by applying a second machine learning model ([0041]: the abnormal message detection module can include a machine-learning model that can be trained, based on a training set of prior normal messages, to detect whether the input vectors only include normal messages and do not include abnormal messages (e.g., malicious messages, or other messages that deviate from the feature patterns derived from the training set of prior normal messages). Specifically, the machine learning model can include an autoencoder which can include a pair of encoder and decoder. The encoder can include a first neural network having a first set of weights. As part of an encoding operation, the encoder can combine the first set of weights with the input vectors to generate intermediate vectors having a reduced dimension (e.g., a reduced number of elements) compared with the input vectors. Moreover, the decoder can include a second neural network having a second set of weights. As part of a reconstruction operation, the decoder can combine the second set of weights with the intermediate vectors to generate output vectors as a reconstructed version of the input vectors. The machine learning model can also output a reconstruction loss between the input vectors and the output vectors to the message handling module).

Furthermore, Fu also discloses taking an action to provide information security against the identified phishing message ([0056], NSS 200 can take various actions, such as trapping/discarding the plurality of messages, sending a notification indicating that abnormal/potentially malicious messages are received, etc.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Jeyakumar with the teachings of Manoselvam and Fu by including the feature of a second machine learning model, in order for Jeyakumar's system to control malicious network packets; the malicious network packets can compromise the operations of the ECUs, which can put the passengers of the vehicle in danger or otherwise adversely affect the operation of the vehicle. In one example, a method comprises: receiving, at a gateway of a Controller Area Network (CAN) bus on a vehicle and via at least one of a wireless interface or a wired interface of the vehicle, a plurality of messages targeted at electronic control units (ECU) on the CAN bus; generating one or more input vectors based on the plurality of messages; generating, using one or more machine learning models, an output vector based on each of the one or more input vectors, each input vector having the same number of elements as the corresponding output vector; generating one or more comparison results between each of the one or more input vectors and the corresponding output vector; and based on the one or more comparison results, performing one of: allowing the plurality of messages to enter the CAN bus or preventing the plurality of messages from entering the CAN bus (Fu, abstract).

As per claim 2, the combination of Jeyakumar, Manoselvam, and Fu discloses the method of claim 1, further comprising: placing the suspicious email message into the temporary quarantine by using an email filter (Jeyakumar, [0066], [0217]).

As per claim 3, the combination of Jeyakumar, Manoselvam, and Fu discloses the method of claim 1, wherein the first attributes comprise at least attributes related to: a value of a Message_ID header of the email message; a value of an X-mail email header of the email message; and a sequence of values of headers of the email message (Jeyakumar, [0088], [0136]). The motivation regarding the obviousness of claim 1 is also applied to claim 3.
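A small stdlib-only sketch of extracting claim-3 style first attributes: the Message-ID value, an X-Mailer value (used here as a stand-in for the claimed "X-mail" header, an assumption), and the ordered sequence of header names. The sample message and feature names are invented.

```python
from email import message_from_string

RAW = """\
Received: from mail.example.com by mx.example.com
From: sender@example.com
To: recipient@example.com
Message-ID: <20260403.42@mail.example.com>
X-Mailer: BulkMailer 2.1
Subject: Invoice attached

Hello.
"""

msg = message_from_string(RAW)
first_attributes = {
    "message_id": msg.get("Message-ID", ""),
    "x_mailer": msg.get("X-Mailer", ""),
    # the ordered sequence of header names is itself a usable attribute:
    # legitimate mail clients tend to emit headers in a stable order
    "header_sequence": [name for name, _ in msg.items()],
}
print(first_attributes)
```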
As per claim 4, the combination of Jeyakumar, Manoselvam, and Fu discloses the method of claim 1, wherein the second machine learning model is pre-trained on second attributes of email messages, the second attributes comprising attributes related to at least one of: a reputation of a plurality of links, which characterizes a probability that an email message contains a phishing link; a category of the email message; a flag indicating a presence of a domain of a sender in a previously created list of blocked senders; a flag indicating a presence of a domain of a sender in a previously created list of known senders; a degree of similarity of a domain of a sender with domains in a previously created list of known senders; a flag indicating a presence of Hyper-Text Markup Language (HTML) code in a body of the email message; and a flag indicating a presence of a script inserted in a body of the email, wherein the reputation of the plurality of links is calculated using a recurrent neural network (Fu, [0067], [0041]-[0044], [0078]). The motivation regarding the obviousness of claim 1 is also applied to claim 4.

As per claim 8, the combination of Jeyakumar, Manoselvam, and Fu discloses the method of claim 1, wherein the second machine learning model is based on at least one of the following learning algorithms: an algorithm based on a Bayesian classifier; a logistic regression algorithm; a modified random forest training algorithm; a support vector machine; an algorithm using nearest neighbor; and a decision tree based algorithm (Fu, [0041]-[0044]). The motivation regarding the obviousness of claim 1 is also applied to claim 8.

As per claim 9, the combination of Jeyakumar, Manoselvam, and Fu discloses the method of claim 1, wherein the taking of the action to provide information security against the identified phishing message comprises at least one of: blocking the phishing message; informing a recipient that the email message is a phishing message; and placing an identifier of the phishing email in a database storing a list of malicious emails (Jeyakumar, [0104]-[0105]). The motivation regarding the obviousness of claim 1 is also applied to claim 9.

Claims 10-13, 17-22, and 26-27 are rejected for similar reasons as stated above.
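A sketch of the kind of second-stage model claims 4 and 8 recite: numeric and boolean second attributes (link reputation, sender-domain flags, HTML and script flags) fed to one of the listed algorithm families, here a random forest. The training rows, labels, and feature ordering are invented, and scikit-learn is assumed to be available.

```python
from sklearn.ensemble import RandomForestClassifier

# columns: [link_reputation, domain_blocked, domain_known,
#           domain_similarity, has_html, has_script]
X_train = [
    [0.9, 1, 0, 0.8, 1, 1],  # phishing-like
    [0.7, 0, 0, 0.9, 1, 0],  # phishing-like (lookalike sender domain)
    [0.1, 0, 1, 0.0, 1, 0],  # legitimate newsletter
    [0.0, 0, 1, 0.0, 0, 0],  # legitimate plain-text mail
]
y_train = [1, 1, 0, 0]       # 1 = phishing

second_model = RandomForestClassifier(n_estimators=50, random_state=0)
second_model.fit(X_train, y_train)

# score a message the first-stage model flagged as suspicious
suspicious = [0.8, 0, 0, 0.7, 1, 1]
print("P(phishing) =", second_model.predict_proba([suspicious])[0][1])
```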
5.3. Claims 6-7, 15-16, and 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Jeyakumar, Manoselvam, and Fu as applied to the claims above, and further in view of US Patent Application No. 20200067861 to Leddy et al. ("Leddy").

As per claim 6, the combination of Jeyakumar, Manoselvam, and Fu discloses the invention as described above. Jeyakumar, Manoselvam, and Fu do not explicitly disclose the following; however, in the same field of endeavor, Leddy discloses the method of claim 1, wherein a category of the email message indicating whether or not the email message is a phishing message is based on N-grams of text of the email message, the N-grams being identified by selecting one or more important features that strongly influence a binary classification of the phishing email message ([1656], [1662]; also see [0300]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Jeyakumar with the teachings of Leddy, Manoselvam, and Fu by including this feature, in order for Jeyakumar's system to be configured to pre-validate electronic communications before they are seen by users. In some embodiments, the system described herein is an automated adaptive system that can protect users against evolving scams and prevent immediate financial loss, credit or debit account fraud, and/or identity theft. Perpetrators of scams (scammers) use a variety of evolving scenarios including fake charities, fake identities, fake accounts, promises of romantic interest, and fake emergencies. These scams can result in direct immediate financial loss, credit or debit account fraud, and/or identity theft. It is often very difficult for potential victims to identify scams because the messages are intended to invoke an emotional response such as "Granddad, I got arrested in Mexico", "Can you help the orphans in Haiti?" or "Please find attached our invoice for this month." In addition, these requests often appear similar to real requests, so it can be difficult for an untrained person to distinguish scam messages from legitimate sources (Leddy).

As per claim 7, the combination of Jeyakumar, Manoselvam, Fu, and Leddy discloses the method of claim 1, wherein a category of the email message indicating whether or not the email message is a phishing message is based on a logistic regression algorithm with regularization (Jeyakumar, [0069]-[0071], [0074]). Jeyakumar, Manoselvam, and Fu do not explicitly disclose the following; however, in the same field of endeavor, Leddy discloses wherein the regularization allows weight coefficients to be determined for N-grams, the weight coefficient of a given N-gram characterizing a degree of influence of the N-gram on a classification of the email message as a phishing message (Leddy, [1656], [1662]; also see [0300]). The motivation regarding the obviousness of claim 6 is also applied to claim 7.

Claims 15-16 and 24-25 are rejected for similar reasons as stated above.
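A sketch of the claim-6/7 mechanism: N-grams of message text, a regularized logistic regression, and per-N-gram weight coefficients read back as influence scores. The four-message corpus and all parameters are toy assumptions; scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "verify your account now or it will be suspended",
    "click here to confirm your password immediately",
    "meeting notes attached from yesterday",
    "lunch on thursday works for me",
]
labels = [1, 1, 0, 0]  # 1 = phishing

vec = CountVectorizer(ngram_range=(1, 2))  # word unigrams and bigrams
X = vec.fit_transform(texts)

# L2 regularization (strength controlled by C) yields the per-N-gram
# weight coefficients that claim 7 describes
clf = LogisticRegression(penalty="l2", C=1.0).fit(X, labels)

# the largest-magnitude coefficients are the "important features" that
# most strongly influence the binary classification
weights = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                 key=lambda t: -abs(t[1]))
for ngram, w in weights[:5]:
    print(f"{ngram!r}: {w:+.3f}")
```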
6. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

a) US Patent Application No. 20210168161 to Dunn et al. discloses a cyber-threat defense system for a network, including its email domain, that protects the network from cyber threats. Modules utilize machine learning models and communicate with a cyber threat module. Modules analyze a wide range of metadata from the observed email communications. The cyber threat module analyzes, with the machine learning models trained on a normal behavior of email activity and user activity associated with the network and its email domain, when a deviation from the normal behavior of email activity and user activity is occurring. A mass email association detector determines a similarity between highly similar emails being (i) sent from or (ii) received by a collection of two or more individual users in the email domain in a substantially simultaneous time frame. Mathematical models can be used to determine similarity weighting in order to derive a similarity score between compared emails.

b) US Patent Application No. 20140215617 to Smith et al. discloses a system and a method for advanced malware analysis. The method filters incoming messages with a watch-list, the incoming messages including attachments; if an incoming message matches the watch-list, forwards the message to a malware detection engine; strips the attachments from the forwarded message, the one or more attachments including one or more executable files; launches a plurality of sandboxes; executes each of the executable files in the plurality of sandboxes, the sandboxes generating analysis results that may be used to determine whether each executable file is malicious; normalizes the analysis results; evaluates the risk level of the attachments to the forwarded message based on the normalized analysis results of the executable files in the attachments; and, if the risk level of an attachment to the forwarded message is above a certain level, determines that the forwarded message is malicious and permanently quarantines the forwarded message.

Conclusion

7. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARUNUR RASHID, whose telephone number is (571) 270-7195. The examiner can normally be reached 9 AM to 5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Eleni A. Shiferaw, can be reached at (571) 272-3867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HARUNUR RASHID/
Primary Examiner, Art Unit 2497

Prosecution Timeline

Nov 29, 2021: Application Filed
Nov 04, 2023: Non-Final Rejection — §103
Feb 01, 2024: Response Filed
May 03, 2024: Final Rejection — §103
Jun 27, 2024: Response after Non-Final Action
Jul 08, 2024: Examiner Interview (Telephonic)
Jul 09, 2024: Response after Non-Final Action
Jul 18, 2024: Request for Continued Examination
Jul 24, 2024: Response after Non-Final Action
Jul 27, 2024: Non-Final Rejection — §103
Oct 04, 2024: Response Filed
Jan 11, 2025: Final Rejection — §103
Mar 14, 2025: Response after Non-Final Action
Apr 09, 2025: Request for Continued Examination
Apr 22, 2025: Response after Non-Final Action
May 07, 2025: Non-Final Rejection — §103
Aug 11, 2025: Response Filed
Aug 11, 2025: Response after Non-Final Action
Dec 10, 2025: Response Filed
Apr 03, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner involving similar technology

Patent 12603869: PRIVACY SOLUTION FOR IMAGES LOCALLY GENERATED AND STORED IN EDGE SERVERS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12603758: METHOD, APPARATUS, AND COMPUTER PROGRAM FOR SETTING ENCRYPTION KEY IN WIRELESS COMMUNICATION SYSTEM, AND RECORDING MEDIUM FOR SAME (granted Apr 14, 2026; 2y 5m to grant)
Patent 12593211: SELECTIVE VEHICLE SECURITY LOG DATA COMMUNICATION CONTROL (granted Mar 31, 2026; 2y 5m to grant)
Patent 12592952: GRAPHICS PROCESSING UNIT OPTIMIZATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12578927: METHOD FOR CALCULATING A TRANSITION FROM A BOOLEAN MASKING TO AN ARITHMETIC MASKING (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 7-8
Grant Probability: 76%
Grant Probability With Interview: 99% (+36.9%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 620 resolved cases by this examiner. Grant probability is derived from the career allow rate.
