Prosecution Insights
Last updated: April 19, 2026
Application No. 18/596,037

CYBERSECURITY ENFORCEMENT USING SYNTHETIC PHISHING

Non-Final OA — §103, §112
Filed
Mar 05, 2024
Examiner
NIPA, WASIKA
Art Unit
2433
Tech Center
2400 — Computer Networks
Assignee
Capital One Services LLC
OA Round
3 (Non-Final)
75%
Grant Probability
Favorable
3-4
OA Rounds
2y 11m
To Grant
99%
With Interview

Examiner Intelligence

Grants 75% — above average
75%
Career Allow Rate
226 granted / 302 resolved
+16.8% vs TC avg
Strong +30% interview lift
+29.7%
Interview Lift
with vs. without interview, among resolved cases
Typical timeline
2y 11m
Avg Prosecution
18 currently pending
Career history
320
Total Applications
across all art units
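The panel's headline figures are simple ratios over the examiner's resolved and pending cases. As a minimal sketch (assuming the dashboard rounds the allow rate to the nearest whole percent for display), the numbers follow directly from the stated counts:

```python
# Sketch: deriving the "Examiner Intelligence" headline figures from the
# counts shown above. The rounding behavior is an assumption about how the
# dashboard displays the rate; the counts themselves come from the panel.

granted = 226    # career grants
resolved = 302   # career resolved cases
pending = 18     # currently pending applications

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 74.8%, displayed as 75%

total = resolved + pending
print(f"Total applications: {total}")           # 320, matching the panel
```

The same ratio underpins the "Grant Probability" projection later on the page, which the dashboard notes is "derived from career allow rate."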

Statute-Specific Performance

§101
13.5%
-26.5% vs TC avg
§103
50.6%
+10.6% vs TC avg
§102
3.3%
-36.7% vs TC avg
§112
17.4%
-22.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 302 resolved cases

Office Action

§103 §112
Detailed Action

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The RCE filed on 01/21/2026 and the claims filed on 01/12/2026 have been acknowledged. Claims 1-20 are currently pending and have been considered below. Claims 1, 7, and 14 are independent claims. Claims 1-2, 4-5, 7-8, 10-12, 14-15, and 17-19 have been amended.

Continued Examination under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/21/2026 has been entered.

Priority

No priority is claimed.

Drawings

The drawings filed on 03/05/2024 are accepted by the Examiner.

Response to Arguments

Applicant's arguments in the amendment filed on 01/12/2026 have been fully considered, but they are not persuasive for the reasons set forth below. On page 8 of the arguments filed on 01/12/2026, applicant argued that the cited art does not teach the amended limitation “wherein a second communication mode for a second synthetic phishing attempt is determined using a machine learning model”. The Examiner respectfully disagrees. The first communication mode is defined as email, phone, or text in claim 2, but the second communication mode is not explicitly defined. Per Morris, col 23, line 55-65 and col 24, line 5-15, if the user environment information shows that the user regularly accesses their devices each morning at 8 am, the system may generate a user-specific phishing lure that is deployed to the user at or near 8 am.
If the user environment information shows that the user is consuming content instead of creating content at particular times of the day or after taking particular actions, the anti-phishing system can deploy a phishing lure at those particular times of the day when the user is more likely to see the incoming digital communication. Thus, generating user-specific phishing lures from user environment information is mapped to the second mode of communication. Per col 17, line 5-10, user feedback can indicate when a user identifies as a threat a piece of digital communication that was not flagged as a threat by the system. This user feedback can be used to further refine the system, such as by improving one or more templates or further training one or more machine learning models.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 1, 7, and 14 recite the limitation “wherein a second communication mode for a second synthetic phishing attempt”. Claims 2, 8, and 15 further recite “wherein the first communication mode includes one or more of email, phone or text”.
Paragraph [0010] of the specification discusses that “phishing attempts can occur via different communication modes (e.g., email, phone, text, or the like)”, but the specification never differentiates how a phishing attempt takes place in the first mode versus the second mode. Claims 2, 8, and 15 clearly define the first communication mode as including email, phone, or text; no claim defines what is included in the second communication mode. Based on [0018]-[0020] of the specification filed on 03/05/2024, the Examiner understands that the risk score is calculated based on email-based or phone-based communication, so the first communication mode could be email and the second communication mode could be phone. Claim 1 is therefore incomprehensible in light of the specification: if email, text, and phone are examples of the first communication mode, it is unclear what the examples of the second communication mode could be, because the specification does not explicitly define either mode. Thus the claims are vague and indefinite. Dependent claims 3-6, 9-13, and 16-20 inherit the deficiencies of independent claims 1, 7, and 14, respectively, and are therefore also rejected.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Adams (US Patent Application Publication No. 2023/0336588 A1) in view of Morris (US Patent No. 12,047,416 B1).

Regarding Claim 1, Adams discloses a system for cybersecurity enforcement, the system comprising: one or more memories (Adams, Fig-1); and one or more processors, communicatively coupled to the one or more memories (Adams, Fig-1), configured to: generate a first synthetic phishing attempt that targets a user via a communication mode (Adams, Fig-2A, ¶[0033], the phishing lure generation platform may train a natural language model or algorithm to automatically produce synthetic phishing lures. The phishing lure generation model identifies formats of phishing lures that may be effective for a particular recipient. ¶[0038], in step 204, the phishing lure generation platform may send the simulated synthetic phishing lure to the user device); identify a user response or non-response to the synthetic phishing attempt (Adams, Fig-2B, ¶[0040], the target user may be tested on how well they may identify and avoid a phishing lure amidst their normal course of business.
By displaying the simulated synthetic phishing lure alongside legitimate messages, the target user’s ability to distinguish between legitimate and phishing messages may be tested); and update, based at least in part on the communication mode and the user response or non-response, a risk score specific to the user (Adams, ¶[0044], ¶[0051], the inbound message filtering system may train the phish detection model using signals indicative of the automated generation of the synthetic lures, which may be used to distinguish between synthetic and manually generated lures. ¶[0056], the inbound message filtering system may quarantine the inbound message, route the inbound message to an isolation environment and secure sandbox, modify one or more traffic filtering rules, and execute other security actions).

Adams does not explicitly discuss the following limitation that Morris teaches: wherein a second communication mode for a second synthetic phishing attempt is determined using a machine learning model, the determination based on the first synthetic phishing attempt (Morris, col 23, line 55-65, col 24, line 5-15, if the user environment information shows that the user regularly accesses their devices each morning at 8 am, the system may generate a user-specific phishing lure that is deployed to the user at or near 8 am. If the user environment information shows that the user is consuming content instead of creating content at particular times of the day or after taking particular actions, the anti-phishing system can deploy a phishing lure at those particular times of the day when the user is more likely to see the incoming digital communication. Thus, generating user-specific phishing lures from user environment information is mapped to the second mode of communication. Col 17, line 5-10, user feedback can indicate when a user identifies as a threat a piece of digital communication that was not flagged as a threat by the system.
This user feedback can be used to further refine the system, such as by improving one or more templates or further training one or more machine learning models); and enable a security limitation on user interactions via the first communication mode based on the risk score, wherein the security limitation limits the user interactions via the first communication mode more than other user interactions via the second communication mode (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat. Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).

Adams and Morris are analogous art because they are from the “same field of endeavor” and the same “problem solving area”. Namely, they pertain to the field of “anti-phishing security and monitoring content to protect against phishing attack”. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the invention of Adams in view of the teachings of Morris to include the idea of determining that the digital communication is a threat based at least in part on the user-specific network behavior information associated with the assets, and employing a threat-abatement procedure with respect to the digital communication (Morris, col 2, line 5-10).

Regarding Claim 2, Adams in view of Morris discloses the system of claim 1, wherein the first communication mode includes one or more of email, phone, or text (Adams, ¶[0039], the phishing lure generation platform may send the simulated synthetic phishing lure as an electronic message (e.g., email, text message, chat message, or other electronic message). Also Morris, Fig-9, col 27, line 15-30).
Regarding Claim 3, Adams in view of Morris discloses the system of claim 1, wherein the one or more processors are further configured to: receive an indication that the user has opted in to synthetic phishing attempts (Adams, ¶[0040], the target user may be tested on how well they may identify and avoid a phishing lure amidst their normal course of business. By displaying the simulated synthetic phishing lure alongside legitimate messages, the target user’s ability to distinguish between the legitimate and phishing messages may be tested).

Regarding Claim 4, Adams in view of Morris discloses the system of claim 1, wherein the one or more processors are further configured to: generate the first synthetic phishing attempt based at least in part on a previous phishing attempt targeting the user (Adams, ¶[0044], the phishing lure generation platform may update the phishing lure generation model using the feedback information. ¶[0045], the platform may leverage engagement feedback to dynamically refine models/algorithms).

Regarding Claim 5, Adams in view of Morris discloses the system of claim 1, wherein the risk score is specific to the first communication mode (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat. Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).

Regarding Claim 6, Adams in view of Morris discloses the system of claim 1, wherein the one or more processors are further configured to: perform a cybersecurity action based at least in part on the risk score, wherein the cybersecurity action comprises enablement of the security limitation (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat.
Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).

Regarding Claim 7, Adams discloses a method of cybersecurity enforcement, comprising: generating a first synthetic phishing attempt that targets a user via a first communication mode (Adams, Fig-2A, ¶[0033], the phishing lure generation platform may train a natural language model or algorithm to automatically produce synthetic phishing lures. The phishing lure generation model identifies formats of phishing lures that may be effective for a particular recipient. ¶[0038], in step 204, the phishing lure generation platform may send the simulated synthetic phishing lure to the user device); and updating, based at least in part on the first communication mode, a risk profile specific to the user (Adams, ¶[0044], ¶[0051], the inbound message filtering system may train the phish detection model using signals indicative of the automated generation of the synthetic lures, which may be used to distinguish between synthetic and manually generated lures. ¶[0056], the inbound message filtering system may quarantine the inbound message, route the inbound message to an isolation environment and secure sandbox, modify one or more traffic filtering rules, and execute other security actions).

Adams does not explicitly discuss the following limitation that Morris teaches: wherein a second communication mode for a second synthetic phishing attempt is determined using a machine learning model, the determination based on the first synthetic phishing attempt (Morris, col 23, line 55-65, col 24, line 5-15, if the user environment information shows that the user regularly accesses their devices each morning at 8 am, the system may generate a user-specific phishing lure that is deployed to the user at or near 8 am.
If the user environment information shows that the user is consuming content instead of creating content at particular times of the day or after taking particular actions, the anti-phishing system can deploy a phishing lure at those particular times of the day when the user is more likely to see the incoming digital communication. Thus, generating user-specific phishing lures from user environment information is mapped to the second mode of communication. Col 17, line 5-10, user feedback can indicate when a user identifies as a threat a piece of digital communication that was not flagged as a threat by the system. This user feedback can be used to further refine the system, such as by improving one or more templates or further training one or more machine learning models); and enabling a security limitation on user interactions via the first communication mode based on the risk score, wherein the security limitation limits the user interactions via the first communication mode more than other user interactions via the second communication mode (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat. Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).

Adams and Morris are analogous art because they are from the “same field of endeavor” and the same “problem solving area”. Namely, they pertain to the field of “anti-phishing security and monitoring content to protect against phishing attack”.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the invention of Adams in view of the teachings of Morris to include the idea of determining that the digital communication is a threat based at least in part on the user-specific network behavior information associated with the assets, and employing a threat-abatement procedure with respect to the digital communication (Morris, col 2, line 5-10).

Regarding Claim 8, Adams in view of Morris discloses the method of claim 7, wherein the first communication mode includes one or more of email, phone, or text (Adams, ¶[0039], the phishing lure generation platform may send the simulated synthetic phishing lure as an electronic message (e.g., email, text message, chat message, or other electronic message). Also Morris, Fig-9, col 27, line 15-30).

Regarding Claim 9, Adams in view of Morris discloses the method of claim 7, further comprising: receiving an indication that the user has opted in to synthetic phishing attempts (Adams, ¶[0040], the target user may be tested on how well they may identify and avoid a phishing lure amidst their normal course of business. By displaying the simulated synthetic phishing lure alongside legitimate messages, the target user’s ability to distinguish between the legitimate and phishing messages may be tested).

Regarding Claim 10, Adams in view of Morris discloses the method of claim 7, further comprising: generating the first synthetic phishing attempt based at least in part on a previous phishing attempt targeting the user (Adams, ¶[0044], the phishing lure generation platform may update the phishing lure generation model using the feedback information. ¶[0045], the platform may leverage engagement feedback to dynamically refine models/algorithms).
Regarding Claim 11, Adams in view of Morris discloses the method of claim 7, wherein updating the risk profile includes updating the risk profile based at least in part on a user response or non-response to the first synthetic phishing attempt (Adams, ¶[0044], ¶[0051], the inbound message filtering system may train the phish detection model using signals indicative of the automated generation of the synthetic lures, which may be used to distinguish between synthetic and manually generated lures. ¶[0056], the inbound message filtering system may quarantine the inbound message, route the inbound message to an isolation environment and secure sandbox, modify one or more traffic filtering rules, and execute other security actions).

Regarding Claim 12, Adams in view of Morris discloses the method of claim 7, wherein the risk profile includes a risk score that is specific to the first communication mode (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat. Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).

Regarding Claim 13, Adams in view of Morris discloses the method of claim 7, further comprising: performing a cybersecurity action based at least in part on the risk profile, wherein the cybersecurity action comprises enabling the security limitation (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat. Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).
Regarding Claim 14, Adams discloses a non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a cybersecurity enforcement system, cause the cybersecurity enforcement system to (Adams, Fig-1): generate a first synthetic phishing attempt that targets a user via a first communication mode (Adams, Fig-2A, ¶[0033], the phishing lure generation platform may train a natural language model or algorithm to automatically produce synthetic phishing lures. The phishing lure generation model identifies formats of phishing lures that may be effective for a particular recipient. ¶[0038], in step 204, the phishing lure generation platform may send the simulated synthetic phishing lure to the user device); and update, based at least in part on the first communication mode, a risk score specific to the user (Adams, ¶[0044], ¶[0051], the inbound message filtering system may train the phish detection model using signals indicative of the automated generation of the synthetic lures, which may be used to distinguish between synthetic and manually generated lures. ¶[0056], the inbound message filtering system may quarantine the inbound message, route the inbound message to an isolation environment and secure sandbox, modify one or more traffic filtering rules, and execute other security actions).

Adams does not explicitly discuss the following limitation that Morris teaches: wherein a second communication mode for a second synthetic phishing attempt is determined using a machine learning model, the determination based on the first synthetic phishing attempt (Morris, col 23, line 55-65, col 24, line 5-15, if the user environment information shows that the user regularly accesses their devices each morning at 8 am, the system may generate a user-specific phishing lure that is deployed to the user at or near 8 am.
If the user environment information shows that the user is consuming content instead of creating content at particular times of the day or after taking particular actions, the anti-phishing system can deploy a phishing lure at those particular times of the day when the user is more likely to see the incoming digital communication. Thus, generating user-specific phishing lures from user environment information is mapped to the second mode of communication. Col 17, line 5-10, user feedback can indicate when a user identifies as a threat a piece of digital communication that was not flagged as a threat by the system. This user feedback can be used to further refine the system, such as by improving one or more templates or further training one or more machine learning models); and enable a security limitation on user interactions via the first communication mode based on the risk score, wherein the security limitation limits the user interactions via the first communication mode more than other user interactions via the second communication mode (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat. Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).

Adams and Morris are analogous art because they are from the “same field of endeavor” and the same “problem solving area”. Namely, they pertain to the field of “anti-phishing security and monitoring content to protect against phishing attack”.
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the invention to modify the invention of Adams in view of the teachings of Morris to include the idea of determining that the digital communication is a threat based at least in part on the user-specific network behavior information associated with the assets, and employing a threat-abatement procedure with respect to the digital communication (Morris, col 2, line 5-10).

Regarding Claim 15, Adams in view of Morris discloses the non-transitory computer-readable medium of claim 14, wherein the first communication mode includes one or more of email, phone, or text (Adams, ¶[0039], the phishing lure generation platform may send the simulated synthetic phishing lure as an electronic message (e.g., email, text message, chat message, or other electronic message). Also Morris, Fig-9, col 27, line 15-30).

Regarding Claim 16, Adams in view of Morris discloses the non-transitory computer-readable medium of claim 14, wherein the one or more instructions further cause the cybersecurity enforcement system to: receive an indication that the user has opted in to synthetic phishing attempts (Adams, ¶[0040], the target user may be tested on how well they may identify and avoid a phishing lure amidst their normal course of business. By displaying the simulated synthetic phishing lure alongside legitimate messages, the target user’s ability to distinguish between the legitimate and phishing messages may be tested).

Regarding Claim 17, Adams in view of Morris discloses the non-transitory computer-readable medium of claim 14, wherein the one or more instructions further cause the cybersecurity enforcement system to: generate the first synthetic phishing attempt based at least in part on a previous phishing attempt targeting the user (Adams, ¶[0044], the phishing lure generation platform may update the phishing lure generation model using the feedback information.
¶[0045], the platform may leverage engagement feedback to dynamically refine models/algorithms).

Regarding Claim 18, Adams in view of Morris discloses the non-transitory computer-readable medium of claim 14, wherein the one or more instructions, that cause the cybersecurity enforcement system to update the risk score, cause the cybersecurity enforcement system to update the risk score based at least in part on a user response or non-response to the first synthetic phishing attempt (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat. Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).

Regarding Claim 19, Adams in view of Morris discloses the non-transitory computer-readable medium of claim 14, wherein the risk score is specific to the first communication mode (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat. Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).

Regarding Claim 20, Adams in view of Morris discloses the non-transitory computer-readable medium of claim 14, wherein the one or more instructions further cause the cybersecurity enforcement system to: perform a cybersecurity action based at least in part on the risk score, wherein the cybersecurity action comprises enablement of the security limitation (Morris, col 13, line 50-55, the user asset risk score can be a score indicative of the level of risk of the user being susceptible to a threat. Col 14, Fig-2, line 45-60, deploying the threat abatement procedure can include generating and presenting an alert to a user. Col 21, line 45-60. Col 23, line 5-10).
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure (see PTO-Form 892).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WASIKA NIPA, whose telephone number is (571) 272-8923. The examiner can normally be reached M-F, 8 am to 5 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jeffrey Pwu, can be reached at 571-272-6798. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WASIKA NIPA/
Primary Examiner, Art Unit 2433

Prosecution Timeline

Mar 05, 2024
Application Filed
Jul 03, 2025
Non-Final Rejection — §103, §112
Aug 29, 2025
Interview Requested
Sep 09, 2025
Applicant Interview (Telephonic)
Sep 10, 2025
Examiner Interview Summary
Sep 22, 2025
Response Filed
Nov 20, 2025
Final Rejection — §103, §112
Dec 11, 2025
Interview Requested
Dec 17, 2025
Applicant Interview (Telephonic)
Dec 17, 2025
Examiner Interview Summary
Jan 12, 2026
Response after Non-Final Action
Jan 21, 2026
Request for Continued Examination
Jan 28, 2026
Response after Non-Final Action
Feb 14, 2026
Non-Final Rejection — §103, §112
Mar 24, 2026
Examiner Interview Summary
Mar 24, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592965
SECURITY SCORING FOR TYPOGRAPHICAL ERRORS
2y 5m to grant Granted Mar 31, 2026
Patent 12587857
SIGNAL SPOOF DETECTION AT BASE STATIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12585807
AUTHORIZATION AUDIT FOR ACCESS TO PRIVILEGED USER DATA
2y 5m to grant Granted Mar 24, 2026
Patent 12587847
ENABLING COORDINATED IDENTITY MANAGEMENT BETWEEN AN OPERATOR-MANAGED MOBILE-EDGE PLATFORM AND AN EXTERNAL NETWORK
2y 5m to grant Granted Mar 24, 2026
Patent 12574367
ESTABLISHING A DATA SUBSCRIPTION FOR UTILITY USAGE INFORMATION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+29.7%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 302 resolved cases by this examiner. Grant probability derived from career allow rate.
