Prosecution Insights
Last updated: April 19, 2026
Application No. 18/745,255

INFORMATION SECURITY COMPLIANCE PLATFORM

Non-Final Office Action — § 103 and nonstatutory double patenting
Filed: Jun 17, 2024
Examiner: TOLENTINO, RODERICK
Art Unit: 2439
Tech Center: 2400 — Computer Networks
Assignee: Clearops Inc.
OA Round: 1 (Non-Final)
Grant Probability: 77% — Favorable
OA Rounds: 1-2
To Grant: 3y 4m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 77% (545 granted / 705 resolved) — +19.3% vs Tech Center average; grants above average
Interview Lift: +35.4% — strong lift among resolved cases with an interview vs. without
Typical Timeline: 3y 4m average prosecution; 25 applications currently pending
Career History: 730 total applications across all art units
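The headline figures above are simple ratios; a quick sanity check using only the numbers shown on this page:

```python
granted, resolved = 545, 705               # career totals shown above
allow_rate = granted / resolved
print(f"{allow_rate:.1%}")                 # 77.3% (displayed as 77%)

# The +19.3% delta implies a Tech Center average of roughly:
tc_avg = allow_rate - 0.193
print(f"{tc_avg:.1%}")                     # 58.0%
```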

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 56.2% (+16.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center averages are estimates • Based on career data from 705 resolved cases
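Each delta above is stated relative to the estimated Tech Center average, so the implied baseline can be recovered by subtracting the signed delta from the examiner's figure:

```python
# (examiner rate, signed delta vs TC avg), both in percent, from the table above
stats = {
    "§101": (15.7, -24.3),
    "§103": (56.2, +16.2),
    "§102": (11.9, -28.1),
    "§112": (8.3, -31.7),
}
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg ≈ {rate - delta:.1f}%")
# On this page's figures, each statute resolves to a 40.0% baseline.
```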

Office Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

This Office Action is in response to the instant Application 18/745,255, filed on 6/17/2024, and the Preliminary Amendment filed on 9/27/2024. Claims 1-20 were cancelled and claims 21-40 were added in the Preliminary Amendment. Claims 21-40 are pending. This Office Action is Non-Final.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 21 and 31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 12 of U.S. Patent No. 12,015,648. Although the claims at issue are not identical, they are not patentably distinct from each other because all the limitations of the instant Application are anticipated by U.S. Patent No. 12,015,648. See the comparison below.
Instant Application, claim 21:

21. (New) A computer-implemented method for remediating vendor non-compliance with information and security criteria, the method comprising: automatically evaluating, by at least one computing device comparing a predetermined standard or threshold of at least one respective aspect of information and security criteria with vendor information obtained from at least one other computing device, that a vendor is not compliant with the information and security criteria; automatically determining, by at least one computing device, remedial action which, when completed, would bring the vendor in compliance with the at least one respective aspect of information and security criteria; automatically receiving, by at least one computing device from at least one other computing device, information representing the remedial action had been completed; automatically determining, by at least one computing device as a function of the information representing the completion of the remedial action, the vendor being in compliance with the at least one respective aspect of information and security criteria; automatically generating, by at least one computing device, a report representing the vendor being in compliance with the information and security criteria; and automatically transmitting, by at least one computing device to at least one other computing device, the report.

Instant Application, claim 31:

31. (New) A computer-implemented system for remediating vendor non-compliance with information and security criteria, the system comprising: at least one computing device configured to execute programming instructions stored on non-transitory processor readable media which, when executed, configure the at least one computing device to: automatically evaluate, by comparing a predetermined standard or threshold of at least one respective aspect of information and security criteria with vendor information obtained from at least one other computing device, that a vendor is not compliant with the information and security criteria; automatically determine remedial action which, when completed, would bring the vendor in compliance with the at least one respective aspect of information and security criteria; automatically receive from at least one other computing device, information representing the remedial action had been completed; automatically determine, as a function of the information representing the completion of the remedial action, the vendor being in compliance with the at least one respective aspect of information and security criteria; automatically generate a report representing the vendor being in compliance with the information and security criteria; and automatically transmit to at least one other computing device the report.

U.S. Patent No. 12,015,648, claim 1 (see also dependent claims 2 and 3):

1. A computer-implemented method to monitor and determine vendor compliance with information and security criteria, the method comprising: automatically evaluating, by at least one computing device, the compliance of each of a plurality of vendors that provide a good and/or service with the information and security criteria, wherein the evaluating includes comparing a predetermined standard or threshold of at least one respective aspect of the information and security criteria and at least some of the information obtained from a plurality of remotely located computing devices; automatically providing, by at least one computing device via a graphical user interface, selectable options including vendor information representing each of the plurality of vendors, wherein the vendor information represents at least one change to a respective vendor account and a representation of risk associated with the at least one change; automatically providing, by at least one computing device in response to receiving a selection of one of the selectable options, compliance information representing at least one respective aspect of the information and security criteria; determining, by at least one computing device based at least on an evaluation of the respective aspect, whether the vendor is in compliance or is out of compliance with the information and security criteria; and automatically transmitting, by at least one computing device to a remotely located computing device, a report identifying that the vendor is in compliance or out of compliance with the information and security criteria.

2. The computer-implemented method of claim 1, further comprising: where the vendor is determined to be out of compliance with the information and security criteria: determining, by at least one computing device, at least one remedial action to bring the vendor in compliance with the at least one aspect.

3. The computer-implemented method of claim 2, further comprising: identifying, by at least one computing device, that the at least one remedial action has been taken; determining, by at least one computing device that the vendor is in compliance with the information and security criteria; and automatically transmitting, by at least one computing device to a remotely located computing device, a report identifying that the vendor is in compliance with the information and security criteria.

U.S. Patent No. 12,015,648, claim 12 (see also dependent claims 13 and 14):

12. A computer-implemented system to monitor and determine vendor compliance with information and security criteria, the system comprising: at least one computing device configured by executing code to perform steps, including to: automatically evaluate the compliance of each of a plurality of vendors that provide a good and/or service with the information and security criteria, wherein the evaluating includes comparing a predetermined standard or threshold of at least one respective aspect of the information and security criteria and at least some of the information obtained from a plurality of remotely located computing devices; automatically provide via a graphical user interface, selectable options including vendor information representing each of the plurality of vendors, wherein the vendor information represents at least one change to a respective vendor account and a representation of risk associated with the at least one change; automatically provide, in response to receiving a selection of one of the selectable options, compliance information representing at least one respective aspect of the information and security criteria; determine, based at least on an evaluation of the at least one respective aspect, whether the vendor is in compliance or is out of compliance with the information and security criteria; and automatically transmit, to a remotely located computing device, a report identifying that the vendor is in compliance or out of compliance with the information and security criteria.

13. The computer-implemented system of claim 12, in the event that the at least one computing device determines that the vendor is out of compliance with the information and security criteria: the at least one computing device further configured to determine at least one remedial action to bring the vendor in compliance with the at least one respective aspect.

14. The computer-implemented system of claim 13, further comprising: the at least one computing device further configured to identify that the at least one remedial action has been taken; the at least one computing device further configured to determine that the vendor is in compliance with the information and security criteria; and the at least one computing device further configured to automatically transmit, to a remotely located computing device, a report identifying that the vendor is in compliance with the information and security criteria.

Claims 22-30 and 32-40 are also rejected under nonstatutory double patenting for similar reasons: they depend from claims 21 and 31, respectively, and therefore inherit the rejections of the independent claims.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 21, 22, 24-32 and 34-40 are rejected under 35 U.S.C. 103 as being unpatentable over Grimm et al. (US 2019/0312887) in view of Gleichauf et al. (US 9,436,820).
As per claim 21, Grimm teaches a computer-implemented method for remediating vendor non-compliance with information and security criteria, the method comprising:

automatically evaluating, by at least one computing device comparing a predetermined standard or threshold of at least one respective aspect of information and security criteria with vendor information obtained from at least one other computing device, that a vendor is not compliant with the information and security criteria (Grimm, Paragraph 0167 recites “As shown in step 1008, the method 1000 may include monitoring the endpoint using any of the techniques described herein including, without limitation, behavioral monitoring, signature-based monitoring (such as detecting malware executing on the endpoint with an antivirus scanner), network traffic monitoring, and any other techniques or combinations of the foregoing suitable for detecting the presence of malware or other compromised conditions on the endpoint.” And Paragraph 0168 recites “As shown in step 1010, the method 1000 may include detecting a compromised state as a result of the monitoring. If no compromised state is detected, the method 1000 may return to step 1008 where monitoring of the endpoint continues using any suitable techniques. When a compromised state is detected, the method 1000 may, in response to the compromised state, proceed to step 1012 for further processing.”);

automatically determining, by at least one computing device, remedial action which, when completed, would bring the vendor in compliance with the at least one respective aspect of information and security criteria (Grimm, Paragraph 0170 recites “As shown in step 1014, the method 1000 may include remediating the endpoint using any suitable techniques. In one aspect, this may include notifying a network device for the enterprise network, such as a gateway, firewall, router, or threat management facility, of the compromised state in order to facilitate management of more general security measures across the enterprise network. This may also or instead include deploying local security measures to identify and address the source of a compromise, such as by scanning the endpoint for affected files. In another aspect, this may include receiving and installing/executing malware removal tools from a threat management facility, or otherwise taking steps alone or in cooperation with other security infrastructure for the enterprise network to remediate the endpoint.”).

Grimm fails to teach automatically receiving, by at least one computing device from at least one other computing device, information representing the remedial action had been completed; automatically determining, by at least one computing device as a function of the information representing the completion of the remedial action, the vendor being in compliance with the at least one respective aspect of information and security criteria; automatically generating, by at least one computing device, a report representing the vendor being in compliance with the information and security criteria; and automatically transmitting, by at least one computing device to at least one other computing device, the report.

However, in an analogous art, Gleichauf teaches automatically receiving, by at least one computing device from at least one other computing device, information representing the remedial action had been completed; automatically determining, by at least one computing device as a function of the information representing the completion of the remedial action, the vendor being in compliance with the at least one respective aspect of information and security criteria; automatically generating, by at least one computing device, a report representing the vendor being in compliance with the information and security criteria; and automatically transmitting, by at least one computing device to at least one other computing device, the report (Gleichauf, Col. 17 Line 60 - Col. 18 Line 11 recites “In another example, assume that a user device 22 establishes a connection with a network resource 25 in the network 24, via the data communications device 26. Further assume that the user device 22 is not deemed by the policy server 28 to be compliant with security policy and is given restricted access to network resources 25 sufficient only to remediate itself. Before the remediation is complete, the user device 22 will respond to posture update queries from the data communications device 22 in a way that indicates no posture change. However, once remediation has completed successfully, the posture agent 30 will indicate that the posture has changed on the next posture update query from the data communications device 26. In such an arrangement, when the data communications device 26 transmits the posture update query to the user device 22, the user device 22 indicates the change in configuration of the operating system and anti-virus application to the data communications device 26 in a query response. When the data communications device reviews the query response and detects the change in posture, the data communications device 26 triggers a full posture credential challenge-response sequence as explained in system operation described in FIG. 1.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Gleichauf’s Controlling Access To Resources In A Network with Grimm’s secure endpoint in a heterogenous enterprise network because it offers the advantage of helping inform a network when an entity in the network is safe to communicate with.

As per claim 22, Grimm in combination with Gleichauf teaches the method of claim 21. Grimm further teaches wherein the information and security criteria regards at least one of secure network presence, regulatory criteria, intellectual property, data management, policy management, and jurisdictional requirements (Grimm, Paragraph 0167 recites “As shown in step 1008, the method 1000 may include monitoring the endpoint using any of the techniques described herein including, without limitation, behavioral monitoring, signature-based monitoring (such as detecting malware executing on the endpoint with an antivirus scanner), network traffic monitoring, and any other techniques or combinations of the foregoing suitable for detecting the presence of malware or other compromised conditions on the endpoint.” And Paragraph 0168 recites “As shown in step 1010, the method 1000 may include detecting a compromised state as a result of the monitoring. If no compromised state is detected, the method 1000 may return to step 1008 where monitoring of the endpoint continues using any suitable techniques. When a compromised state is detected, the method 1000 may, in response to the compromised state, proceed to step 1012 for further processing.”).
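For orientation only (this sketch is not part of the prosecution record, and every name in it is hypothetical), the loop recited in claim 21 — evaluate against a predetermined threshold, determine remediation, receive completion information, re-determine compliance, then generate and transmit a report — can be illustrated as:

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    controls: dict  # hypothetical: aspect -> reported value

# Hypothetical "predetermined standard or threshold" for one aspect.
THRESHOLDS = {"patch_age_days": 30}

def evaluate(vendor):
    # Compare each aspect of the criteria against its threshold.
    return {aspect: vendor.controls.get(aspect, float("inf")) <= limit
            for aspect, limit in THRESHOLDS.items()}

def remedial_actions(findings):
    # Determine actions that, when completed, would restore compliance.
    return [f"remediate {aspect}" for aspect, ok in findings.items() if not ok]

def process(vendor, remediation_done):
    findings = evaluate(vendor)
    actions = remedial_actions(findings)
    if actions and remediation_done:
        # Information that remediation completed drives re-determination.
        findings = {aspect: True for aspect in findings}
    # Generate the compliance report that would be transmitted.
    return {"vendor": vendor.name, "compliant": all(findings.values())}

v = Vendor("acme", {"patch_age_days": 90})
print(process(v, remediation_done=True))  # {'vendor': 'acme', 'compliant': True}
```

The sketch collapses the claimed multi-device exchange into function calls; it is meant only to make the claimed sequence of determinations easy to follow.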
As per claim 24, Grimm in combination with Gleichauf teaches the method of claim 21. Grimm further teaches after evaluating the vendor not being in compliance with the information and security criteria, automatically denying the vendor access, by at least one computing device, to at least one data source (Grimm, Paragraph 0169 recites “As shown in step 1012, the method 1000 may include isolating the endpoint. This may, for example, include self-isolating wherein the endpoint restricts communications between the endpoint and one or more other endpoints on the enterprise network, such as one or more other endpoints on a subnet of the enterprise network used by the endpoint, or more generally, other endpoints throughout the enterprise network.”).

As per claim 25, Grimm in combination with Gleichauf teaches the method of claim 24. Grimm further teaches wherein denying the vendor access further comprises at least one of: automatically revoking, by at least one computing device, at least one application programming interface (“API”) key; automatically suspending, by at least one computing device, a domain name server (“DNS”) registration; and automatically disabling, by at least one computing device, at least one computing device’s access to at least one public resource provided by the vendor (Grimm, Paragraph 0169 recites “As shown in step 1012, the method 1000 may include isolating the endpoint. This may, for example, include self-isolating wherein the endpoint restricts communications between the endpoint and one or more other endpoints on the enterprise network, such as one or more other endpoints on a subnet of the enterprise network used by the endpoint, or more generally, other endpoints throughout the enterprise network.”).
As per claim 26, Grimm in combination with Gleichauf teaches the method of claim 21. Grimm further teaches after evaluating the vendor not being in compliance with the information and security criteria, automatically providing, by at least one computing device to at least one other computing device, compliance information representing at least one respective aspect of the information and security criteria (Grimm, Paragraph 0167 recites “As shown in step 1008, the method 1000 may include monitoring the endpoint using any of the techniques described herein including, without limitation, behavioral monitoring, signature-based monitoring (such as detecting malware executing on the endpoint with an antivirus scanner), network traffic monitoring, and any other techniques or combinations of the foregoing suitable for detecting the presence of malware or other compromised conditions on the endpoint.” And Paragraph 0168 recites “As shown in step 1010, the method 1000 may include detecting a compromised state as a result of the monitoring. If no compromised state is detected, the method 1000 may return to step 1008 where monitoring of the endpoint continues using any suitable techniques. When a compromised state is detected, the method 1000 may, in response to the compromised state, proceed to step 1012 for further processing.”).

As per claim 27, Grimm in combination with Gleichauf teaches the method of claim 26. Grimm further teaches automatically transmitting, by at least one computing device, a message to a compliance platform, wherein the message includes an indication that an attempt at remediation of the non-compliant status has been undertaken (Grimm, Paragraph 0170 recites “As shown in step 1014, the method 1000 may include remediating the endpoint using any suitable techniques. In one aspect, this may include notifying a network device for the enterprise network, such as a gateway, firewall, router, or threat management facility, of the compromised state in order to facilitate management of more general security measures across the enterprise network. This may also or instead include deploying local security measures to identify and address the source of a compromise, such as by scanning the endpoint for affected files. In another aspect, this may include receiving and installing/executing malware removal tools from a threat management facility, or otherwise taking steps alone or in cooperation with other security infrastructure for the enterprise network to remediate the endpoint.”).

As per claim 28, Grimm in combination with Gleichauf teaches the method of claim 21. Grimm further teaches wherein the information from at least one other computing device is obtained on demand, scheduled periodically, or both obtained on demand and scheduled periodically (Grimm, Paragraph 0029 recites “The security management facility 122 may be operable to scan clients 144A-D on machines operating within the enterprise facility 102, or clients 144E-F otherwise managed by the threat management facility 100, for malicious code, to remove or quarantine certain applications and files, to prevent certain actions, to perform remedial actions, and to perform other security measures. In embodiments, scanning the clients 144A-D and/or 144E-F may include scanning some or all of the files stored thereon at any suitable time(s). For example, this may include scanning on a periodic basis, scanning an application when the application is executed, scanning files as the files are transmitted to or from one of the clients 144A-F, or the like. The scanning of the applications and files may be performed to detect known malicious code or known unwanted applications. In general, new malicious code and unwanted applications are continually developed and distributed, and the known code database for the security management facility 122 may be updated on a periodic basis, on an on-demand basis, on an alert basis, or the like.”).

As per claim 29, Grimm in combination with Gleichauf teaches the method of claim 21. Grimm further teaches after evaluating the vendor being in compliance with the information and security criteria, automatically enabling, by at least one computing device, the vendor access to at least one data source (Grimm, Paragraph 0171 recites “After any remediation steps have been completed, the endpoint may resume communications with other endpoints (e.g., terminate self-isolation) and return to monitoring for compromised states in step 1008.”).

As per claim 30, Grimm in combination with Gleichauf teaches the method of claim 21. Grimm further teaches wherein enabling the vendor access further comprises at least one of: automatically providing, by at least one computing device, at least one application programming interface (“API”) key; automatically resuming, by at least one computing device, a domain name server (“DNS”) registration; and automatically enabling, by at least one computing device, at least one computing device’s access to at least one public resource provided by the vendor (Grimm, Paragraph 0171 recites “After any remediation steps have been completed, the endpoint may resume communications with other endpoints (e.g., terminate self-isolation) and return to monitoring for compromised states in step 1008.”).

Claims 31, 32, and 34-40 are directed to systems corresponding to method claims 21, 22, and 24-30, respectively. They are similar in scope to those claims and are therefore rejected under similar rationale.

Claims 23 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Grimm et al. (US 2019/0312887) and Gleichauf et al. (US 9,436,820), and in further view of Ripolles Mateu et al. (US 2020/0110882).
As per claim 23, Grimm in combination with Gleichauf teaches the method of claim 21, but fails to teach wherein the information representing the completion of the remedial action is included in a document, and wherein automatically determining the vendor being in compliance with the at least one respective aspect of information and security criteria further comprises: processing the document as a function of text processing or natural language processing.

However, in an analogous art, Ripolles Mateu teaches wherein the information representing the completion of the remedial action is included in a document, and wherein automatically determining the vendor being in compliance with the at least one respective aspect of information and security criteria further comprises: processing the document as a function of text processing or natural language processing (Ripolles Mateu, Paragraph 0015 recites “To facilitate distinguishing between topics which belong to the same or similar semantic fields, previously-known domain information is modeled with a bipartite graph. The bipartite graph created for the software security domain indicates a set of risks and a set of mitigation actions, both of which are obtained from security control documentation. A topic categorization system utilizes the bipartite graph to identify which risks and mitigation actions were discussed in a conversation by first using existing NLP techniques to extract relevant topics from conversation text and subsequently mapping the topics to the bipartite graph. A security assessment report identifying potential security threats and corresponding mitigation actions is generated based on the resulting mappings. Because the modeling of relationships between risks and mitigation actions in the bipartite graph creates risk-mitigation sets, the structure of the bipartite graph facilitates creation of a complete security assessment report.”).
It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Ripolles Mateu's bipartite graph-based topic categorization system with Grimm's secure endpoint in a heterogeneous enterprise network because it offers the advantage of using NLP to help process document data.

Regarding claim 33, claim 33 is directed to a system corresponding to the method of claim 23; it is similar in scope to claim 23 and is therefore rejected under a similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RODERICK TOLENTINO, whose telephone number is (571) 272-2661. The examiner can normally be reached Mon-Fri, 8am-4pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Luu Pham, can be reached at 571-270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/RODERICK TOLENTINO/
Primary Examiner, Art Unit 2439

Prosecution Timeline

Jun 17, 2024
Application Filed
Sep 27, 2024
Response after Non-Final Action
Nov 19, 2025
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603907
SERVER AND METHOD FOR PROVIDING ONLINE THREAT DATA BASED ON USER-CUSTOMIZED KEYWORDS FOR PRIVATE CHANNEL
2y 5m to grant Granted Apr 14, 2026
Patent 12592915
INFERENCE-BASED SELECTIVE FLOW INSPECTION
2y 5m to grant Granted Mar 31, 2026
Patent 12580946
SYSTEMS AND METHODS FOR TRIGGERING TOKEN ALERTS
2y 5m to grant Granted Mar 17, 2026
Patent 12580948
CYBERSECURITY OPERATIONS MITIGATION MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12572632
SYSTEMS AND METHODS FOR DATA SECURITY MODEL MODIFICATION AND ANOMALY DETECTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+35.4%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 705 resolved cases by this examiner. Grant probability derived from career allow rate.
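As a sanity check, the projection figures above are mutually consistent if one assumes the dashboard combines the career allow rate with the relative interview lift and caps the result just below 100%. The formula and the cap are assumptions about how such a number could be derived; the platform's actual methodology is not documented here.

```python
# Hypothetical reconstruction of the dashboard's projection arithmetic.
# The multiplicative-lift formula and the 99% cap are assumptions,
# not the platform's documented method.

base_allow_rate = 545 / 705   # career grants / resolved cases ≈ 77%
interview_lift = 0.354        # +35.4% relative lift with interview
cap = 0.99                    # displayed probabilities top out at 99%

with_interview = min(base_allow_rate * (1 + interview_lift), cap)
```

Under these assumptions, 545/705 rounds to the displayed 77%, and applying the +35.4% lift exceeds 100%, so the capped value matches the displayed 99%.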
