Prosecution Insights
Last updated: April 19, 2026
Application No. 18/818,019

REMOTE OPERATIONS FORENSICS

Non-Final OA §103
Filed
Aug 28, 2024
Examiner
PARK, SANGSEOK
Art Unit
2499
Tech Center
2400 — Computer Networks
Assignee
SentinelOne, Inc.
OA Round
1 (Non-Final)
84%
Grant Probability
Favorable
1-2
OA Rounds
2y 5m
To Grant
99%
With Interview

Examiner Intelligence

Grants 84% — above average
84%
Career Allow Rate
202 granted / 241 resolved
+25.8% vs TC avg
Strong +17% interview lift
+17.1%
Interview Lift
resolved cases with vs. without interview
Typical timeline
2y 5m
Avg Prosecution
16 currently pending
Career history
257
Total Applications
across all art units

Statute-Specific Performance

§101
6.2%
-33.8% vs TC avg
§103
62.7%
+22.7% vs TC avg
§102
15.7%
-24.3% vs TC avg
§112
7.2%
-32.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 241 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 12/31/2024 and 01/15/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 4-6, 10-11, 14-16 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Weingarten et al., US-20190052659-A1 (hereinafter “Weingarten ‘659”) in view of Babourine, US-20230090132-A1 (hereinafter “Babourine ‘132”) and Kane et al., US-20150356313-A1 (hereinafter “Kane ‘313”).

Per claim 1 (independent): Weingarten ‘659 discloses: A computer-implemented method for forensic artifact collection, the computer-implemented method comprising: collecting, by an agent installed on an endpoint, endpoint monitoring data; transmitting, by the agent over a network, the endpoint monitoring data to a cloud server; receiving, by the agent from the cloud server over the network, an indication of a security incident, wherein the security incident is determined based at least in part on the endpoint monitoring data (FIG.
1C, [0101], a network perimeter using an endpoint modeling and grouping management system (a cloud server) and agents (an agent installed on an endpoint) installed on endpoints ... the endpoint modeling and grouping management system 140 (the cloud server) can access and analyze the network traffic and/or activity data generated (collecting, by an agent installed on an endpoint, endpoint monitoring data; additionally, as cited in [0110], “an agent installed on one or more endpoint computer devices can be configured to collect data associated with and/or from the endpoints”) by the agents 145A, 145B, 145C, 145D, 145E, 145F, 1451, 1451 (collectively referred to herein as agents 145). In some embodiments, the endpoint modeling and grouping management system 140 can model and cluster groups such as logical groups of endpoints based on the network traffic and/or activity data (which is transmitted by the agent over a network to a cloud server) ... the endpoint modeling and grouping management system 140 can transmit the model (or the artificial intelligence software; an indication of a security incident) to end points and/or agents 145 (receiving, by the agent from the cloud server over the network) in a group such as logical group. The models can be used to assess activity on the endpoints in order to identify anomalies (forensic artifact collection) ... the endpoint modeling and grouping management system 140 can be a cloud-based system – that is, the endpoint modeling and grouping management system 140 is the cloud server; FIG. 
1J, [0113], the endpoint modeling and grouping management system 140 can generate and/or update the model (the security incident) for the network perimeter based on the newly collected data (the endpoint monitoring data) from the previously unmanaged devices 116 now having an agent 145K installed on the devices 116 – the security incident is determined based at least in part on the endpoint monitoring data); in response to receiving the indication of the security incident: identifying, by the agent, a set of forensic artifact types; determining, by the agent, a set of forensic artifacts, wherein each forensic artifact of the set of forensic artifacts is associated with at least one forensic artifact type of the set of forensic artifact types (FIG. 1C, [0102], the models can provide an indication of baseline activity. Agents and/or endpoints can receive the models (in response to receiving the indication of the security incident), identify baseline activity, and autonomously assess endpoint activity and/or network traffic for anomalies (identifying, by the agent, a set of forensic artifact types; determining, by the agent, a set of forensic artifacts; see [0110] for details of a set of forensic artifacts). Anomalies can be used to flag behavior that can indicate a security breach, such as malware or a computer virus. Thus, the agents can identify potential security breaches and react faster than traditional enterprise networks, where data can be transmitted to a cloud server for the cloud server to identify anomalies; FIG. 3, [0123], the endpoints are known to be associated with one or more particular characteristic (each forensic artifact of the set of forensic artifacts) indicative of a group (at least one forensic artifact type of the set of forensic artifact types) such as logical group.
In some embodiments, the endpoint modeling and grouping management system can train the AI component to identify these characteristics to create groups based on new data collected from the agents – in other words, a particular characteristic collected at each respective endpoint is evaluated (among a set of forensic artifacts), and a corresponding group is assigned (among a set of forensic artifact types) based on that characteristic; [0124], the groups can be used to determine the type of protection required to protect from malware, exploits, file-less attacks, script based attacks, live attacks, and/or the like; [0110], (examples of a set of forensic artifacts) an agent installed on one or more endpoint computer devices can be configured to collect data ... intercepting and/or analyzing executed processes on the endpoint and/or monitoring network traffic to and from the endpoint ... monitor inbound and outbound operations and events at the kernel level ... analyze local, system, and/or operating system activities within the endpoints ... the network can be an elastic grid of endpoints that can identify abnormal behavior and/or provide access restrictions). Weingarten ‘659 does not disclose but Babourine ‘132 discloses: transmitting, by the agent, the set of forensic artifacts to a destination server, wherein the set of forensic artifacts is stored in random access memory, wherein identifying the set of forensic artifact types, determining the set of forensic artifacts, and transmitting the set of forensic artifacts to the destination server are performed (FIG. 3, [0037], the anomalous state detector 320 (the agent) may send the anomalous data to a security server 322 (transmitting, by the agent, the set of forensic artifacts to a destination server) ... The security server 322 (the destination server) may then take appropriate remediation actions; [0035], The anomalous state detector 320 may be triggered when an in-memory state is updated ... 
analyzes one or more instances of the in-memory stored state 318 (identifying the set of forensic artifact types) to determine that the in-memory stored state 318 has become anomalous due to one or more anomalous API calls; as set forth in [0035]-[0036], the anomalous state detector 320 identifies the set of forensic artifact types based on the in-memory stored state 318, such as “a value exceeding a predetermined threshold,” “changed from a permissible to an impermissible value,” or “information about the client”; the anomalous state detector 320 then determines corresponding sets of forensic artifacts, including, for example, “failed login attempts,” “a first user-agent string is updated to a second user-agent string,” or “localization of the client”; because anomalous state detector 320 can be regarded as a computing device, it is reasonable to interpret that the detector 320 executes on system memory (e.g., RAM) and stores data therein during operation – that is, the set of forensic artifacts is stored in random access memory). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Weingarten ‘659 with the determination of whether the data is anomalous based on the in-memory stored state for the security server to take appropriate remediation actions as taught by Babourine ‘132 because the system can effectively protect against various types of potential security threats. Additionally, Babourine ‘132 is analogous to the claimed invention because it teaches that real-time data streams can be monitored both in real time and at a later date [0029]. 
Weingarten ‘659 in view of Babourine ‘132 does not disclose but Kane ‘313 discloses: transmitting the data to the destination server are performed without writing to a non-volatile storage medium of the endpoint ([0010], A system is provided for managing data on a server (the destination server), the server having server processor and a non-volatile storage associated therewith. A plurality of clients (the endpoint) are in communication with the server over a network, each of the clients having a device processor and a volatile and non-volatile storage associated therewith. The server processor is adapted to: receive data from a respective client in the network (transmitting the data to the destination server are performed) ... provide access to the data by authorized ones of the plurality of clients. The data accessed by the clients remains volatile on the client and is prevented from being written to non-volatile storage associated with the client – without writing to a non-volatile storage medium of the endpoint). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Weingarten ‘659 in view of Babourine ‘132 with the transmission of client’s data to a server providing access to the data by authorized ones while being prevented from being written to non-volatile storage of the client as taught by Kane ‘313 because the system would optimize processes of acquiring, storing and disseminating data for speed, integrity and security [ABSTRACT]. Additionally, Kane ‘313 is analogous to the claimed invention because it teaches that the data received by the first device from the server can be viewed on the device display but cannot be stored in any non-volatile storage medium in the first device [0007].
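The claim-1 flow at issue — on an incident indication, gather forensic artifacts entirely in RAM and hand them to a transmitter, never touching the endpoint's non-volatile storage — can be sketched roughly as follows. Everything here (the function names, the artifact types, the `send` stub) is hypothetical illustration, not code from the application or the cited references:

```python
import io
import json
from typing import Callable

# Hypothetical mapping: forensic artifact type -> collector callable.
# The artifact types and their sample data are invented for illustration.
ARTIFACT_SOURCES: dict[str, Callable[[], dict]] = {
    "process_list": lambda: {"pids": [101, 202]},
    "network_connections": lambda: {"conns": ["10.0.0.5:443"]},
}

def collect_artifacts(incident: dict, send: Callable[[bytes], None]) -> int:
    """On a security-incident indication, gather the indicated artifact
    types into a RAM-backed buffer and pass the serialized bundle to
    `send`. Nothing is written to disk: the buffer lives only in process
    memory. Returns the payload size in bytes."""
    artifact_types = incident.get("artifact_types", list(ARTIFACT_SOURCES))
    buffer = io.BytesIO()  # in-memory only; never flushed to a file
    for artifact_type in artifact_types:
        source = ARTIFACT_SOURCES.get(artifact_type)
        if source is None:
            continue  # unknown type in the indication; skip it
        record = {"type": artifact_type, "data": source()}
        buffer.write(json.dumps(record).encode() + b"\n")
    payload = buffer.getvalue()
    send(payload)  # transmit to the destination server
    return len(payload)
```

A real agent would stream this over TLS to the destination server; `send` is left as a callable so the sketch stays self-contained.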
Per claim 4 (dependent on claim 1): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 1 above, incorporated herein by reference. Weingarten ‘659 discloses: The computer-implemented method of claim 1, wherein the set of forensic artifacts is determined based at least in part on the endpoint monitoring data, wherein the endpoint monitoring data indicates at least one of a file operation or a network operation (FIG. 1H, [0110], an agent installed on one or more endpoint computer devices can be configured to collect data (based at least in part on the endpoint monitoring data) ... intercepting and/or analyzing executed processes on the endpoint and/or monitoring network traffic to and from the endpoint (a file operation or a network operation) ... monitor inbound and outbound operations and events at the kernel level ... analyze local, system, and/or operating system activities within the endpoints – the set of forensic artifacts ... the network can be an elastic grid of endpoints that can identify abnormal behavior and/or provide access restrictions). Per claim 5 (dependent on claim 1): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 1 above, incorporated herein by reference. Weingarten ‘659 discloses: The computer-implemented method of claim 1, wherein the indication of the security incident comprises an indication of a type of the security incident, wherein determining the set of forensic artifacts is based at least in part on the type of the security incident (FIG. 1C, [0102], the models can provide an indication of baseline activity. Agents and/or endpoints can receive the models (the indication of the security incident), identify baseline activity, and autonomously assess endpoint activity and/or network traffic for anomalies (determining the set of forensic artifacts). 
Anomalies can be used to flag behavior that can indicate a security breach, such as malware or a computer virus. Thus, the agents can identify potential security breaches and react faster than traditional enterprise networks, where data can be transmitted to a cloud server for the cloud server to identify anomalies; FIG. 3, [0123], the endpoint modeling and grouping management system can train the AI component to identify these characteristics to create groups based on new data collected from the agents; [0124], the groups can be used to determine the type of protection required to protect from malware, exploits, file-less attacks, script based attacks, live attacks, and/or the like – based at least in part on the type of the security incident, that is, it can be understood that the received models take into account a type of the security incident). Per claim 6 (dependent on claim 5): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 5 above, incorporated herein by reference. Weingarten ‘659 discloses: The computer-implemented method of claim 5, wherein to determine the set of forensic artifacts, the computer-implemented method further comprises: determining a pre-defined rule based on the type of the security incident, wherein the pre-defined rule specifies at least one of: a type of forensic artifact to collect, a log to collect, or a file to collect (FIG. 1C, [0102], the models can provide an indication of baseline activity. Agents and/or endpoints can receive the models, identify baseline activity, and autonomously assess endpoint activity and/or network traffic for anomalies (determining the set of forensic artifacts); FIG. 
3, [0123], the endpoint modeling and grouping management system can train the AI component to identify these characteristics to create groups based on new data collected from the agents; [0124], the groups can be used to determine the type of protection required to protect from malware, exploits, file-less attacks, script based attacks, live attacks, and/or the like; [0069], the system is configured to create, define, and/or dynamically alter a set of rules that determine which endpoint activities are identified as anomalous (determining a pre-defined rule based on the type of the security incident); FIG. 1H, [0110], an agent installed on one or more endpoint computer devices can be configured to collect data ... (specifies a type of forensic artifact to collect, a log to collect, or a file to collect) intercepting and/or analyzing executed processes on the endpoint and/or monitoring network traffic to and from the endpoint ... monitor inbound and outbound operations and events at the kernel level ... analyze local, system, and/or operating system activities within the endpoints ... the network can be an elastic grid of endpoints that can identify abnormal behavior and/or provide access restrictions). Per claim 10 (dependent on claim 1): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 1 above, incorporated herein by reference. Weingarten ‘659 discloses: The computer-implemented method of claim 1, wherein the set of forensic artifacts comprises forensic artifacts created from an initial time to a current time, wherein the initial time is correlated with a time when the agent began operating on the endpoint ([0110], an agent installed on one or more endpoint computer devices can be configured to collect data ... intercepting and/or analyzing executed processes on the endpoint and/or monitoring network traffic to and from the endpoint (the set of forensic artifacts) ... 
monitor inbound and outbound operations and events at the kernel level (the set of forensic artifacts) ... analyze local, system, and/or operating system activities within the endpoints (the set of forensic artifacts). The collection of data via an agent (the agent operating on the endpoint) can be performed on a periodic basis (for example, each microsecond, each millisecond, each second, each minute, each hour, each day, each month, each year, and/or the like) – created from an initial time to a current time, wherein the initial time is correlated with a time when the agent began operating on the endpoint). Per claim 11 (independent): The limitations of the claim(s) correspond(s) to features of claim 1 and the claim(s) is/are rejected for the reasons detailed with respect to claim 1. Per claim 14 (dependent on claim 11): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 11 above, incorporated herein by reference. The limitations of the claim(s) correspond(s) to features of claim 4 and the claim(s) is/are rejected for the reasons detailed with respect to claim 4. Per claim 15 (dependent on claim 11): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 11 above, incorporated herein by reference. The limitations of the claim(s) correspond(s) to features of claim 5 and the claim(s) is/are rejected for the reasons detailed with respect to claim 5. Per claim 16 (dependent on claim 15): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 15 above, incorporated herein by reference. The limitations of the claim(s) correspond(s) to features of claim 6 and the claim(s) is/are rejected for the reasons detailed with respect to claim 6.
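Claim 6's pre-defined rule — the type of the security incident selects which artifact types, logs, and files to collect — amounts to a lookup table. A minimal sketch (the incident-type keys and rule contents are invented for illustration; the claim only requires that such a rule exist):

```python
# Hypothetical rule table keyed by incident type. Each rule names the
# artifact types, logs, and files to collect, per the claim-6 language.
COLLECTION_RULES = {
    "ransomware": {
        "artifact_types": ["process_list", "file_operations"],
        "logs": ["security.evtx"],
        "files": ["/etc/crontab"],
    },
    "lateral_movement": {
        "artifact_types": ["network_connections"],
        "logs": ["auth.log"],
        "files": [],
    },
}

def rule_for(incident_type: str) -> dict:
    """Return the pre-defined collection rule for an incident type,
    falling back to a broad default when the type is unrecognized."""
    return COLLECTION_RULES.get(
        incident_type,
        {"artifact_types": ["process_list"], "logs": [], "files": []},
    )
```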
Per claim 20 (dependent on claim 11): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 11 above, incorporated herein by reference. The limitations of the claim(s) correspond(s) to features of claim 10 and the claim(s) is/are rejected for the reasons detailed with respect to claim 10. Claim(s) 2 and 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 as applied to claims 1 and 11 above, and further in view of SINGH, US-20200082188-A1 (hereinafter “SINGH ‘188”) and Revah et al., US-20060253624-A1 (hereinafter “Revah ‘624”). Per claim 2 (dependent on claim 1): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 1 above, incorporated herein by reference. Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 does not disclose but SINGH ‘188 discloses: The computer-implemented method of claim 1, wherein transmitting the set of forensic artifacts comprises: generating a plurality of batches, wherein each batch of the plurality of batches comprises a subset of the set of forensic artifacts; and transmitting each batch of the plurality of batches to the destination server. (FIG. 7, [0099], a method for ride monitoring ... The security device 106 collects the data (the media, the vehicle data, and the additional information) from the sensors (the set of forensic artifacts to be sent to the server 108; according to [0034], data utilized for security surveillance are collected, including, for example, “speed of the vehicle,” “location of the vehicle,” “time, date, timestamps”) and checks (at step 6); [0100], The security device 106 further transfers (at step 16) the created set of data (generating a plurality of batches) to the server 108 (transmitting the set of forensic artifacts to the destination server) ... 
On detecting a failure of transfer of the data to the server, the security device 106 uses (at step 20) a retry loop of configurable value to try and re-send the set of data (using a packet connection) multiple times until a pre-configured maximum number is reached; as illustrated in FIG. 7, the process of generating and transmitting a set of data is performed in a loop; it can be understood that, during each iteration of the loop, each batch of the plurality of batches is transmitted to the destination server). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 with the transmission of each data set generated during a repeated transmission loop, based on data obtained from the sensors, to the server as taught by SINGH ‘188 because sensitive data can be efficiently transmitted over a network to a server without loss. Additionally, SINGH ‘188 is analogous to the claimed invention because it teaches the security surveillance system 100 includes external entity(ies) 104, user device(s) 102, a security device 106, a server 108, and a storage device 110 [0030]. Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 and SINGH ‘188 does not disclose but Revah ‘624 discloses: wherein each batch of the plurality of batches has a batch size that is less than or equal to a maximum batch size (FIG. 1, [0024], managing a data storage backup or mirroring system. Various network elements, such as cache controllers, may associate and/or send each transaction they complete to an open batch of transactions (each batch of the plurality of batches), which batch of transactions may be transmitted to one or more remote mirror servers; [0047], a system controller or management module may monitor the size of any open batch (step 2100), and upon determining that an open batch is approaching a completion criteria (e.g. 
the batch size is at or above 80% of its maximum size) – that is, each batch has a batch size that is less than or equal to a maximum batch size – the controller may transmit a first synchronization signal to all the system elements contributing data to the batch (e.g. cache controllers), as depicted in step 2200; [0049], Once the newly opened batch approaches its completion criteria, the system controller may repeat). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 and SINGH ‘188 with the transmission of transaction data from each respective batch to remote mirror servers only upon satisfying the completion criteria as taught by Revah ‘624 because the system would prevent inconsistent situations and latency of synchronous mirroring and solve order-preserving in asynchronous systems in case of backing up massive data [0008]-[0017]. Additionally, Revah ‘624 is analogous to the claimed invention because it teaches preparing a mirror batch within a data processing system [ABSTRACT]. Per claim 12 (dependent on claim 11): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 11 above, incorporated herein by reference. The limitations of the claim(s) correspond(s) to features of claim 2 and the claim(s) is/are rejected for the reasons detailed with respect to claim 2. Claim(s) 8-9 and 18-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 and Schroeder, US-20180331886-A1 (hereinafter “Schroeder ‘886”). Per claim 8 (dependent on claim 1): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 1 above, incorporated herein by reference.
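The claim-2 batching mapped above from SINGH ‘188 (retry loop) and Revah ‘624 (maximum batch size) reduces to: split the artifact set into batches no larger than a cap, and retransmit a failed batch up to a configured limit. A sketch under those assumptions, with invented names throughout:

```python
from typing import Callable

def send_in_batches(artifacts: list, send: Callable[[list], bool],
                    max_batch_size: int = 3, max_retries: int = 2) -> int:
    """Transmit `artifacts` in batches of at most `max_batch_size` items,
    retrying each failed batch up to `max_retries` additional attempts
    (SINGH's pre-configured maximum). Returns the number of batches
    delivered; raises if any batch never succeeds."""
    delivered = 0
    for start in range(0, len(artifacts), max_batch_size):
        batch = artifacts[start:start + max_batch_size]  # <= max_batch_size
        for _attempt in range(max_retries + 1):
            if send(batch):  # True means the destination server acknowledged
                delivered += 1
                break
        else:
            raise RuntimeError("batch transfer failed after retries")
    return delivered
```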
Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 does not disclose but Schroeder ‘886 discloses: The computer-implemented method of claim 1, further comprising receiving an indication of a network address of the destination server (FIG. 2, [0063], perform the tasks of a user agent 230, such as verifying the browser has not been tampered with, verifying the health of the client device 102, managing a secure connection with the trust broker system 130 and/or the server system 140 (the destination server); [0065], a user verification module 232 for verifying the identity of the user of the client system 102 by, for example, requesting a password and/or user identification data; [0067], server agent connection data 236, including data necessary to connect to a server agent (FIG. 1, 150) needed to obtain requested data or services, such as the network address for the server agent (FIG. 1, 150) – receiving a network address of the destination server; FIG. 1, [0053], the server agent 150-2 is an application running on the server system 140-2; [0055], one or more server systems 140 store data (for example, the work product of attorneys) and provide services (for example an email service or a document backup service) that are accessible over a network). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 with the access to a server providing services after verifying the identity of the user of the client through a password and obtaining a network address of the server as taught by Schroeder ‘886 because the system would improve network communications performed over the Internet or any other computer network [0024]. 
Additionally, Schroeder ‘886 is analogous to the claimed invention because it teaches the client-server environment 100 includes a client system 102 and a remote system for securing organizational assets and communications over a network [0026]. Per claim 9 (dependent on claim 8): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 and Schroeder ‘886 discloses the elements detailed in the rejection of claim 8 above, incorporated herein by reference. Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 does not disclose but Schroeder ‘886 discloses: The computer-implemented method of claim 8, further comprising receiving a credential for accessing the destination server (FIG. 2, [0063], perform the tasks of a user agent 230, such as verifying the browser has not been tampered with, verifying the health of the client device 102, managing a secure connection with the trust broker system 130 and/or the server system 140 (the destination server); [0065], a user verification module 232 for verifying the identity of the user of the client system 102 by, for example, requesting a password and/or user identification data – receiving a credential for accessing the destination server; [0067], server agent connection data 236, including data necessary to connect to a server agent (FIG. 1, 150) needed to obtain requested data or services, such as the network address for the server agent (FIG. 1, 150); FIG. 1, [0053], the server agent 150-2 is an application running on the server system 140-2; [0055], one or more server systems 140 store data (for example, the work product of attorneys) and provide services (for example an email service or a document backup service) that are accessible over a network). 
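Claims 8 and 9 add only that the agent receives the destination server's network address and a credential for accessing it. Structurally that is a small configuration message; a hypothetical shape (field names and example values are assumptions, not from the application):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DestinationConfig:
    """Hypothetical message the agent might receive: where to send
    artifacts (claim 8) and how to authenticate (claim 9)."""
    address: str     # e.g. "https://forensics.example.internal:8443"
    credential: str  # e.g. a short-lived bearer token

def parse_destination(message: dict) -> DestinationConfig:
    """Validate the two required fields rather than trusting the
    incoming message blindly."""
    if not message.get("address") or not message.get("credential"):
        raise ValueError("destination message missing address or credential")
    return DestinationConfig(message["address"], message["credential"])
```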
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 with the access to a server providing services after verifying the identity of the user of the client through a password and obtaining a network address of the server as taught by Schroeder ‘886 because the system would improve network communications performed over the Internet or any other computer network [0024]. Per claim 18 (dependent on claim 11): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 discloses the elements detailed in the rejection of claim 11 above, incorporated herein by reference. The limitations of the claim(s) correspond(s) to features of claim 8 and the claim(s) is/are rejected for the reasons detailed with respect to claim 8. Per claim 19 (dependent on claim 18): Weingarten ‘659 in view of Babourine ‘132 and Kane ‘313 and Schroeder ‘886 discloses the elements detailed in the rejection of claim 18 above, incorporated herein by reference. The limitations of the claim(s) correspond(s) to features of claim 9 and the claim(s) is/are rejected for the reasons detailed with respect to claim 9.

Allowable Subject Matter

Claim(s) 3, 7, 13 and 17 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANGSEOK PARK whose telephone number is (571)272-4332. The examiner can normally be reached Monday-Friday 7:30-5:30 and Alternate Fridays 9:00 am-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PHILIP CHEA can be reached at (571)272-3951. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SANGSEOK PARK/Primary Examiner, Art Unit 2499

Prosecution Timeline

Aug 28, 2024
Application Filed
Feb 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603019
SENSOR DEVICE AND ENCRYPTION METHOD
2y 5m to grant · Granted Apr 14, 2026
Patent 12602492
MEMORY SYSTEM AND CONTROL METHOD
2y 5m to grant · Granted Apr 14, 2026
Patent 12596809
METHOD FOR DETECTING VULNERABILITIES OF TARGET APPLICATIONS, DEVICE, AND MEDIUM THEREOF
2y 5m to grant · Granted Apr 07, 2026
Patent 12596849
MANAGING TRUSTED PLATFORM MODULE (TPM) REPLACEMENT AT AN INFORMATION HANDLING SYSTEM
2y 5m to grant · Granted Apr 07, 2026
Patent 12585795
PROTECTION OF DATA BASED ON STANDARDS OF SECURITY PROTECTION
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+17.1%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 241 resolved cases by this examiner. Grant probability derived from career allow rate.
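How the 99% "with interview" figure is derived from the 84% base rate and the +17.1% lift is not disclosed on this page. One naive multiplicative reading — purely an assumption for illustration, not the tool's actual formula — lands in the same neighborhood:

```python
base_allow_rate = 0.84  # examiner's career allow rate (from this page)
interview_lift = 0.171  # reported lift for resolved cases with an interview

# Naive multiplicative model, capped at 100% -- an assumption, not the
# dashboard's documented methodology.
with_interview = min(base_allow_rate * (1 + interview_lift), 1.0)
print(round(with_interview * 100, 1))  # 98.4 -- near the 99% shown above
```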
