Prosecution Insights
Last updated: April 19, 2026
Application No. 18/112,996

Methods and Systems for Protecting and Restoring Virtual Machines

Non-Final OA — §103, §DP
Filed: Feb 22, 2023
Examiner: CHEN, ZHI
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: NetApp, Inc.
OA Round: 2 (Non-Final)
Grant Probability: 61% (Moderate)
OA Rounds: 2-3
To Grant: 3y 3m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% (152 granted / 250 resolved; +5.8% vs TC avg)
Interview Lift: +40.5% (strong; resolved cases with interview vs. without)
Avg Prosecution: 3y 3m (typical timeline; 27 currently pending)
Total Applications: 277 (career history; across all art units)

Statute-Specific Performance

§101: 12.7% (-27.3% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 6.9% (-33.1% vs TC avg)
§112: 25.2% (-14.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 250 resolved cases
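The per-statute deltas above can be checked against the Tech Center baseline by back-solving. This is a hypothetical reconstruction using only the figures shown on this page, not independent USPTO data:

```python
# Back-solve the Tech Center average (the "black line") from each
# statute's examiner rate and its stated delta vs. the TC average.
# All numbers are taken from the chart above.
examiner_rate = {"101": 12.7, "103": 49.1, "102": 6.9, "112": 25.2}
delta_vs_tc = {"101": -27.3, "103": 9.1, "102": -33.1, "112": -14.8}

tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
          for s in examiner_rate}
# Every statute back-solves to the same 40.0% baseline, consistent
# with a single TC-average estimate drawn for each bar.
```

Notably, all four statutes imply the same 40.0% Tech Center estimate, which suggests the chart uses one common baseline rather than per-statute averages.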

Office Action

§103 §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to Applicant’s Amendment filed on 10/28/2025. Claims 1-20 are presented for examination. Claims 1-3, 7-10, 14-17 and 20 have been amended. Applicant’s amendments to the claims have overcome the §112 rejections set forth in the non-Final Office Action mailed 7/28/2025. In addition, Applicant’s Terminal Disclaimer submitted on 10/27/2025 has overcome the double patenting rejections set forth in the non-Final Office Action mailed 7/28/2025.

Examiner Notes

Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Information Disclosure Statement

The information disclosure statement (IDS) was submitted on 9/22/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Piduri et al. (US 20170364285 A1, recorded in the IDS submitted on 7/18/2024) in view of Kumar (US 20220164211 A1), Wang et al. (US 8566542 B1, hereafter Wang) and Pershin et al. (US 20150309890 A1, hereafter Pershin).

Regarding claim 1, Piduri discloses: a method executed by one or more processors (see claim 8; “wherein execution of the program instructions by one or more processors of a computer system causes the one or more processors to perform steps comprising”), comprising: a virtual machine (VM) from among a plurality of VMs for a restore operation to restore the VM from a storage snapshot (see [0022]; “such as the VMs 108 running on the hosts 110 in the distributed system 100 … A snapshot is a copy of a VM's disk file at a given point in time. Snapshots provide a change log for the virtual disk and are used to restore a VM to a particular point in time”), the plurality of VMs sharing a logical data store having a plurality of virtual volumes used for storing data for the VMs by a storage system (see Fig. 2 and [0023]; “two VMs, VM1 and VM2 … VM1 has three virtual volumes, VVOL1, VVOL2 and VVOL3. VVOL1 is used for configuration data of VM1, while VVOL2 and VVOL3 are used for virtual disks of VM1. VM2 has four virtual volumes, VVOL4, VVOL5, VVOL6 and VVOL7.
VVOL4 is used for configuration data of VM2, VVOL5 and VVOL6 are used for virtual disks of VM2, and VVOL7 is used for a snapshot of VM2. The virtual volumes of VM1 and VM2 are stored in the storage system 105A”. Also see Fig. 1, [0013], [0017] for similar descriptions like “The storage devices 136 are used to support logical storage units, such as datastores and virtual volumes (VVOLs). VVOLs are part of a provisioning feature for VMware vSphere® product that changes how VMs are stored and managed”) registered with the first plugin and a virtual appliance of a VM management system (see Fig. 1, [0018]-[0019]; “the storage management server 138 creates and manages virtual volumes, which are mapped to physical storage locations in the storage devices 136”, “the storage management server 138 includes a storage interface manager 140 … the storage interface manager 140 can initiate the creation of logical storage units, such as virtual volumes, in response to requests from the external components”. The storage interface manager 140 can be considered the combination of the claimed first plugin and the claimed virtual appliance; the storage components/resources within the storage system 105A are managed by the storage interface manager 140, and thus it is reasonable to conclude the storage system 105A is registered with the storage interface manager 140), the storage system using a first set of storage volumes to store data for a set of virtual volumes of the VM (see Fig. 2 and [0023]; “two VMs, VM1 and VM2 … VM2 has four virtual volumes, VVOL4, VVOL5, VVOL6 and VVOL7. VVOL4 is used for configuration data of VM2, VVOL5 and VVOL6 are used for virtual disks of VM2, and VVOL7 is used for a snapshot of VM2”); a logical storage object associated with the storage snapshot (see [0022]; “The replica virtual volumes may be created using one or more storage devices”.
Also see [0024]-[0028]; “When the virtual volumes of VM1 and VM2 are replicated for failure protection, the virtual volumes that belong to the same storage replication consistency group are replicated together to ensure write order fidelity”. The replica virtual volumes of VM2, such as VVOL4-R associated with VM4, are required to be created using one or more storage devices, i.e., an associated logical storage object. Note: since such one or more storage devices is used to recover VM2, it is a logical storage object associated with the storage snapshot of VM2); calling, by the first plugin, the virtual appliance to import the [renamed] logical storage object as a virtual volume; utilizing, by the virtual appliance, to import the virtual volume as a virtual disk (see [0019]; “external components, such as the hosts 110 and virtualization managers 126 at the primary site 102A … the storage interface manager 140 can initiate the creation of logical storage units, such as virtual volumes, in response to requests from the external components”. The requests can be considered the claimed calling. Also see Fig. 2, [0022], [0024]-[0028]; “The replica virtual volumes may be created using one or more storage devices” and “When the virtual volumes of VM1 and VM2 are replicated for failure protection, the virtual volumes that belong to the same storage replication consistency group are replicated together to ensure write order fidelity”. Again, a replica virtual volume of VM2, like VVOL4-R, is created from a storage device by the storage interface manager 140, and thus the related storage device is imported as a virtual volume and then imported as a virtual disk to be attached to VM4 to result in the recovered VM2 as shown by Fig. 2); attaching, by the first plugin, the virtual disk to a recovered VM for the restore operation (see [0022], [0024]-[0028]; “VM2 in the data center 104A can be restarted as VM4 in the data center 104B using VVOL4, VVOL5, VVOL6 and VVOL7 in the storage system 105B”. In order to allow VM4 to use the replica virtual volumes, the associated virtual disks of the replica virtual volumes must be attached to the VM used to recover the original VM).

Piduri does not disclose: generating, by a first plugin, a directory for the virtual machine (VM); renaming, by the first plugin, a logical storage object associated with the storage snapshot; an application programming interface (API) is utilized to import the virtual volume as a virtual disk, the API creating a virtual disk descriptor file that is stored within the directory; and the recovered VM is a VM snapshot generated from the VM for the restore operation.

However, Kumar discloses: generating a directory for a virtual machine (VM) from among a plurality of VMs for a restore operation to restore the VM (see [0059]; “creating a new vmdk path file which references that vVol in the virtual volume datastore 212”. It is understood that the vmdk path file described in [0059], at the filesystem level (see “application instance information and high level layout (e.g., file and filesystem information)” from [0053]), is required to be stored/located in a certain directory/folder. In this way, it is required to generate a certain directory/folder. Also see [0045] and [0048]-[0049]; “a greater number of protection copies are available resulting in better protection service level agreements (SLAs) and better recovery point objectives (RPOs)”, “generate application instance data snapshots 406 in accordance with storage system 210 as will be further explained.
It is to be appreciated that each snapshot is a copy of data of the application instance which is stored as part of a vVol in virtual volume datastore 212 during execution of the application instance on a given VM 202” and “recovering the application”. In a reasonable embodiment, the vmdk path file created at [0059] is for a VM or VM application recovery operation); utilizing an application programming interface (API) to import the virtual volume as a virtual disk, the API creating a virtual disk descriptor file that is stored within the directory; attaching the virtual disk to a VM (see [0059]; “(i) storing and importing the unmanaged vVol information into the vStorage APIs for Storage Awareness (VASA) repository of the vSphere® hypervisor platform; (ii) binding the imported vVols snapshot to the appropriate protocol endpoint; and (iii) creating a new vmdk path file which references that vVol in the virtual volume datastore 212. After the snapshot is converted to managed, the snapshot can be attached to any VM as a regular vVol (e.g., part of virtual volume datastore 212) and data inside the snapshot can be mounted back to (made accessible to) the VM where it is attached”).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the processes of attaching replica virtual volumes to the recovered VM from Piduri by including creating a vmdk to allow attaching the virtual volume to a VM from Kumar, since it would provide a mechanism for ensuring the attached virtual volumes or virtual disks are manageable (see [0059] from Kumar).

In addition, Wang discloses: attaching the virtual disk to a VM snapshot generated from the VM for the restore operation (see Figs. 1, 3, lines 50-59 of col. 4; “mount the virtual disks of VMs 102 in snapshot LUN 122 (or copy 122B) to the corresponding backup shadow VMs 116”. Also see lines 45-55 of col. 2; “Each backup shadow VM 116 is a dummy VM with a basic configuration, or the same configuration as the corresponding VM 102, that allows it to mount to a virtual disk residing on a Fibre Channel or iSCSI SAN and implement the hypervisor's off-host backup mode of operation”. The backup shadow VM 116 can reasonably be considered a VM snapshot generated from the original VM).

It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the generation of VM4 for the VM recovery operation from the combination of Piduri and Kumar by including the generation of a backup shadow VM for the VM recovery operation from Wang, since it would ensure the restored VM has the same configuration as the original VM (see lines 45-55 of col. 2 from Wang).

In addition, Pershin discloses: renaming a logical storage object associated with the storage snapshot (see [0038]; “the modification includes duplicating a logical storage device identifier and labeling each with corresponding source and peer/target attributes, respectively. For example, an initial device discovery may result in SSE 176 determining that storage A 120 includes a storage device identified by a device identifier. SSE duplicates the device identifier to serve as both the underlying storage device in storage A 120 and as representation of a peer/target storage device (e.g., as would be expected in stretched storage to be discovered in storage B 155 of datacenter B 140)”. The labeling with source and target attributes, in addition to duplicating the logical storage device identifier, can be considered a rename process for the target storage device, resulting in a renamed logical storage object).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the generation of replica virtual volumes from the combination of Piduri, Kumar and Wang by including the process of labeling with source and target attributes in addition to duplicating the logical storage device identifier from Pershin, and thus the combination of Piduri, Kumar, Wang and Pershin would disclose the missing limitations from Piduri, since it would provide a naming mechanism to easily identify a pair of source and target logical storage devices by unique identifier (see [0038] from Pershin).

Regarding Claim 2, the rejection of Claim 1 is incorporated and further the combination of Piduri, Kumar and Wang discloses: restoring, by the first plugin, the VM snapshot to generate a restored VM (see Figs. 1, 3, lines 45-55 of col. 2 and lines 50-59 of col. 4 from Wang; “Each backup shadow VM 116 is a dummy VM with a basic configuration, or the same configuration as the corresponding VM 102, that allows it to mount to a virtual disk residing on a Fibre Channel or iSCSI SAN and implement the hypervisor's off-host backup mode of operation” and “mount the virtual disks of VMs 102 in snapshot LUN 122 (or copy 122B) to the corresponding backup shadow VMs 116”); and providing virtual volume metadata to the virtual appliance for the restored VM (see [0022] and [0030] from Piduri; “a storage replication consistency group with a unique replication group identifier is created under the control of the storage interface manager … a new virtual volume with a unique virtual volume identifier is created under the control of the storage interface manager and assigned to the storage replication consistency group”. As explained in the rejection of claim 1 above, in order to create a replica virtual volume of VM2, such as VVOL4-R, a new virtual volume is required to be created, and thus the related unique replication group identifier, i.e., the claimed virtual volume metadata for the restored VM, is required to be provided).

Regarding Claim 3, the rejection of Claim 1 is incorporated and further the combination of Piduri, Kumar and Wang discloses: to take the storage snapshot, discovering, by the first plugin, from the VM management system the plurality of VMs (see Fig. 2, [0017] and [0022]-[0024] from Piduri; “The use of VVOLs provides the ability to snapshot a single VM” and “VVOL7 is used for a snapshot of VM2” and “the virtual volumes that belong to the same storage replication consistency group are replicated together to ensure write order fidelity”. In order to achieve the features of “snapshot a single VM” and “the virtual volumes that belong to the same storage replication consistency group are replicated together”, the storage interface manager 140 is required to discover the plurality of VMs including the VM to be recovered); and obtaining, by the first plugin, from the virtual appliance, metadata and storage layout of the set of virtual volumes used by the VM to store data (see [0019] from Piduri; “a storage interface manager 140, which can provide a storage awareness service for external components, such as the hosts 110 and virtualization managers 126 at the primary site 102A, so that these external components can obtain information about available storage topology, capabilities, and status of the storage system 105A”).
Regarding Claim 4, the rejection of Claim 3 is incorporated and further the combination of Piduri, Kumar and Wang discloses: discovering, by the first plugin, all storage volumes within the logical data store (see [0018]-[0019] from Piduri; “a storage interface manager 140, which can provide a storage awareness service for external components, such as the hosts 110 and virtualization managers 126 at the primary site 102A, so that these external components can obtain information about available storage topology, capabilities, and status of the storage system 105A. In addition, the storage interface manager 140 can initiate the creation of logical storage units, such as virtual volumes”. The storage interface manager 140 is required to discover all storage volumes within the storage system 105A in order to achieve the functionalities described by [0019]).

Regarding Claim 5, the rejection of Claim 4 is incorporated and further the combination of Piduri, Kumar and Wang discloses: identifying, by the first plugin, the first set of storage volumes from the discovered storage volumes (see [0019] and [0023]-[0024] from Piduri; “the storage interface manager 140 manages storage replication consistency groups for the logical storage units. In particular, the storage interface manager 140 can create and delete storage replication consistency groups, and provide any relationships between virtual volumes and storage replication consistency groups” and “the virtual volumes that belong to the same storage replication consistency group are replicated together to ensure write order fidelity … VVOL4, VVOL5, VVOL6 and VVOL7, which belong to the same storage replication consistency group, i.e., SRCG2, are replicated together”. In order to replicate virtual volumes belonging to the same storage replication consistency group, the storage interface manager 140 is required to identify the storage volumes associated with the same storage replication consistency group).

Regarding Claim 6, the rejection of Claim 1 is incorporated and further the combination of Piduri, Kumar and Wang discloses: provisioning, by the virtual appliance, the set of virtual volumes for the VM based on a service level defined by a storage profile (see [0029] from Piduri; “a request to create a new virtual volume for a VM is transmitted from the host to the storage interface manager. The request includes at least an indication that the new virtual volume is to be assigned to a new replication group and may further include storage requirements of the new virtual volume. … The storage requirements may include particular storage capabilities required for the virtual volume, which may be represented by different storage tiers, e.g., gold, silver or bronze. As an example, the storage requirements may include type of drive, rpm (rotations per minute) of the drive and RAID (redundant array of independent disks) configuration”).

Regarding Claim 7, the rejection of Claim 1 is incorporated and further the combination of Piduri, Kumar and Wang discloses: wherein the first plugin obtains VM files from the VM management system (see [0021]-[0022] from Piduri; “The storage interface manager 140 of the storage system 105B can provide a storage awareness service for external components, such as the hosts 110 and virtualization managers 126 at the secondary site 102B, so that these external components can obtain information about available storage topology, capabilities, and status of the storage system 105” and “These storage objects for VMs include virtual disks and other data objects to support the operation of the VMs, such as configuration files and snapshots”.
The first plugin obtains configuration or VM files, like files containing available storage topology, capabilities, and status of the storage system 105, from the storage interface manager 140) and uses a second API to obtain the first set of storage volumes from a storage operating system of the storage system (see [0029] from Piduri; “a communication connection is established between the host and the storage interface manager. The communication connection may be established using an application programming interface (API) of the storage interface manager 140”. The communication between the host and the storage interface manager is created via an API, and thus such an API is also used to obtain sets of storage volumes from the storage interface manager 140, i.e., the claimed storage operating system of the storage system).

Regarding Claim 8, Claim 8 is a product claim corresponding to method Claim 1 and is rejected for the same reason set forth in the rejection of Claim 1 above (note: Piduri also teaches the claimed “a non-transitory machine-readable storage medium”, see [0040] and claim 8 from Piduri).

Regarding Claim 9, Claim 9 is a product claim corresponding to method Claim 2 and is rejected for the same reason set forth in the rejection of Claim 2 above.

Regarding Claim 10, Claim 10 is a product claim corresponding to method Claim 3 and is rejected for the same reason set forth in the rejection of Claim 3 above.

Regarding Claim 11, Claim 11 is a product claim corresponding to method Claim 4 and is rejected for the same reason set forth in the rejection of Claim 4 above.

Regarding Claim 12, Claim 12 is a product claim corresponding to method Claim 5 and is rejected for the same reason set forth in the rejection of Claim 5 above.

Regarding Claim 13, Claim 13 is a product claim corresponding to method Claim 6 and is rejected for the same reason set forth in the rejection of Claim 6 above.
Regarding Claim 14, Claim 14 is a product claim corresponding to method Claim 7 and is rejected for the same reason set forth in the rejection of Claim 7 above.

Regarding Claim 15, Claim 15 is a system claim corresponding to method Claim 1 and is rejected for the same reason set forth in the rejection of Claim 1 above (note: Piduri also teaches the claimed “a system comprising: a memory containing … a processor coupled to the memory to execute the machine executable code to”, see [0012] from Piduri).

Regarding Claim 16, Claim 16 is a system claim corresponding to method Claim 2 and is rejected for the same reason set forth in the rejection of Claim 2 above.

Regarding Claim 17, Claim 17 is a system claim corresponding to method Claim 3 and is rejected for the same reason set forth in the rejection of Claim 3 above.

Regarding Claim 18, Claim 18 is a system claim corresponding to method Claims 4 and 5 and is rejected for the same reasons set forth in the rejections of Claims 4 and 5 above.

Regarding Claim 19, Claim 19 is a system claim corresponding to method Claim 6 and is rejected for the same reason set forth in the rejection of Claim 6 above.

Regarding Claim 20, Claim 20 is a system claim corresponding to method Claim 7 and is rejected for the same reason set forth in the rejection of Claim 7 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Yang et al. (US 20200133502 A1) discloses: storing the change tracking information in a .ctk file associated with a vmdk and a snapshot file in each virtual machine folder (see [0071]).

Ciano et al. (US 8055737 B2) discloses: identify and scan all the .vmdk files contained in each of the above-mentioned folders (see lines 34-37 of col. 3).

Rangachari et al. (US 8473777 B1) discloses: the LUN clone and the active LUN for the snapshot are renamed (see lines 41-45 of col. 13).

Thakkar et al. (US 20160105488 A1) discloses: The same object identifiers assigned to a VM might not be usable when that VM is migrated from private data center 202 to cloud computing system 250, as the object identifier might have already been assigned to another virtual object within cloud computing system 250, including virtual objects allocated to other tenants. For example, as shown in FIG. 2B, the object identifier “aa-11-bb” associated with VM 208 on the private data center side has already been assigned to a VM 224 allocated to another tenant within cloud computing system 250, resulting in duplicate and conflicting identifiers. Instead, the migrated VM 208 is assigned a different object identifier “aa-11-bc.” (see [0030]).

Shah et al. (US 20170316029 A1) discloses: The LUN (A) 412 may be identified as being part of the first consistency group (e.g., based upon the LUN (A) inode comprising the granset identifier 438, the hash table 444 mapping the LUN (A) 412 to the granset identifier 438, and/or the consistency group identifier 440 identifying the LUN (A) 412 as being within the first consistency group) and thus the first granset 430 is used to process access to the LUN (A) 412. Because the fencing property 436 specifies that rename operations are to be fenced, the rename operation 460 may be fenced 462 and thus blocked from being implemented upon the LUN (A) 412 (see [0071]).

Sridhara (US 20170060705 A1) discloses: the LUN 506 may be renamed from the name “LUN_ABC” to a new name “LUN_123”. An attempt may be made to replicate/mirror the LUN rename operation, as configuration mirroring data, from the first storage cluster 502 to the second storage cluster 508 for application to the replicated LUN 512 (see [0057]).

Buzzard et al. (US 20160085645 A1) discloses: a replication workflow may be created for the storage object based upon a change to the storage object by the storage operation (e.g., a new name for the first volume) (see [0022]).
Nagineni (US 20120260036 A1) discloses: the master host node renames the LUN 224 as DEF and informs the slave host node 204 of the name; slave host node 204 now refers to the LUN 224 by the name DEF (see [0036]).

Kotov et al. (US 20230236865 A1) discloses: a UDEV label translation process is utilized to solve name conflicts when local replication of a virtual tape file system occurs on a backend storage system (see Figs. 3B, 4 and [0040]-[0043]).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHI CHEN whose telephone number is (571)272-0805. The examiner can normally be reached on M-F from 9:30AM to 5:30PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, April Y. Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center and Private PAIR to authorized users only. Should you have questions about access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.

/Zhi Chen/
Patent Examiner, AU2196

/APRIL Y BLAIR/
Supervisory Patent Examiner, Art Unit 2196
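The restore sequence the rejection maps onto claim 1 (generate a directory, rename the snapshot's logical storage object, import it as a virtual volume, create a vmdk descriptor file in the directory, attach the disk to the recovered VM) can be sketched in miniature. Everything below is an illustrative mock: the class names, path scheme, and `-restore` renaming convention are invented for this sketch and are not the claimed implementation or any party's actual API.

```python
# Hypothetical mock of the claimed restore flow; names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Appliance:
    """Stand-in for the virtual appliance of the VM management system."""
    volumes: dict = field(default_factory=dict)

    def import_volume(self, obj_name: str) -> str:
        # Import a (renamed) logical storage object as a virtual volume.
        vvol = f"vvol:{obj_name}"
        self.volumes[vvol] = obj_name
        return vvol


@dataclass
class Plugin:
    """Stand-in for the claimed 'first plugin'."""
    appliance: Appliance
    directories: list = field(default_factory=list)

    def restore(self, vm: str, snapshot_obj: str) -> dict:
        # 1. Generate a directory for the VM being restored.
        directory = f"/vmfs/{vm}-restore"
        self.directories.append(directory)
        # 2. Rename the logical storage object backing the snapshot.
        renamed = f"{snapshot_obj}-restore"
        # 3. Call the appliance to import it as a virtual volume.
        vvol = self.appliance.import_volume(renamed)
        # 4. An API imports the vvol as a virtual disk, writing a
        #    descriptor file into the directory.
        descriptor = f"{directory}/{vm}.vmdk"
        # 5. Attach the disk to the recovered VM for the restore.
        return {"vm": vm, "disk": vvol, "descriptor": descriptor}
```

The sketch only shows the ordering of the five claimed steps; the dispute in the OA is precisely which references supply steps 1, 2, and 4.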

Prosecution Timeline

Feb 22, 2023
Application Filed
Jul 24, 2025
Non-Final Rejection — §103, §DP
Oct 28, 2025
Response Filed
Feb 25, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596561
SYSTEM AND METHOD OF DYNAMICALLY ASSIGNING DEVICE TIERS BASED ON APPLICATION
2y 5m to grant · Granted Apr 07, 2026
Patent 12596584
APPLICATION PROGRAMING INTERFACE TO INDICATE CONCURRENT WIRELESS CELL CAPABILITY
2y 5m to grant · Granted Apr 07, 2026
Patent 12591461
ADAPTIVE SCHEDULING WITH DYNAMIC PARTITION-LOAD BALANCING FOR FAST PARTITION COMPILATION
2y 5m to grant · Granted Mar 31, 2026
Patent 12585495
DISTRIBUTED COMPUTING PIPELINE PROCESSING
2y 5m to grant · Granted Mar 24, 2026
Patent 12579012
FORWARD PROGRESS GUARANTEE USING SINGLE-LEVEL SYNCHRONIZATION AT INDIVIDUAL THREAD GRANULARITY
2y 5m to grant · Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 61%
With Interview: 99% (+40.5%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 250 resolved cases by this examiner. Grant probability derived from career allow rate.
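One plausible way the "With Interview" figure could be derived is to add the +40.5-point interview lift to the 61% base rate and cap the result at 99%. This formula is a guess for illustration; the tool's actual model is not disclosed on this page:

```python
# Hypothetical capped-additive model for the interview-adjusted
# grant probability. The 0.99 cap is an assumption chosen because
# min(0.61 + 0.405, cap) reproduces the page's 99% figure.
def with_interview(base: float, lift: float, cap: float = 0.99) -> float:
    return min(base + lift, cap)
```

Under this assumption, `with_interview(0.61, 0.405)` hits the cap, matching the 99% shown above, while lower base rates would scale additively.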
