Prosecution Insights
Last updated: April 19, 2026
Application No. 18/089,830

VM Migration Using Memory Pointers

Status: Non-Final OA (§103)
Filed: Dec 28, 2022
Examiner: AQUINO, WYNUEL S
Art Unit: 2199
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 3 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 78% (above average) · 340 granted / 433 resolved · +23.5% vs TC avg
Interview Lift: +20.6% among resolved cases with interview
Typical Timeline: 3y 5m avg prosecution · 36 currently pending
Career History: 469 total applications across all art units

Statute-Specific Performance

§101: 17.5% (-22.5% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 5.9% (-34.1% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Based on career data from 433 resolved cases; Tech Center averages are estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/20/26 has been entered.

Response to Arguments

Applicant’s arguments with respect to claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC §103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim/s 1, 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsai (Pat. No. US 11,734,038) in view of Santhaye (Pub. No. US 2024/0168796). 
Claim 1, 16 Tsai teaches “a method for migrating a virtual machine (VM) from being hosted by a first processing element to being hosted by a second processing element, said method comprising determining that the VM is to be migrated from being hosted by the first processing element to being hosted by the second processing element, wherein the first processing element and the second processing element have access to a common pool of data storage resources, and wherein VM data for the VM is stored at a first storage location in the common pool of data storage resources; triggering a migration of the VM from initially being hosted by the first processing element to subsequently being hosted by the second processing element ([Col. 40, Lines 31-43] (176) At 1204, an instruction to migrate the virtualized resource from the source host to a target host may be received. In some implementations, the virtualized resource may be operating on the source host. In some cases, a migration instruction may be received from a network orchestrator. The migration instruction may include, for instance, a key for encoding data. In some cases, the source host and the target host may be located in different data centers that are separated via one or more external networks (e.g., the Internet). For instance, at least one of the source host or the target host may be located in an edge location. The primary replica and the secondary replica(s) may be located in the same data center and/or availability zone. [Fig. 11A] block storage storing replica); as a part of the migration of the VM, preventing the VM data from being transferred from the first storage location to a second storage location such that the VM data is caused to remain at the first storage location despite the migration of the VM to the second processing element ([Col. 
5, Lines 15-25] (32) To ensure that the block storage volume is maintained in the event of various network interruptions, at least two replicas of the block storage volume may be stored within the fleet. A “primary” replica of the block storage volume may directly receive and serve access requests from the client. A “secondary” replica of the block storage volume may receive, from the primary copy, duplicated write requests among the access requests.); as a part of the migration of the VM, generating or accessing a memory pointer that points to the first storage location in the common pool of data storage resources ([Col. 42, Lines 46-52] (187) At 1222, the block storage volume may be attached to a target block storage client associated with the virtualized resource as the virtualized resource is operating on a target host. In various implementations, the block storage volume may be attached (e.g., “multi-attached”) to both the source block storage client and the target block storage client, simultaneously.); causing the second processing element to host a new VM, the new VM being a migrated version of the VM ([Col. 9, Line 61 – Col. 10, Line 3] The network orchestrator 132 may cause the source host to migrate the virtualized resource to the target host. In some cases, the network orchestrator 132 may instruct the source host to transfer data associated with the virtualized resource to the target host. Further, in some cases, the network orchestrator 132 may receive a confirmation, from the target host, that the virtualized resource has been successfully migrated. Thus, the network orchestrator 132 may orchestrate the movement of virtualized resources throughout the environment 100.); and configuring the new VM to access the first storage location via the memory pointer, wherein the new VM uses the memory pointer to access the VM data, which continuously remained stored at the first storage location during the migration of the VM ([Col. 
35, Lines 38-42] Accordingly, a target block storage client 1118 may be established on the target host 1106, which may enable the virtual machine 202 to access the block storage data as the virtualized resource operates on the target host 1106.) and after the migration of the VM, such that the VM data remains at the first storage location even after the migration is complete ([Col. 38, Lines 14-35] (166) In general, once the virtual machine 202 is operating on the target host 1106, the primary replica 1112 is primarily accessed by the target block storage client 1118, rather than the source block storage client 1116. However, the source block storage client 1116 may remain configured to access the primary replica 1112 during the post-copy process. For example, the primary replica 1112 may include lease information that enables the primary replica 1112 to accept and/or respond to access requests from both the source block storage client 1116 and the target block storage client 1118, even if the primary replica 1112 may only receive access requests from the target block storage client 1118 as the virtual machine 202 is operating on the target host 1106. In some cases, the source block storage client 1116 and the target block storage client 1118 may each include lease information that enables each one of the source block storage client 1116 and the target block storage client 1118 to forward access requests to the primary replica 1112. For example, the source block storage client 1116 and the target block storage client 1118 may store routing information and/or at least one security key that enables the access requests to be received and parsed by the primary replica 1112.)”. However, Tsai may not explicitly teach the added limitations. 
Santhaye teaches “as a part of the migration of the VM, preventing the VM data from being transferred from the first storage location to a second storage location such that the VM data is caused to continuously remain, throughout an entirety of said migration, at the first storage location despite the migration of the VM to the second processing element, and such that the VM data continuously remains at the first storage location without being transferred or copied during said migration ([Fig. 2A-2C, 0067] Storage 30 may facilitate an enterprise storing its enterprise information and operational data 32 separate from computing resources provided by public computing resource providers 10A-10n. If enterprise 6, or the enterprise's computing system 4, determines to change from one of providers 10A-10n to another of providers 10A-10n for the providing of computing resources that augment private computing system 4, data 32 does not have to be transferred from the one public computing resources provider to the other. [0068] By an organization, such as enterprise 6, storing its data 32 at third-party storage facility 30 (the storage is operated by a ‘third-party’ in the sense that storage 30 is not part of public computing systems 8A-8n) the organization may change providers of computing resources dynamically, or almost ‘on-the-fly’, because data 32 stays at storage 30 and thus data egress costs are reduced, if not eliminated. [0071] FIG. 2C illustrates system 2 with workload 34 being rendered in solid lines and centered over, and connected to, provider 10B and thus to its corresponding public computing system 8B at time t.sub.0+t.sub.p. to indicate that the workload has been transitioned to use computing system resources from public computing system 8B. Data 32 has not transitioned, or migrated, from storage 30. 
Rather, data 32 remains at storage 30 and ‘follows’ workload 34 without moving from storage 30 as the workload migrates from donor computing system 8A to recipient computing system 8B as shown by the broken line connecting enterprise data block 32 and enterprise workload 34 at the workload's new, or recipient, public computing system 8B.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Santhaye with the teachings of Tsai in order to provide a system that teaches data of VM may stay at a current location during migration. The motivation for applying Santhaye teaching with Tsai teaching is to provide a system that allows for design choice. Tsai, Santhaye are analogous art directed towards migration methods. Together Tsai, Santhaye teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Santhaye with the teachings of Tsai by known methods and gained expected results. Claim 2, Tsai teaches “the method of claim 1, wherein the first processing element is a first system on a chip (SOC), and wherein the second processing element is a second SOC ([Col. 49, Lines 5-13](220) The cloud provider network 1602 can provide on-demand, scalable computing platforms to users through a network, for example allowing users to have at their disposal scalable “virtual computing devices” via their use of the compute servers 1604 (which provide compute instances via the usage of one or both of CPUs and GPUs, optionally with local storage) and block store servers 1606 (which provide virtualized persistent block storage for designated compute instances).)”. Claim 4, Tsai teaches “the method of claim 1, wherein the common pool of data storage resources includes a plurality of solid state drives (SSDs) ([Col. 
49, Lines 5-18] (220) The cloud provider network 1602 can provide on-demand, scalable computing platforms to users through a network, for example allowing users to have at their disposal scalable “virtual computing devices” via their use of the compute servers 1604 (which provide compute instances via the usage of one or both of CPUs and GPUs, optionally with local storage) and block store servers 1606 (which provide virtualized persistent block storage for designated compute instances). These virtual computing devices have attributes of a personal computing device including hardware (various types of processors, local memory, random access memory (“RAM”), hard-disk and/or solid state drive (“SSD”) storage), a choice of operating systems, networking capabilities, and pre-loaded application software.)”. Claim 5, Tsai teaches “the method of claim 4, wherein an interposer is logically disposed between (i) the first processing element and the second processing element and (ii) the plurality of SSDs ([Fig. 6] Tunnel and controller)”. Claim 11, Tsai teaches “the method of claim 1, wherein the first processing element operates using a first operating system (OS), and wherein the second processing element operates using a second OS ([Col. 33, Lines 12-21] While multiple virtual machines can run on one physical machine, each virtual machine typically has its own copy of an operating system, as well as the applications and their related files, libraries, and dependencies. Virtual machines are commonly referred to as compute instances or simply “instances.” Some containers can be run on instances that are running a container agent, and some containers can be run on bare-metal servers. Accordingly, a virtual resource connected to a volume through a client can include virtual machines and/or containers.)”. 
Claim 15, Tsai teaches “the method of claim 1, wherein the memory pointer is an existing memory pointer that the VM used to access the common pool of data storage resources while the VM was being hosted by the first processing element ([Col. 42, Lines 46-52] (187) At 1222, the block storage volume may be attached to a target block storage client associated with the virtualized resource as the virtualized resource is operating on a target host. In various implementations, the block storage volume may be attached (e.g., “multi-attached”) to both the source block storage client and the target block storage client, simultaneously.)”. Claim 20, “a method for migrating a virtual machine (VM) from being hosted by a first processing element to being hosted by a second processing element, said method comprising: determining that the VM is to be migrated from being hosted by the first processing element to being hosted by the second processing element, wherein: the first processing element and the second processing element have access to a common pool of data storage resources, VM data for the VM is stored at a first storage location in the common pool of data storage resources, and an interposer is logically disposed between (i) the first processing element and the second processing element and (ii) the common pool of data storage resources; triggering a migration of the VM from initially being hosted by the first processing element to subsequently being hosted by the second processing element; as a part of the migration of the VM, preventing the VM data from being transferred from the first storage location to a second storage location such that the VM data is caused to continuously remain, through an entirety of said migration, at the first storage location despite the migration of the VM to the second processing element, and such that the VM data continuously remains at the first storage location without being transferred or copied during said migration; as a part of the 
migration of the VM, generating or accessing a memory pointer that points to the first storage location in the common pool of data storage resources; causing the second processing element to host a new VM, the new VM being a migrated version of the VM; configuring the new VM to access the first storage location via the memory pointer, wherein the new VM uses the memory pointer to access the VM data, which continuously remained stored at the first storage location during the migration of the VM and after the migration of the VM, such that the VM data remains at the first storage location even after the migration is complete” is similar to claim 1 and claim 5 and therefore rejected with the same references and citations. Claim/s 3, 8, 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsai, Santhaye in view of Mullender. Claim 3, Tsai may not explicitly teach the limitations of the claim. Mullender teaches “the method of claim 2, wherein the first SOC is a first node in a multi-socket platform, and wherein the second SOC is a second node in the multi-socket platform ([0055] Computing system 420 may be a single-socket server including only CPU 422 or a multi-socket server including CPU 422 and one or more other CPUs.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Mullender with the teachings of Tsai, Santhaye in order to provide a system that teaches multi-socket platform. The motivation for applying Mullender teaching with Tsai, Santhaye teaching is to provide a system that allows for design choice. Tsai, Santhaye, Mullender are analogous art directed towards processing systems. Together Tsai, Santhaye, Mullender teach every limitation of the claimed invention. 
Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Mullender with the teachings of Tsai, Santhaye by known methods and gained expected results. Claim 8, Tsai may not explicitly teach the limitations of the claim. Mullender teaches “the method of claim 1, wherein the first processing element is a first node in a multi-socket platform and the second processing element is a second node in the multi-socket platform, and wherein the multi-socket platform is a partitioned platform ([0055] Computing system 420 may be a single-socket server including only CPU 422 or a multi-socket server including CPU 422 and one or more other CPUs. [Fig. 1] partitioned elements)”. Rationale to claim 3 is applied here. Claim 12, Tsai may not explicitly teach the limitations of the claim. Mullender teaches “the method of claim 1, wherein the first processing element is a first node in a multi-socket platform and the second processing element is a second node in the multi-socket platform, and wherein the multi-socket platform is a coherent platform ([0055] Computing system 420 may be a single-socket server including only CPU 422 or a multi-socket server including CPU 422 and one or more other CPUs. [Fig. 1] partitioned elements)”. Rationale to claim 3 is applied here. Claim/s 6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsai, Santhaye in view of Gooding (Pub. No. US 2019/0332318). Claim 6, Tsai may not explicitly teach the limitations of the claim. Gooding teaches “the method of claim 5, wherein the interposer causes the plurality of SSDs to appear to be contiguous to the first processing element and the second processing element ([0026] FIG. 1 depicts a typical shared checkpoint application 60 upon which the present invention improves. As shown in FIG. 
1, plural nodes 61 of an HPC system are depicted writing respective application data 64 (e.g., checkpoint data) to a parallel shared file 75. [0053] If, at 750, it is determined that the partial block is full, then at 755, the bsfcsAgent process writes the full, contiguous block to the corresponding data file in the SSD file system at the node.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Gooding with the teachings of Tsai, Santhaye in order to provide a system that teaches different types of memory. The motivation for applying Gooding teaching with Tsai, Santhaye teaching is to provide a system that allows for design choice. Tsai, Santhaye, Gooding are analogous art directed towards processing systems. Together Tsai, Santhaye, Gooding teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Gooding with the teachings of Tsai, Santhaye by known methods and gained expected results. Claim/s 7, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsai, Santhaye in view of Tsirkin (Pub. No. US 2018/0276145). Claim 7, 18, Tsai may not explicitly teach the limitations of the claim. Tsirkin teaches “the method of claim 1, wherein an encryption of the VM data remains unchanged during the migration of the VM ([0034] In one example, cryptographic initiation module 223 may initiate underlying support for the migration of the encrypted content without an intent to actually migrate the encrypted data. Instead, cryptographic initiation module 223 may use the migration functionality to cause the encrypted data to be encrypted using the common cryptographic input so that data deduplication component 124 can detect and remove duplicate data.)”. 
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Tsirkin with the teachings of Tsai, Santhaye in order to provide a system that teaches the status of Tsai memory during migration. The motivation for applying Tsirkin teaching with Tsai, Santhaye teaching is to provide a system that allows for design choice. Tsai, Santhaye, Tsirkin are analogous art directed towards processing systems. Together Tsai, Santhaye, Tsirkin teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Tsirkin with the teachings of Tsai, Santhaye by known methods and gained expected results. Claim/s 9 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsai, Santhaye in view of Sainath (Pub. No. US 2018/0136985). Claim 9, Tsai may not explicitly teach the limitations of the claim. Sainath teaches “the method of claim 1, wherein the first processing element and the second processing element operate under control of a single hypervisor ([0050] In embodiments, a single hypervisor may manage the plurality of physical servers at block 408. Generally, the hypervisor may include a piece of computer software (e.g., program, application, firmware, module) or computer hardware to create and manage virtual machines. The hypervisor may be configured to create a number of virtual machines each having different operating systems and virtual operating platforms for managing deployed assets and workloads. In embodiments, aspects of the disclosure relate to using a single hypervisor to manage a plurality of physical servers.)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Sainath with the teachings of Tsai, Santhaye in order to provide a system that teaches different types of management systems. 
The motivation for applying Sainath teaching with Tsai, Santhaye teaching is to provide a system that allows for design choice. Tsai, Santhaye, Sainath are analogous art directed towards processing systems. Together Tsai, Santhaye, Sainath teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Sainath with the teachings of Tsai, Santhaye by known methods and gained expected results. Claim/s 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsai, Santhaye in view of Salle (Pub. No. US 2012/0016778). Claim 10, Tsai may not explicitly teach the limitations of the claim. Salle teaches “the method of claim 1, wherein the first processing element and the second processing element operate under control of a single operating system ([0023] Cloud services controller system 110 exists in one sense as an operating system for executing a plurality of services, via service portals 205, which provide capabilities for various roles and their responsibilities within cloud platform 100. Thus, within the single infrastructure of cloud services controller system 110, service portals 205 are presented in an interlinked and orchestrated fashion for a variety of role-players, including: customers, designers, administrators, operations manages, and business managers. [0016] The cloud services controller system provides a centralized and highly integrated system which supports a plurality of portals for exchanging information between the various roles in a cloud services market in order to fulfill responsibilities. This allows management of the lifecycle of user services which are broadly defined as ranging from physical servers to virtual servers, to databases, to web servers, to web applications and services, to accounts.)”. 
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Salle with the teachings of Tsai, Santhaye in order to provide a system that teaches different types of management systems. The motivation for applying Salle teaching with Tsai, Santhaye teaching is to provide a system that allows for design choice. Tsai, Santhaye, Salle are analogous art directed towards processing systems. Together Tsai, Santhaye, Salle teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Salle with the teachings of Tsai, Santhaye by known methods and gained expected results. Claim/s 13, 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsai, Santhaye in view of Bafna (Pub. No. US 2018/0157522). Claim 13, Tsai may not explicitly teach the limitations of the claim. Bafna teaches “the method of claim 1, wherein the memory pointer is a directory pointer ([0050] In particular embodiments, storage items such as files and folders in a file server namespace may be accessed by clients such as user VMs 101 by name, e.g., “\Folder-1\File-1” and “\Folder-2\File-2” for two different files named File-1 and File-2 in the folders Folder-1 and Folder-2, respectively (where Folder-1 and Folder-2 are sub-folders of the root folder). Names that identify files in the namespace using folder names and file names may be referred to as “path names.”)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Bafna with the teachings of Tsai, Santhaye in order to provide a system that teaches different types of client-based systems. The motivation for applying Bafna teaching with Tsai, Santhaye teaching is to provide a system that allows for design choice. Tsai, Santhaye, Bafna are analogous art directed towards processing systems. 
Together Tsai, Santhaye, Bafna teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Bafna with the teachings of Tsai, Santhaye by known methods and gained expected results. Claim 14, Tsai may not explicitly teach the limitations of the claim. Bafna teaches “the method of claim 1, wherein the memory pointer is a file system pointer ([0050] In particular embodiments, storage items such as files and folders in a file server namespace may be accessed by clients such as user VMs 101 by name, e.g., “\Folder-1\File-1” and “\Folder-2\File-2” for two different files named File-1 and File-2 in the folders Folder-1 and Folder-2, respectively (where Folder-1 and Folder-2 are sub-folders of the root folder). Names that identify files in the namespace using folder names and file names may be referred to as “path names.”)”. It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Bafna with the teachings of Tsai, Santhaye in order to provide a system that teaches different types of client-based systems. The motivation for applying Bafna teaching with Tsai, Santhaye teaching is to provide a system that allows for design choice. Tsai, Santhaye, Bafna are analogous art directed towards processing systems. Together Tsai, Santhaye, Bafna teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Bafna with the teachings of Tsai, Santhaye by known methods and gained expected results. Claim/s 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsai, Santhaye in view of Balasubramanian (Pat. No. US 8,285,817). 
Claim 17, Tsai teaches “the computer system of claim 16, wherein preventing the VM data from being transferred from the first storage location to the second storage location involves refraining from performing a data cut and paste operation or a data copy and paste operation ([Col. 5, Lines 15-25] (32) To ensure that the block storage volume is maintained in the event of various network interruptions, at least two replicas of the block storage volume may be stored within the fleet. A “primary” replica of the block storage volume may directly receive and serve access requests from the client. A “secondary” replica of the block storage volume may receive, from the primary copy, duplicated write requests among the access requests. Examiner notes Balasubramanian teaches a migration is referred to as copy and paste and therefore it would be obvious to one ordinarily skilled in the art the replica is maintained and therefore refrained from copy and paste [Col. 12, Lines 34] specifying whether the migration is a copy-paste or copy-paste-delete operation;)”. Claim/s 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tsai, Santhaye in view of Sumida (Pub. No. US 2017/0003977). Claim 19, Tsai may not explicitly teach the limitations of the claim. Sumida teaches “the computer system of claim 16, wherein the VM, upon initialization at the first processing element, is allocated virtual memory, which includes virtual memory corresponding to the first storage location, and wherein the new VM is allocated the same virtual memory that was allocated to the VM ([0041] For example, it is assumed that the virtual machines VM2, VM3, VM1, and VM4 have been booted up on the physical machine PM1 of FIG. 2. Due to this, the hypervisor HV1 allocates memory areas MEM_VM1 to MEM_VM4 in the memory 12 to the four virtual machines VM1 to VM4, respectively.)”. 
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to apply the teachings of Sumida with the teachings of Tsai, Santhaye in order to provide a system that teaches memory allocated to different nodes may be shared, such that the storage location is accessed by both client-based systems, as in Tsai. The motivation for applying Sumida teaching with Tsai, Santhaye teaching is to provide a system that allows for design choice. Tsai, Santhaye, Sumida are analogous art directed towards processing systems. Together Tsai, Santhaye, Sumida teach every limitation of the claimed invention. Since the teachings were analogous art known at the filing time of invention, one of ordinary skill could have applied the teachings of Sumida with the teachings of Tsai, Santhaye by known methods and gained expected results.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WYNUEL S AQUINO whose telephone number is (571)272-7478. The examiner can normally be reached 9AM-5PM EST M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock, can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WYNUEL S AQUINO/Primary Examiner, Art Unit 2199
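For orientation, the migration recited in claim 1 (a common storage pool, no data transfer, reuse of a memory pointer) reduces to a simple indirection pattern. A minimal sketch of that pattern, with all names hypothetical and not drawn from the application or the cited references:

```python
# Illustrative sketch only: the VM data stays at its original location in a
# common storage pool, and migration hands the existing pointer to the new
# VM rather than copying any data. All names here are hypothetical.

class SharedStoragePool:
    """Common pool of data storage accessible to both processing elements."""
    def __init__(self):
        self._data = {}

    def write(self, location, payload):
        self._data[location] = payload

    def read(self, location):
        return self._data[location]


class VM:
    def __init__(self, host, pointer):
        self.host = host        # processing element currently hosting the VM
        self.pointer = pointer  # points to the VM data in the shared pool


def migrate(vm, target_host):
    # No copy, no move: the migrated VM reuses the same pointer, so the
    # VM data remains at the first storage location throughout migration.
    return VM(host=target_host, pointer=vm.pointer)


pool = SharedStoragePool()
pool.write("loc-1", b"vm state")
old_vm = VM(host="pe-1", pointer="loc-1")

new_vm = migrate(old_vm, target_host="pe-2")
print(new_vm.host, pool.read(new_vm.pointer))  # pe-2 b'vm state'
```

The point of the pattern is that migration changes only which host holds the pointer; the data stored at the first location is never read, copied, or rewritten as part of the move.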

Prosecution Timeline

Dec 28, 2022: Application Filed
May 16, 2025: Non-Final Rejection (§103)
Aug 05, 2025: Response Filed
Oct 17, 2025: Final Rejection (§103)
Jan 20, 2026: Request for Continued Examination
Jan 27, 2026: Response after Non-Final Action
Feb 05, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596570
OPTIMIZED STORAGE CACHING FOR COMPUTER CLUSTERS USING METADATA
2y 5m to grant · Granted Apr 07, 2026
Patent 12596567
HIGH AVAILABILITY CONTROL PLANE NODE FOR CONTAINER-BASED CLUSTERS
2y 5m to grant · Granted Apr 07, 2026
Patent 12585568
METHODS AND APPARATUS TO PERFORM INSTRUCTION-LEVEL GRAPHICS PROCESSING UNIT (GPU) PROFILING BASED ON BINARY INSTRUMENTATION
2y 5m to grant · Granted Mar 24, 2026
Patent 12572675
ACCESSING FILE SYSTEMS IN A VIRTUAL ENVIRONMENT
2y 5m to grant · Granted Mar 10, 2026
Patent 12566639
TECHNIQUES FOR AUTO-TUNING COMPUTE LOAD RESOURCES
2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 99% (+20.6%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 433 resolved cases by this examiner. Grant probability derived from career allow rate.
