Prosecution Insights
Last updated: April 19, 2026
Application No. 17/976,596

HIGH PERFORMANCE NODE-TO-NODE PROCESS MIGRATION

Status: Final Rejection — §103
Filed: Oct 28, 2022
Examiner: AYERS, MICHAEL W
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Nutanix, Inc.
OA Round: 2 (Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% — above average (200 granted / 287 resolved; +14.7% vs TC avg)
Interview Lift: +56.2% — strong, among resolved cases with an interview vs. without
Avg Prosecution: 3y 4m typical timeline (37 applications currently pending)
Total Applications: 324 career history, across all art units
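The headline rate follows directly from the raw counts shown above (a quick arithmetic sanity check, not additional data):

```latex
\text{career allow rate} = \frac{200\ \text{granted}}{287\ \text{resolved}} \approx 0.697 \approx 70\%
```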

Statute-Specific Performance

§101: 14.8% (-25.2% vs TC avg)
§103: 47.3% (+7.3% vs TC avg)
§102: 2.9% (-37.1% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 287 resolved cases

Office Action

Final Rejection — §103
DETAILED ACTION

This office action is in response to claims filed 17 November 2025. Claims 1-35 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments regarding the rejections made under 35 U.S.C. 112 and 35 U.S.C. 101 are persuasive, and the rejections have been withdrawn. Applicant's arguments regarding the rejections made under 35 U.S.C. 103 have been fully considered but they are not persuasive.

On page 23 of the remarks, applicant argues: "As can be seen in Ramanathan, para. [0057] above, the concepts of 'page recency metadata' and page recency metadata 'based on when respective pages of the computing process were accessed' do not appear within the cited portion of Ramanathan. Instead, the cited portion of Ramanathan refers to 'page numbers' of 'dirty' or modified pages… "Ramanathan does not refer to access times or when pages were accessed."

The examiner respectfully disagrees. The claim recites in part: "retrieving page recency metadata…the page recency metadata being based on when respective pages of the computing process were accessed." Notably, the claimed page recency data merely requires that it be "based" on when respective pages of the computing process were accessed. No mention of "access time" is made, nor is any detail given as to what aspects of when the pages were accessed comprise the metadata.

Turning to RAMANATHAN, regarding the limitations at issue, RAMANATHAN teaches: "Method 800 begins at step 802, where VM migration module 138 in the source host tracks dirty pages in a changed bitmap. As discussed above, VM migration module 138 in the source will execute a pre-copy of the memory over several iterations prior to switch-over" ([0057]). In summary, a changed bitmap tracks which pages were recently accessed and changed, or "dirtied," only during a switch-over of a VM migration process. In other words, the dirty bitmap is concerned with "when" pages were accessed; if the pages were accessed during the switch-over, they are "dirty." The migration process would not mark a page as "dirty" if it was accessed outside of the switch-over, since there would be no need to keep track of such pages for migration purposes. As such, RAMANATHAN does retrieve metadata that is at least partially based on "when pages were accessed," and therefore the applicant's argument is not persuasive. Further, the remaining arguments stem from this issue and are similarly not persuasive.
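The examiner's reading is easier to see in code. Below is a minimal, hypothetical sketch of the changed-bitmap mechanism RAMANATHAN [0057] describes; the class name, fixed page count, and API are illustrative assumptions, not taken from the reference. A set bit records that a page was written during the current pre-copy window, so the bitmap encodes "when" a page was last touched even though it stores no timestamps:

```python
# Hypothetical sketch (not from RAMANATHAN): dirty-page tracking per
# pre-copy iteration. A set bit means "this page was written during the
# current window" -- positional, window-relative recency, no timestamps.

class DirtyBitmap:
    def __init__(self, num_pages: int):
        self.bits = [False] * num_pages

    def mark_dirty(self, page_no: int) -> None:
        """Called from the write-fault path when the guest touches a page."""
        self.bits[page_no] = True

    def drain(self) -> list[int]:
        """Walk the bitmap, return the changed page numbers, and reset it."""
        dirty = [n for n, bit in enumerate(self.bits) if bit]
        self.bits = [False] * len(self.bits)
        return dirty

bitmap = DirtyBitmap(num_pages=8)
bitmap.mark_dirty(2)      # guest writes page 2 during this iteration
bitmap.mark_dirty(5)
print(bitmap.drain())     # [2, 5] -- the pages accessed in this window
```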
Allowable Subject Matter

Claims 30-32 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Objections

Claim 30 is objected to because of the following informalities: In line 1, "clai2928" should read "claim 29". Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 7-11, 15-19, 23-24, and 33-35 are rejected under 35 U.S.C. 103 as being unpatentable over RAMANATHAN et al., Pub. No. US 2022/0066806 A1 (hereafter RAMANATHAN), in view of TSAI et al., Patent No. US 11,461,123 B1 (hereafter TSAI). RAMANATHAN and TSAI were cited previously.

Regarding claim 1, RAMANATHAN teaches: A non-transitory computer readable medium having stored thereon a sequence of instructions which, when stored in memory and executed by a processor cause the processor to perform acts executed in a multi-node computing environment ([0004] Embodiments include a non-transitory computer-readable storage medium comprising instructions that cause a computer system to carry out the above method) comprising… migrat[ing] a computing process from a first computing node to a second computing node by: copying contents of one or more pages of the computing process from the first node to the second computing node ([0017] Techniques for memory copy during virtual machine (VM) (i.e., "computing process") migration in a virtualized computing system are described; [0029] At step 307, the VM migration software performs the pre-copy phase. During the pre-copy phase, the VM migration software copies the VM memory from the source host to the destination host in an iterative process. The first iteration copies all the VM memory pages from source to destination), the computing process comprising a sequence of executable instructions that access code ([0025] Each VM 120 includes guest software (also referred to as guest code) that runs on the virtualized resources supported by hardware platform 106 (i.e., guest code is accessed by the virtual machine for execution)) or data organized into a plurality of pages ([0017] The hypervisor allocates a portion of the system memory to each VM ("VM memory"). The hypervisor logically formats the VM memory into VM memory pages and maintains page tables that map the VM memory pages to machine addresses in the system memory (i.e., each VM accesses data stored into pages of their allocated portion of system memory)); retrieving page recency metadata corresponding to the one or more pages of the computing process from an operating system of the first computing node, the page recency metadata being based on when respective pages of the computing process were accessed ([0057] Method 800 begins at step 802, where VM migration module 138 in the source host tracks dirty pages in a changed bitmap. As discussed above, VM migration module 138 in the source will execute a pre-copy of the memory over several iterations prior to switch-over (i.e., the changed bitmap represents a data structure of metadata corresponding to dirty pages, or pages which have recently been changed during the several iterations of a pre-copy phase); [0024] Hypervisor 118 includes a kernel 134, kernel modules 136, user modules 140, and virtual machine monitors (VMMs) 142;
[0026] Kernel 134 provides operating system functionality (e.g., process creation and control, file system, process threads, etc.), as well as CPU scheduling and memory scheduling across guest software in VMs 120, VMMs 142, kernel modules 136, and user modules 140 (i.e., the hypervisor kernel, or operating system, monitors, and thereby "retrieves," the changed bitmap via the VM migration module 138)); and copying at least a portion of the retrieved page recency metadata from the first computing node to the second computing node ([0057] At some point, VM migration module 138 initiates switch-over when some threshold amount of dirtied pages exist (step 804). At step 806, VM migration module 138 in the source walks the changed bitmap to determine the page numbers of the changed pages. At step 808, VM migration module 138 in the source transmits the page numbers of the changed pages (i.e., recency metadata of the changed pages in the changed bitmap) to VM migration module 138 in the destination host).

While RAMANATHAN discusses migrating a virtual machine by copying pages in pre- and post-copy phases, RAMANATHAN does not explicitly teach: receiving a request to migrate a computing process from a first computing node to a second computing node; and responding to the request to migrate the computing process from the first computing node to the second computing node by [migrating the computing process].

However, in analogous art that similarly teaches migrating virtual machines, TSAI teaches: receiving a request to migrate a computing process from a first computing node to a second computing node ([Column 23, Lines 1-2] At 502, an instruction to migrate a virtualized resource from a source host to a target host may be received); and responding to the request to migrate the computing process from the first computing node to the second computing node by [migrating the computing process] ([Column 23, Lines 28-46] At 506, the first portion of the data may be transmitted from the source host to the target host over at least one external network…At 510, a second portion of the data associated with the virtualized resource may be transmitted from the source host to the target host over the external network(s). The second portion of the data may include a remaining portion of the data that was not transmitted at 506 (i.e., transferring all data portions migrates the virtualized resource from source to target host)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined TSAI's teaching of responding to a migration request by migrating a virtualized resource with RAMANATHAN's teaching of migrating a virtualized resource, to realize, with a reasonable expectation of success, a system that migrates virtualized resources, as in RAMANATHAN, in response to receiving a request to migrate, as in TSAI. A person having ordinary skill would have been motivated to make this combination to enable a user more control over virtualized resources to improve performance (TSAI Column 7, Line 63-Column 8, Line 9).
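For readers less familiar with live migration, the combined teaching amounts to the loop sketched below: a migration request (TSAI) starts an iterative pre-copy (RAMANATHAN [0029]) that re-copies only dirtied pages until few enough remain for switch-over. The classes, threshold, and toy page store are illustrative assumptions, not drawn from either reference:

```python
# Hypothetical sketch of request-triggered iterative pre-copy; the toy
# page store, threshold, and all names are illustrative assumptions.

class SourceHost:
    def __init__(self, memory: dict[int, bytes]):
        self.memory = memory          # page number -> page contents
        self.dirty: set[int] = set()  # the "changed bitmap"

    def guest_write(self, page_no: int, data: bytes) -> None:
        self.memory[page_no] = data
        self.dirty.add(page_no)       # tracked for the next pre-copy pass

def migrate_on_request(source: SourceHost, dest: dict[int, bytes],
                       threshold: int = 2) -> None:
    """Respond to a migration request by pre-copying pages iteratively."""
    dest.update(source.memory)        # first iteration: copy every page
    source.dirty.clear()
    # (the guest keeps running and re-dirtying pages between iterations)
    while len(source.dirty) > threshold:
        for page_no in list(source.dirty):
            dest[page_no] = source.memory[page_no]   # re-copy dirty pages
        source.dirty.clear()
    # switch-over: pause the VM, copy the final dirty set, resume at dest
    for page_no in source.dirty:
        dest[page_no] = source.memory[page_no]

src = SourceHost({0: b"code", 1: b"data"})
dst: dict[int, bytes] = {}
migrate_on_request(src, dst)          # "request received" -> migration runs
print(dst)                            # {0: b'code', 1: b'data'}
```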
Regarding claim 2, RAMANATHAN further teaches: accessing, by a central processing unit (CPU) of the second computing node, at least some of the copied pages of the computing process, based at least in part on the copied page recency metadata ([0029] During switch-over, the VM migration software transfers the last set of VM memory pages, restores device states, and initiates VM resume on the destination. The VMM in the destination host then restores virtual CPU states and the guest software continues execution (i.e., in restoring virtual CPU states and continuing guest software execution, the destination accesses the transferred memory pages, including the recently dirty pages, as it did at the source)).

Regarding claim 3, RAMANATHAN further teaches: converting the at least a portion of the retrieved page recency metadata into a second representation ([0022] MMU 132 translates virtual addresses in the virtual address space (also referred to as virtual page numbers) into physical addresses of system memory 110 (also referred to as machine page numbers)).

Regarding claim 7, RAMANATHAN further teaches: the computing process comprises at least one of a virtual machine ([0017] Techniques for memory copy during virtual machine (VM) migration in a virtualized computing system are described), a guest operating system ([0063] Each virtual machine includes a guest operating system in which at least one application runs), and an executable container ([0063] It should be noted that these embodiments may also apply to other examples of contexts, such as containers).

Regarding claim 8, RAMANATHAN further teaches: wherein the page recency metadata is derived from most recent page access information ([0057] Method 800 begins at step 802, where VM migration module 138 in the source host tracks dirty pages in a changed bitmap. As discussed above, VM migration module 138 in the source will execute a pre-copy of the memory over several iterations prior to switch-over. At some point, VM migration module 138 initiates switch-over when some threshold amount of dirtied pages exist (step 804) (i.e., dirty pages are tracked in the changed bitmap and are determined based on information indicating that a given page was modified in a previous iteration, representing "most recent page access information")).

Regarding claims 9-11, 15-19, and 23-24, they comprise limitations similar to those of claims 1-3 and 7-8, and are therefore rejected for at least similar rationale.

Regarding claim 33, RAMANATHAN further teaches: an act of receiving a cutover signal at the first computing node, the copied computing process being executed by the second computing node based at least in part on the copied page recency metadata ([0028] At step 309, the VM migration software transfers a final set of VM memory pages from the source host to the destination host. At step 310, the VM migration software resumes the VM on the destination host. For example, the VM migration software resumes (e.g., starts) VM 120D on host computer 102D. Steps 306, 308, 309, and 310 of method 300 are referred to as "switch-over" during the migration process (i.e., the final transfer of memory pages represents a switch-over, or "cutover," signal causing the VM to resume execution at the destination host)).
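Claims 2 and 33 both concern the destination acting on the copied recency metadata once cutover occurs. A hypothetical sketch of what "executing based at least in part on the copied page recency metadata" could look like; the ordering heuristic and names are assumptions for illustration, not taken from RAMANATHAN:

```python
# Hypothetical sketch: after cutover, restore recently dirtied ("hot")
# pages first so the resumed guest finds its working set in place.
# The heuristic and function names are illustrative assumptions.

def resume_order(all_pages: list[int], recent_dirty: list[int]) -> list[int]:
    """Order pages so the most recently accessed ones are restored first."""
    hot = [p for p in recent_dirty if p in set(all_pages)]
    cold = [p for p in all_pages if p not in set(recent_dirty)]
    return hot + cold

print(resume_order(all_pages=[0, 1, 2, 3], recent_dirty=[3, 1]))
# [3, 1, 0, 2] -- hot pages first, then the rest
```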
Regarding claim 34, RAMANATHAN further teaches: wherein the cutover signal is received after a portion of all of the pages and a portion of all of the page recency metadata have been copied from the first computing node to the second computing node ([0057] At some point, VM migration module 138 initiates switch-over when some threshold amount of dirtied pages exist (step 804). At step 806, VM migration module 138 in the source walks the changed bitmap to determine the page numbers of the changed pages. At step 808, VM migration module 138 in the source transmits the page numbers of the changed pages to VM migration module 138 in the destination host. [0028] At step 309, the VM migration software transfers a final set of VM memory pages from the source host to the destination host. At step 310, the VM migration software resumes the VM on the destination host. For example, the VM migration software resumes (e.g., starts) VM 120D on host computer 102D. Steps 306, 308, 309, and 310 of method 300 are referred to as "switch-over" during the migration process (i.e., switching, or "cutting," over to the destination host occurs after at least a set of VM memory pages is transferred, as well as at least a portion of changed page numbers)).

Regarding claim 35, TSAI further teaches: deleting pages of the computing process ([Column 39, Lines 56-65] Once the virtualized resource data has been fully transferred to the target host 1106 and the block storage target host(s) 1120, the virtual machine 202 may be fully migrated to the target data center(s) 1108. Once the host orchestrator 142 identifies that the virtual machine 202 has been fully migrated to the target data center(s) 1108, the host orchestrator 142 may cause the source host 1102 and the block storage source host(s) 1110 to delete the remaining copy of the virtualized resource data (i.e., "pages of the computing process") stored in the source data center(s) 1104). RAMANATHAN further teaches: deleting page recency metadata from the first computing node in response to receiving the cutover signal ([0037] In embodiments, at step 503, VM migration module 138 clears the dirty page tracking bitmap after selecting all pages for transmission).

Claims 4, 12, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over RAMANATHAN, in view of TSAI, as applied to claims 1, 9, and 17 above, and in further view of QIAN, Pub. No. US 2016/0294549 A1 (hereafter QIAN). QIAN was cited previously.

Regarding claim 4, while RAMANATHAN and TSAI discuss transferring page recency metadata between computing nodes, RAMANATHAN and TSAI do not explicitly teach: the at least a portion of the retrieved page recency metadata in the second representation is encrypted at the first computing node before initiating the copying of the at least a portion of the retrieved page recency metadata to the second computing node.

However, in analogous art that similarly teaches copying of data from source to target nodes, QIAN teaches: the at least a portion of the retrieved [metadata] in the second representation is encrypted at the first computing node before initiating the copying of at least a portion of the retrieved [metadata] to the second computing node ([0062] In steps 305, the migration platform 103 retrieves a master key associated with the target database. In addition, the platform retrieves the master key associated with the source database based on the request. In another step 307, the platform 103 encrypts the envelope key (i.e., the envelope key represents metadata that is encrypted) based on the master key associated with the target database; [0063] In step 309 of process 308 (FIG. 3B), the migration platform 103 migrates the data from the source database to the target database based on the encryption of the envelope key, the execution of the one or more threads, or a combination thereof. Hence, the one or more threads may perform the migration of the data or sets thereof concurrently (i.e., the envelope key is encrypted prior to copying the encrypted envelope key to the target database)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined QIAN's teaching of encrypting metadata prior to migrating the metadata with RAMANATHAN and TSAI's teaching of migrating metadata indicative of recent page access, to realize, with a reasonable expectation of success, a system that, prior to transferring metadata indicative of recent page accesses, as in RAMANATHAN, encrypts the metadata, as in QIAN. A person having ordinary skill would have been motivated to make this combination to more efficiently and securely migrate metadata (QIAN [0001]).

Regarding claims 12 and 20, they comprise limitations similar to those of claim 4, and are therefore rejected for at least similar rationale.
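As a purely illustrative picture of claim 4's encrypt-before-copy step, the sketch below uses the third-party cryptography package's Fernet cipher as a stand-in for QIAN's envelope/master-key scheme; the variable names and serialized bitmap are assumptions, and Fernet is not the mechanism QIAN describes:

```python
# Hypothetical sketch of encrypt-before-copy in the spirit of QIAN's
# envelope-key scheme. Fernet (pip install cryptography) is a stand-in
# cipher chosen for brevity, not taken from the reference.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()            # key tied to the target node
cipher = Fernet(master_key)

recency_metadata = b"\x01\x00\x01\x01"        # e.g., a serialized bitmap
encrypted = cipher.encrypt(recency_metadata)  # encrypted at the first node
# ...transmit `encrypted` to the second node, then decrypt on arrival:
assert cipher.decrypt(encrypted) == recency_metadata
```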
Claims 5-6, 13-14, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over RAMANATHAN, in view of TSAI, as applied to claims 1, 9, and 17 above, and in further view of GUPTA et al., Pub. No. US 2015/0229717 A1 (hereafter GUPTA). GUPTA was cited previously.

Regarding claim 5, while RAMANATHAN and TSAI discuss transferring of page recency metadata, RAMANATHAN and TSAI do not explicitly teach: a first portion of the retrieved page recency metadata is transmitted to the second computing node before transmitting a second portion of the retrieved page recency metadata.

However, in analogous art that similarly discusses virtual machine migration, GUPTA teaches: a first portion of the retrieved page recency metadata is transmitted to the second computing node before transmitting a second portion of the retrieved page recency metadata ([0025] The target cache state migration application 208 may request the source cache state migration application 210 to send pre-fetch hints. Based on receiving the notification, the source cache state migration application 210 sends metadata regarding pre-fetch hints about data stored in the source cache 214 (e.g., pointers to data locations in the shared storage 206) to the target cache state migration application 208 executing on the target compute node 204 (i.e., metadata regarding pre-fetch hints represents the "first portion of metadata" that is transmitted to the target node cache); [0026] In an embodiment, the source cache state migration application 210 sends one or more rounds of additional pre-fetch hints (also referred to herein as intermediate pre-fetch hints) to the target cache state migration application 208 (i.e., subsequent rounds of transmitting metadata represent transmitting "second portions of metadata"). The sending of these additional pre-fetch hints may cause the target cache state migration application 208 to remove some of the previously pre-fetched blocks from the target cache 218 (e.g., if the source cache state migration application 210 indicates that they are "cold" or otherwise no longer required to be cached). In addition, the sending of additional pre-fetch hints may cause the target cache state migration application 208 to invalidate and re-fetch some of the pre-fetched blocks in the target cache 218 if, for example, they have been overwritten since the last round of hints from the source cache state migration application 210. Further, the sending of additional pre-fetch hints to the target cache state migration application 208 may cause new data blocks to be fetched from the shared storage 206 and stored in the target cache 218 (e.g., the source cache state migration application 210 indicates that the data blocks are "hot" and belong in the target cache 218) (i.e., metadata of the pre-fetch hints is indicative of how recently data has been accessed (hot or cold) or whether the data has been recently overwritten, and therefore represents "page recency metadata")).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined GUPTA's teaching of transferring page recency metadata between source and target nodes in multiple rounds when migrating virtual machines with RAMANATHAN and TSAI's teaching of transferring page recency metadata between nodes during virtual machine migration, to realize, with a reasonable expectation of success, a system that performs VM migration and transmits metadata to the destination node, as in RAMANATHAN, across multiple rounds, as in GUPTA. A person having ordinary skill would have been motivated to make this combination so that initial degradation of VM performance due to the local cache being unpopulated after migration is avoided (GUPTA [0003]).
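Claim 5's "first portion before second portion" maps onto GUPTA's rounds of pre-fetch hints, which could be modeled as a simple chunked generator (a sketch; the chunk size and names are assumed, not from GUPTA):

```python
# Hypothetical sketch of GUPTA-style hint rounds: recency metadata is
# sent in portions, the first portion before the second. Illustrative.
from typing import Iterator

def hint_rounds(metadata: list[int], chunk: int = 4) -> Iterator[list[int]]:
    """Yield the metadata in portions, earliest portion first."""
    for start in range(0, len(metadata), chunk):
        yield metadata[start:start + chunk]

hot_pages = [12, 7, 3, 44, 9, 21]
rounds = hint_rounds(hot_pages)
print(next(rounds))   # [12, 7, 3, 44] -- first portion transmitted first
print(next(rounds))   # [9, 21]        -- second portion follows
```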
Regarding claim 6, while RAMANATHAN and TSAI discuss transferring of page recency metadata, RAMANATHAN and TSAI do not explicitly teach: transmissions of one or more subsets of the retrieved page recency metadata are interleaved in between transmissions of the contents of the one or more pages of the computing process.

However, in analogous art that similarly discusses virtual machine migration, GUPTA teaches: transmissions of one or more subsets of the retrieved page recency metadata are interleaved in between transmissions of the contents of the one or more pages of the computing process ([0025] The target cache state migration application 208 may request the source cache state migration application 210 to send pre-fetch hints. Based on receiving the notification, the source cache state migration application 210 sends metadata regarding pre-fetch hints about data stored in the source cache 214 (e.g., pointers to data locations in the shared storage 206) to the target cache state migration application 208 executing on the target compute node 204. The target cache state migration application 208 pre-fetches data blocks from the shared storage 206 based on these initial hints and stores the pre-fetched data blocks in the target cache 218. As used herein, the term "block" refers to a group of bits that are retrieved and written to as a unit. [0026] In an embodiment, the source cache state migration application 210 sends one or more rounds of additional pre-fetch hints (also referred to herein as intermediate pre-fetch hints) to the target cache state migration application 208. The sending of these additional pre-fetch hints may cause the target cache state migration application 208 to remove some of the previously pre-fetched blocks from the target cache 218 (e.g., if the source cache state migration application 210 indicates that they are "cold" or otherwise no longer required to be cached). In addition, the sending of additional pre-fetch hints may cause the target cache state migration application 208 to invalidate and re-fetch some of the pre-fetched blocks in the target cache 218 if, for example, they have been overwritten since the last round of hints from the source cache state migration application 210. Further, the sending of additional pre-fetch hints to the target cache state migration application 208 may cause new data blocks to be fetched from the shared storage 206 and stored in the target cache 218 (e.g., the source cache state migration application 210 indicates that the data blocks are "hot" (i.e., metadata of the pre-fetch hints is indicative of how recently data has been accessed (hot or cold) or whether the data has been recently overwritten, and therefore represents "page recency metadata") and belong in the target cache 218 (i.e., transmission of the pre-fetch hints is alternated, or interleaved, with fetching, or transmission, of the data page blocks in each round)).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined GUPTA's teaching of transferring page recency metadata between source and target nodes in multiple rounds interleaved with fetching data when migrating virtual machines with RAMANATHAN and TSAI's teaching of transferring page recency metadata between nodes during virtual machine migration, to realize, with a reasonable expectation of success, a system that performs VM migration and transmits metadata to the destination node, as in RAMANATHAN, interleaved across multiple rounds with data fetching, as in GUPTA. A person having ordinary skill would have been motivated to make this combination so that initial degradation of VM performance due to the local cache being unpopulated after migration is avoided (GUPTA [0003]).

Regarding claims 13-14 and 21-22, they comprise limitations similar to those of claims 5-6, and are therefore rejected at least for similar rationale.
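Claim 6's interleaving reads on the alternation GUPTA describes between hint rounds and block fetches; schematically it could look like the following (the wire format and batch names are illustrative assumptions):

```python
# Hypothetical sketch of claim 6's interleaving: metadata subsets are
# transmitted between transmissions of page contents. Illustrative only.
from itertools import zip_longest

page_batches = [b"pages-0-3", b"pages-4-7", b"pages-8-11"]
hint_batches = [b"hints-A", b"hints-B"]

stream = []
for pages, hints in zip_longest(page_batches, hint_batches):
    stream.append(("pages", pages))
    if hints is not None:
        stream.append(("hints", hints))   # metadata interleaved between
print(stream)                             # page-content transmissions
```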
Claims 25-29 are rejected under 35 U.S.C. 103 as being unpatentable over RAMANATHAN, in view of TSAI, as applied to claims 1, 9, and 17 above, and in further view of KEGEL et al., Pub. No. US 2014/0181461 A1 (hereafter KEGEL).

Regarding claim 25, while RAMANATHAN and TSAI discuss determining page recency metadata, they do not explicitly teach: the page recency data being derived from operation of at least one of: a least recently used algorithm, a least frequently used algorithm, an adaptive replacement cache algorithm, a low inter-reference recency set replacement algorithm, and a least recently/frequently used algorithm.

However, in analogous art that similarly teaches determining page recency metadata, KEGEL teaches: the page recency data being derived from operation of at least one of: a least recently used algorithm, a least frequently used algorithm, an adaptive replacement cache algorithm, a low inter-reference recency set replacement algorithm, and a least recently/frequently used algorithm ([0004] The access and dirty bits are defined in the page table entries (PTEs) of guest and host page tables to record when the processor reads access bits from memory and writes dirty bits to memory as described by the PTE. This allows the operating system (OS) and hypervisor to implement least recently used (LRU) algorithms to find unused pages, and to find dirty pages to write out to a stable store. The use of access and dirty bits requires the host operating system (OS), (e.g., native OS or hypervisor), and guest operating systems to perform an exhaustive search (i.e., scan) of the page tables to determine which pages were used in the previous period. This information may be used to calculate the use-rate to identify unused or least-used pages to discard when there is memory pressure).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined KEGEL's teaching of using LRU algorithms to find dirty pages with the combination of RAMANATHAN and TSAI's teaching of determining dirty page tables, to realize, with a reasonable expectation of success, a system that determines dirty page tables, as in RAMANATHAN and TSAI, using an LRU algorithm, as in KEGEL. A person having ordinary skill would have been motivated to make this combination to better identify least used pages (KEGEL [0004]).

Regarding claim 26, while RAMANATHAN and TSAI discuss determining page recency metadata, they do not explicitly teach: wherein the page recency metadata is derived from page access frequency information.

However, in analogous art that similarly teaches determining page recency metadata, KEGEL teaches: wherein the page recency metadata is derived from page access frequency information ([0004] The access and dirty bits are defined in the page table entries (PTEs) of guest and host page tables to record when the processor reads access bits from memory and writes dirty bits to memory as described by the PTE. This allows the operating system (OS) and hypervisor to implement least recently used (LRU) algorithms to find unused pages, and to find dirty pages to write out to a stable store. The use of access and dirty bits requires the host operating system (OS), (e.g., native OS or hypervisor), and guest operating systems to perform an exhaustive search (i.e., scan) of the page tables to determine which pages were used in the previous period. This information may be used to calculate the use-rate (i.e., "frequency") to identify unused or least-used pages to discard when there is memory pressure).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined KEGEL's teaching of using algorithms derived from access frequency to find dirty pages with the combination of RAMANATHAN and TSAI's teaching of determining dirty page tables, to realize, with a reasonable expectation of success, a system that determines dirty page tables, as in RAMANATHAN and TSAI, using an algorithm based on access frequency, as in KEGEL. A person having ordinary skill would have been motivated to make this combination to better identify least used pages (KEGEL [0004]).
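The KEGEL passage cited throughout claims 25-29 describes the classic access-bit scan behind LRU-style policies. A hypothetical sketch of that scan follows; the PTE layout, period counter, and names are assumptions for illustration, not KEGEL's implementation:

```python
# Hypothetical sketch of a KEGEL-style access-bit scan: each period the
# OS walks the page table, records which pages were used, and clears the
# bits -- the raw input to an LRU-type recency policy. Illustrative only.

def scan_access_bits(pte_accessed: list[bool], last_used: list[int],
                     period: int) -> None:
    """Record the period each page was last seen used, then clear the bits."""
    for page_no, accessed in enumerate(pte_accessed):
        if accessed:
            last_used[page_no] = period
            pte_accessed[page_no] = False   # reset for the next period

accessed = [True, False, True, False]
last_used = [0, 0, 0, 0]
scan_access_bits(accessed, last_used, period=7)
lru_page = min(range(len(last_used)), key=last_used.__getitem__)
print(last_used, "least recently used page:", lru_page)  # [7, 0, 7, 0] ... 1
```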
Regarding claim 27, while RAMANATHAN and TSAI discuss determining page recency metadata, they do not explicitly teach: page recency metadata for respective pages is based on respective amounts of time since respective most recent accesses of respective pages.

However, in analogous art that similarly discusses determining page recency metadata, KEGEL teaches: page recency metadata for respective pages is based on respective amounts of time since respective most recent accesses of respective pages ([0004] The access and dirty bits are defined in the page table entries (PTEs) of guest and host page tables to record when the processor reads access bits from memory and writes dirty bits to memory as described by the PTE. This allows the operating system (OS) and hypervisor to implement least recently used (LRU) algorithms (i.e., LRU determines a relative shortest time since latest access) to find unused pages, and to find dirty pages to write out to a stable store. The use of access and dirty bits requires the host operating system (OS), (e.g., native OS or hypervisor), and guest operating systems to perform an exhaustive search (i.e., scan) of the page tables to determine which pages were used in the previous period. This information may be used to calculate the use-rate to identify unused or least-used pages to discard when there is memory pressure).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to have combined KEGEL's teaching of using algorithms derived from amounts of time since last access to find dirty pages with the combination of RAMANATHAN and TSAI's teaching of determining dirty page tables, to realize, with a reasonable expectation of success, a system that determines dirty page tables, as in RAMANATHAN and TSAI, using an algorithm based on time since last access, as in KEGEL. A person having ordinary skill would have been motivated to make this combination to better identify least used pages (KEGEL [0004]).

Regarding claim 28, KEGEL further teaches: the page recency metadata indicating when a page was most recently accessed relative to when other pages were most recently accessed ([0004] The access and dirty bits are defined in the page table entries (PTEs) of guest and host page tables to record when the processor reads access bits from memory and writes dirty bits to memory as described by the PTE. This allows the operating system (OS) and hypervisor to implement least recently used (LRU) algorithms (i.e., LRU determines a relative least amount of time since latest access between memory pages) to find unused pages, and to find dirty pages to write out to a stable store. The use of access and dirty bits requires the host operating system (OS), (e.g., native OS or hypervisor), and guest operating systems to perform an exhaustive search (i.e., scan) of the page tables to determine which pages were used in the previous period. This information may be used to calculate the use-rate to identify unused or least-used pages to discard when there is memory pressure).

Regarding claim 29, KEGEL further teaches: the page recency metadata indicating respective most recent respective accesses of respective pages of the computing process within pre-determined windows of time ([0004] The access and dirty bits are defined in the page table entries (PTEs) of guest and host page tables to record when the processor reads access bits from memory and writes dirty bits to memory as described by the PTE. This allows the operating system (OS) and hypervisor to implement least recently used (LRU) algorithms to find unused pages, and to find dirty pages to write out to a stable store. The use of access and dirty bits requires the host operating system (OS), (e.g., native OS or hypervisor), and guest operating systems to perform an exhaustive search (i.e., scan) of the page tables to determine which pages were used in the previous period (i.e., each period represents a "window of time"). This information may be used to calculate the use-rate to identify unused or least-used pages to discard when there is memory pressure).

Conclusion

THIS ACTION IS MADE FINAL.
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL W AYERS, whose telephone number is (571) 272-6420. The examiner can normally be reached M-F, 8:30 AM-5:00 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL W AYERS/
Primary Examiner, Art Unit 2195

Prosecution Timeline

Oct 28, 2022
Application Filed
Aug 13, 2025
Non-Final Rejection — §103
Nov 06, 2025
Applicant Interview (Telephonic)
Nov 12, 2025
Examiner Interview Summary
Nov 17, 2025
Response Filed
Jan 13, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547446
Computing Device Control of a Job Execution Environment Based on Performance Regret of Thread Lifecycle Policies
2y 5m to grant • Granted Feb 10, 2026
Patent 12498950
SIGNAL PROCESSING DEVICE AND DISPLAY APPARATUS FOR VEHICLE USING SHARED MEMORY TO TRANSMIT ETHERNET AND CONTROLLER AREA NETWORK DATA BETWEEN VIRTUAL MACHINES
2y 5m to grant • Granted Dec 16, 2025
Patent 12493497
DETECTION AND HANDLING OF EXCESSIVE RESOURCE USAGE IN A DISTRIBUTED COMPUTING ENVIRONMENT
2y 5m to grant • Granted Dec 09, 2025
Patent 12461768
CONFIGURING METRIC COLLECTION BASED ON APPLICATION INFORMATION
2y 5m to grant • Granted Nov 04, 2025
Patent 12423149
LOCK-FREE WORK-STEALING THREAD SCHEDULER
2y 5m to grant • Granted Sep 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 99% (+56.2%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 287 resolved cases by this examiner. Grant probability derived from career allow rate.
