Prosecution Insights
Last updated: April 19, 2026
Application No. 18/807,682

PAGE TABLE HOOKS TO MEMORY TYPES

Non-Final OA • §103 • §DP
Filed: Aug 16, 2024
Examiner: PAPERNO, NICHOLAS A
Art Unit: 2132
Tech Center: 2100 — Computer Architecture & Software
Assignee: Micron Technology, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 70% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 66%

Examiner Intelligence

Career Allow Rate: 70% — grants above average (193 granted / 275 resolved; +15.2% vs TC avg)
Interview Lift: -3.8% — minimal (with vs. without interview, based on resolved cases with interview)
Avg Prosecution: 2y 5m (typical timeline)
Total Applications: 296 across all art units (21 currently pending)

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 60.4% (+20.4% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 275 resolved cases
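The headline figures above are simple arithmetic on the examiner's career counts. A minimal sketch of that arithmetic, assuming the granted/resolved counts from the report (the Tech Center average is back-derived from the "+15.2% vs TC avg" delta and is an inference, not a reported number):

```python
# Reproduce the dashboard's examiner statistics from the raw counts.
# granted/resolved come from the report; the TC average is back-derived.

granted, resolved = 193, 275

allow_rate = granted / resolved                 # career allowance rate
print(f"Career allow rate: {allow_rate:.1%}")   # ~70.2%, shown as 70%

# "+15.2% vs TC avg" implies a Tech Center 2100 average of roughly:
tc_avg = allow_rate - 0.152
print(f"Implied TC avg: {tc_avg:.1%}")          # ~55.0%

# Interview lift: grant probability with interview minus the baseline
lift = 0.66 - 0.70
print(f"Interview lift: {lift:+.1%}")           # ~-4%, report shows -3.8%
```

This also shows why the "With Interview" card (66%) sits below the baseline grant probability (70%): the examiner's interview lift is slightly negative.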

Office Action

§103 • §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/24/2026 has been entered.

Response to Amendment

The amendments filed 2/24/2026 have been accepted. Claims 1-18 remain pending. Claims 1, 12, and 18 are amended. Applicant's amendments to the claims have overcome each and every § 103 rejection previously set forth in the Final Office Action mailed 11/24/2025.

Double Patenting

Please note that MPEP § 804 states: “A complete response to a nonstatutory double patenting (NSDP) rejection is either a reply by applicant showing that the claims subject to the rejection are patentably distinct from the reference claims or the filing of a terminal disclaimer in accordance with 37 CFR 1.321 in the pending application(s) with a reply to the Office action (see MPEP § 1490 for a discussion of terminal disclaimers). Such a response is required even when the nonstatutory double patenting rejection is provisional. As filing a terminal disclaimer, or filing a showing that the claims subject to the rejection are patentably distinct from the reference application’s claims, is necessary for further consideration of the rejection of the claims, such a filing should not be held in abeyance. Only objections or requirements as to form not necessary for further consideration of the claims may be held in abeyance until allowable subject matter is indicated.
Replies with an omission should be treated as provided in MPEP § 714.03.”

In accordance with MPEP § 804 and § 714.03, the examiner will hold any response/amendments to this Office action as NON-COMPLIANT, without any additional extensions of time, that do not contain: an approved terminal disclaimer, or a complete and concise explanation of how the inventions are patentably distinct from one another.

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq.
for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-19 of U.S. Patent No. 11,494,311 and claims 1-22 of U.S. Patent No. 12,066,951, respectively.
Although the claims at issue are not identical, they are not patentably distinct from each other because the independent claims of the instant application are broader versions of those presented in the patents, and the dependent claims of the application are the same as those presented in Patent 12,066,951 and contain limitations presented in the independent claims of Patent 11,494,311 as well as some of the dependent claims.

Instant Application | Patent 11,494,311 | Patent 12,066,951

1. (Currently Amended) A system comprising: a first memory device of a first memory type having a first latency; a second memory device of a second memory type having a second latency, wherein the second latency is different from the first latency; and at least one processing device configured to: monitor data usage by an application that is running on the processing device; and based on monitoring the data usage, transfer data for the application from a first address range associated with the first memory device to a second address range associated with the second memory device, wherein the data is transferred while the application is running.

3. The system of claim 1, further comprising a multiplexer within a memory module including the first and second memory devices, wherein the data for the application is transferred through the multiplexer.

14.
A system comprising: a first memory device of a first memory type having a first latency; a second memory device of a second memory type having a second latency, wherein the second latency is less than the first latency; a page table storing page table entries that map virtual addresses to physical addresses in memory devices of different memory types, the memory devices including the first memory device and the second memory device; at least one processing device; metadata storing the first latency, the second latency, a first address range of the virtual addresses corresponding to the first memory type, and a second address range of the virtual addresses corresponding to the second memory type, the metadata being accessible by the at least one processing device; and memory containing instructions configured to instruct the at least one processing device to: monitor priorities of a first and a second application, wherein the priorities are based on data usage patterns by the first and the second application; access the metadata to obtain the first and the second address range and the first and the second latency; after determining that a priority of the first application is lower than a priority of the second application, assign the first application to use the first address range and the second application to use the second address range; and after determining that the priority of the first application has increased, transfer data for the first application from the first address range to the second address range through a multiplexer within a memory module including the first and second memory device.

1.
A system comprising: a first memory device; a second memory device; and at least one processing device configured to: monitor data usage by an application; based on monitoring the data usage, transfer data for the application from a first address range associated with the first memory device to a second address range associated with the second memory device, wherein the data is transferred while the application is running; and a multiplexer within a memory module including the first and second memory devices, wherein the data for the application is transferred through the multiplexer.

2. The system of claim 1, wherein the pattern of usage is a frequency of read or write access to data.

13. The method of claim 12, wherein the data regarding the pattern of usage is at least one of a frequency of use or a time of last use.

2. The system of claim 1, wherein the data usage comprises a pattern of usage by the application.

4. The system of claim 3, wherein the processing device is further configured to: receive a read or write command from a host; and in response to receiving the read or write command, send at least one signal to control the multiplexer for transferring the data.

14. …transfer data for the first application from the first address range to the second address range through a multiplexer within a memory module including the first and second memory device.

5. The system of claim 1, wherein the processing device is further configured to: receive a read or write command from a host; and in response to receiving the read or write command, send at least one signal to control the multiplexer for transferring the data.

5. The system of claim 1, wherein the processing device is further configured to manage virtual pages stored in the first memory device and the second memory device, and the transferred data includes one or more of the virtual pages.

14.
… a page table storing page table entries that map virtual addresses to physical addresses in memory devices of different memory types…

18. …wherein the virtual page is transferred from the first memory device to the second memory device by the memory management unit based on the updated mapping data.

6. The system of claim 1, wherein the processing device is further configured to manage virtual pages stored in the first memory device and the second memory device, and the transferred data includes one or more of the virtual pages.

6. The system of claim 1, wherein the second memory type is dynamic random access memory, and the first memory type is non-volatile random access memory or flash memory.

7. The method of claim 6, wherein the first memory type is dynamic random access memory, and the second memory type is non-volatile random access memory or flash memory.

7. The system of claim 1, wherein the second memory device is dynamic random access memory, and the first memory device is non-volatile random access memory or flash memory.

7. The system of claim 1, wherein the data for the application is transferred from the first address range to the second address range within a memory module containing the first and second address ranges.

14. …assign the first application to use the first address range and the second application to use the second address range; and after determining that the priority of the first application has increased, transfer data for the first application from the first address range to the second address range through a multiplexer within a memory module including the first and second memory device.

8. The system of claim 1, wherein the data for the application is transferred from the first address range to the second address range within a memory module containing the first and second address ranges.

8.
The system of claim 1, wherein the processing device is further configured to update a page table in response to a change in allocation of memory to the application.

18. The system of claim 14, further comprising a memory management unit and a translation lookaside buffer, wherein the instructions are further configured to instruct the at least one processing device to: update mapping data in the translation lookaside buffer based on a change in memory type associated with the virtual page from the first memory type to the second memory type; wherein the virtual page is transferred from the first memory device to the second memory device by the memory management unit based on the updated mapping data.

9. The system of claim 1, wherein the processing device is further configured to update a page table in response to a change in allocation of memory to the application.

9. The system of claim 8, wherein the page table includes page table entries each associated with a respective virtual address and usage data, and wherein the respective usage data relates to read or write access at the respective virtual address.

10. The system of claim 9, wherein the page table includes page table entries each associated with a respective virtual address and usage data, and wherein the respective usage data relates to read or write access at the respective virtual address.

10. The system of claim 1, wherein the processing device is further configured to update a page table entry for a virtual page in a page table based on monitoring the data usage, and transferring the data comprises transferring the virtual page from the first memory device to the second memory device.

11. The system of claim 1, wherein the processing device is further configured to update a page table entry for a virtual page in a page table based on monitoring the data usage, and transferring the data comprises transferring the virtual page from the first memory device to the second memory device.

11.
The system of claim 1, wherein: monitoring data usage generates usage data; the usage data includes data regarding read or write access to a virtual page; and a page table entry for the virtual page in a page table is updated by a memory management unit using the usage data.

12. The system of claim 1, wherein: monitoring data usage generates usage data; the usage data includes data regarding read or write access to a virtual page; and a page table entry for the virtual page in a page table is updated by a memory management unit using the usage data.

12. (Currently Amended) A system comprising: a first memory device of a first memory type; a second memory device of a second memory type; and at least one processing device configured to: assign a first application running on the system to use a first address range associated with the first memory device; monitor data usage of the first application while the first application is running; and based on monitoring the data usage, transfer data for the first application from the first address range to a second address range associated with the second memory device, wherein the first application continues running during and after the data transfer.

14.
A system comprising: a first memory device of a first memory type having a first latency; a second memory device of a second memory type having a second latency, wherein the second latency is less than the first latency; a page table storing page table entries that map virtual addresses to physical addresses in memory devices of different memory types, the memory devices including the first memory device and the second memory device; at least one processing device; metadata storing the first latency, the second latency, a first address range of the virtual addresses corresponding to the first memory type, and a second address range of the virtual addresses corresponding to the second memory type, the metadata being accessible by the at least one processing device; and memory containing instructions configured to instruct the at least one processing device to: monitor priorities of a first and a second application, wherein the priorities are based on data usage patterns by the first and the second application; access the metadata to obtain the first and the second address range and the first and the second latency; after determining that a priority of the first application is lower than a priority of the second application, assign the first application to use the first address range and the second application to use the second address range; and after determining that the priority of the first application has increased, transfer data for the first application from the first address range to the second address range through a multiplexer within a memory module including the first and second memory device.

13.
A system comprising: a first memory device of a first memory type; a second memory device of a second memory type; a multiplexer within a memory module including the first and second memory devices; and at least one processing device configured to: assign a first application running on the system to use a first address range associated with the first memory device; and transfer, through the multiplexer, data for the first application from the first address range to a second address range associated with the second memory device, wherein the first application continues running during and after the data transfer.

13. The system of claim 12, further comprising a page table storing page table entries that map virtual addresses to physical addresses in memory devices of different memory types, wherein a first page table entry corresponding to the transferred data is updated so that a virtual address maps to the second address range instead of the first address range.

14… page table storing page table entries that map virtual addresses to physical addresses in memory devices of different memory types, the memory devices including the first memory device and the second memory device…

14. The system of claim 13, further comprising a page table storing page table entries that map virtual addresses to physical addresses in memory devices of different memory types, wherein a first page table entry corresponding to the transferred data is updated so that a virtual address maps to the second address range instead of the first address range.

14. The system of claim 13, further comprising a memory management unit (MMU) configured to access the page table entries to determine physical addresses in memory accessed by the first application.

15. The system of claim 14, further comprising a memory management unit (MMU) configured to access the page table entries to determine physical addresses in memory accessed by the first application.

15.
The system of claim 14, wherein each page table entry includes a memory type.

16. The system of claim 15, wherein each page table entry includes a memory type.

16. The system of claim 12, further comprising a memory management unit (MMU), wherein the transferred data includes at least one virtual page, and the data is transferred by the memory management unit based on an updated page table entry for the virtual page.

17. The system of claim 13, further comprising a memory management unit (MMU), wherein the transferred data includes at least one virtual page, and the data is transferred by the memory management unit based on an updated page table entry for the virtual page.

17. The system of claim 12, further comprising a memory management unit (MMU), wherein the data usage is a pattern of access to a virtual page of the first application managed by the MMU.

18. The system of claim 13, further comprising a memory management unit (MMU), wherein the first application accesses a virtual page managed by the MMU.

18. (Currently Amended) A method comprising: monitoring data usage by an application in a first memory device of a first memory type, wherein the application is running while the frequency of data usage is monitored; and based on monitoring frequency of the data usage, changing a main memory assignment of data for the application from a first address range associated with the first memory device to a second address range associated with a second memory device of a second memory type, wherein the main memory assignment is changed while the application is running.

1.
A method comprising: associating, by at least one processing device, a virtual page with a first memory type having a first latency; generating a page table entry to map a virtual address of the virtual page to a physical address in a first memory device of the first memory type; storing, using the page table entry, the virtual page at the physical address in the first memory device; storing a first address range of the virtual page and the first latency in metadata; monitoring a priority of an application, wherein the priority is based on data usage patterns by the application; accessing the metadata to obtain the first address range and the first latency, and further to obtain a second address range and a corresponding second latency, the second latency being less than the first latency; assigning the application to use the first address range after determining that the first latency is appropriate for the application; and transferring data for the application from the first address range to the second address range through a multiplexer within a memory module containing the first and second address range after determining that the priority of the application has increased.

19. A method comprising: monitoring data usage by an application in a first memory device, wherein the application is running while the data usage is monitored; and based on monitoring the data usage, changing a main memory assignment of data for the application from a first address range associated with the first memory device to a second address range associated with a second memory device, wherein the main memory assignment is changed while the application is running, wherein the data for the application is transferred through a multiplexer within a memory module including the first and second memory devices.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5, 6, and 8-18 are rejected under 35 U.S.C. 103 as being unpatentable over Klein (US PGPub 2014/0025923) in view of Ramini et al. (US PGPub 2020/0210333, hereafter referred to as Ramini), further in view of Kusbel et al. (US PGPub 2018/0217778, hereafter referred to as Kusbel).

Regarding claim 1, Klein teaches a system comprising: a first memory device of a first memory type having a first latency, a second memory device of a second memory type having a second latency (Fig. 1, 3, and 4 and Paragraphs [0026] and [0028] show that the system contains multiple memories and that each virtual page (defined by its identified virtual address) is associated with a type of memory, as seen from the TYPE column 52 in the page tables (meaning there are multiple different types of memories). As latency is a characteristic of memories, each memory would have some kind of latency associated with it), and transfer data for the application from a first address range associated with the first memory device to a second address range associated with the second memory device (Paragraphs [0035]-[0037] describe the method of performing a memory swap wherein an entry associated with one memory type can be changed to be associated with the other memory type.
The data and entry are then swapped between the first and second memory types, and the page table is updated to reflect the change. Paragraphs [0006] and [0019] show that at least one of the memories in question can be main memory, meaning the change can be a change in the main memory locations).

Klein does not teach wherein the second latency is different from the first latency, and at least one processing device configured to: monitor data usage by an application that is running on the processing device, and based on monitoring the data usage, transfer data for the application, wherein the data is transferred while the application is running.

Ramini teaches at least one processing device configured to: monitor data usage by an application that is running on the processing device, and based on monitoring the data usage, transfer data for the application, wherein the data is transferred while the application is running (Paragraph [0029] states that the data usage pattern of an application can be determined while a user interacts with it, meaning it would be running. These patterns are used to determine caching policies which govern data movement to and from the cache. Paragraph [0030] states the purpose of this is to result in faster application execution, meaning the application is running during and after the data transfers).
Since both Klein and Ramini teach transferring data from one memory to another, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the prior art according to known methods by modifying the teachings of Klein to take into account applications' usage patterns when determining where to store particular data, as taught in Ramini, to obtain the predictable result of having at least one processing device configured to: monitor data usage by an application that is running on the processing device, and based on monitoring the data usage, transfer data for the application, wherein the data is transferred while the application is running.

Klein and Ramini do not teach wherein the second latency is different from the first latency. Kusbel teaches wherein the second latency is different from the first latency (Paragraph [0002] states that multiple memories with varying latency can be looked at when deciding what memory to use). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Klein and Ramini to use the latency as taught in Kusbel so as to intelligently handle data in a data storage server to provide a balance between data transfer latency and cost of data storage (Kusbel, Paragraph [0009]).

Regarding claim 2, Klein, Ramini, and Kusbel teach all the limitations of claim 1. Klein further teaches wherein the pattern of usage is a frequency of read or write access to data (Paragraphs [0035]-[0037] discuss the swapping of pages in one memory type to that of another memory type, which involves determining the frequency of use of pages). The combination of, and reason for combining, the references are the same as those given in claim 1.

Regarding claim 5, Klein, Ramini, and Kusbel teach all the limitations of claim 1.
Klein further teaches wherein the processing device is further configured to manage virtual pages stored in the first memory device and the second memory device, and the transferred data includes one or more of the virtual pages (Fig. 3 and 4 and Paragraphs [0026] and [0028] show that each virtual page (defined by its identified virtual address) is associated with a type of memory, as seen from the TYPE column 52 in the page tables, meaning that the virtual pages are stored in the respective memory devices. Paragraphs [0035]-[0037], as stated in the rejection of claim 1, describe the process of swapping data, which includes the virtual pages). The combination of, and reason for combining, the references are the same as those given in claim 1.

Regarding claim 6, Klein, Ramini, and Kusbel teach all the limitations of claim 1. Klein further teaches wherein the second memory type is dynamic random access memory, and the first memory type is non-volatile random access memory or flash memory (Paragraphs [0019]-[0020] state that one or more of the memories can be DRAM and one or more of the other memories can be flash memory). The combination of, and reason for combining, the references are the same as those given in claim 1.

Regarding claim 8, Klein, Ramini, and Kusbel teach all the limitations of claim 1. Klein further teaches wherein the processing device is further configured to update a page table in response to a change in allocation of memory to the application (Paragraphs [0035]-[0037] describe the method of performing a memory swap wherein an entry associated with one memory type can be changed to be associated with the other memory type. The data and entry are then swapped between the first and second memory types, and the page table is updated to reflect the change). The combination of, and reason for combining, the references are the same as those given in claim 1.

Regarding claim 9, Klein, Ramini, and Kusbel teach all the limitations of claim 8.
Klein further teaches wherein the page table includes page table entries each associated with a respective virtual address and usage data, and wherein the respective usage data relates to read or write access at the respective virtual address (Fig. 3 and 4 and Paragraphs [0026] and [0028] show the columns of the entries in the page table associated with the data. This is also stored in the TLB, which is an address table that stores the addresses of data pages in a metadata table along with information about those pages. Fig. 3 and 4 and Paragraphs [0026]-[0027] show the LRU column, which indicates which entries are used the least and most). The combination of, and reason for combining, the references are the same as those given in claim 1.

Regarding claim 10, Klein, Ramini, and Kusbel teach all the limitations of claim 1. Klein further teaches wherein the processing device is further configured to update a page table entry for a virtual page in a page table based on monitoring the data usage (Paragraphs [0026]-[0027] show the LRU column, which indicates which entries are used the least and most, which also means that this information is updated based on the usage of the virtual pages). The combination of, and reason for combining, the references are the same as those given in claim 1.

Regarding claim 11, Klein, Ramini, and Kusbel teach all the limitations of claim 1. Ramini further teaches wherein: monitoring data usage generates usage data (Paragraphs [0029]-[0030], as stated in the rejection of claim 1; since the usage patterns can be tracked and used for caching policies, usage data would be generated to be used for the caching policies).
Klein further teaches the usage data includes data regarding read or write access to a virtual page, and a page table entry for the virtual page in a page table is updated by a memory management unit using the usage data (Paragraphs [0026]-[0027], as stated in the rejection of claim 10: the LRU data (usage data regarding read and write accesses) can be stored in a page table entry). The combination of, and reason for combining, are the same as those given in claim 1.

Regarding claims 12 and 13, these are the system claims associated with claims 1, 9, and 10. Since Klein, Ramini, and Kusbel teach all the limitations of claims 1, 9, and 10, and Ramini further teaches assigning a first application to use a first address range associated with the first memory device (Figs. 3 and 8 and Paragraphs [0096] and [0105]), they also teach all the limitations of claims 12 and 13; therefore the rejections of claims 1, 9, and 10 also apply to claims 12 and 13.

Regarding claim 14, Klein, Ramini, and Kusbel teach all the limitations of claim 13. Klein further teaches a memory management unit (MMU) configured to access the page table entries to determine physical addresses in memory accessed by the first application (Fig. 2 and Paragraphs [0018] and [0022] describe the memory management system (memory management unit) that stores and uses page tables to translate virtual addresses to physical addresses). The combination of, and reason for combining, are the same as those given in claim 1.

Regarding claim 15, Klein, Ramini, and Kusbel teach all the limitations of claim 14. Klein further teaches wherein each page table entry includes a memory type (Figs. 3 and 4 and Paragraphs [0026] and [0028] show that each virtual page (identified by its virtual address) is associated with a type of memory, as seen in the TYPE column 52 of the page tables). The combination of, and reason for combining, are the same as those given in claim 1.
Regarding claim 16, Klein, Ramini, and Kusbel teach all the limitations of claim 12. Klein further teaches a memory management unit (MMU) (Fig. 2 and Paragraphs [0018] and [0022], as stated in the rejection of claim 14), wherein the transferred data includes at least one virtual page, and the data is transferred by the memory management unit based on an updated page table entry for the virtual page (Figs. 3 and 4 and Paragraphs [0026] and [0028] show that each virtual page (identified by its virtual address) is associated with a type of memory, as seen in the TYPE column 52 of the page tables, meaning that the virtual pages are stored in the respective memory devices. Paragraphs [0035]-[0037], as stated in the rejection of claim 1, describe the process of swapping data, which includes the virtual pages). The combination of, and reason for combining, are the same as those given in claim 1.

Regarding claim 17, Klein, Ramini, and Kusbel teach all the limitations of claim 12. Klein further teaches a memory management unit (MMU) (Fig. 2 and Paragraphs [0018] and [0022], as stated in the rejection of claim 14), wherein the data usage is a pattern of access to a virtual page of the first application managed by the MMU (Paragraphs [0026]-[0027], as stated in the rejection of claim 10: the LRU data (usage data regarding read and write accesses) can be stored in a page table entry, which is managed by the memory management system). The combination of, and reason for combining, are the same as those given in claim 1.

Regarding claim 18, claim 18 is the method claim associated with claim 12.
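The MMU role invoked across claims 14-17 — walking page table entries to turn a virtual address into a physical one — can be sketched briefly. This is an editorial illustration under assumed 4 KiB pages; `translate` and the dict-based table are hypothetical, not Klein's implementation.

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages for illustration

def translate(page_table: dict, virtual_address: int) -> int:
    """Resolve a virtual address to a physical address via the page table,
    as an MMU does (cf. Klein Fig. 2, Paragraphs [0018] and [0022])."""
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[vpn]  # page-table lookup (TLB caching omitted)
    return frame * PAGE_SIZE + offset
```

Since the entry consulted here is the same one that carries the memory-type and usage columns in the rejections above, a single MMU lookup can simultaneously locate the page and inform which memory type currently holds it.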
Since Klein, Ramini, and Kusbel teach all the limitations of claim 12, and Klein further teaches main memory assignment (Paragraphs [0006], [0019], and [0021] state that the memory management system does not distinguish between memory types, including main memory, meaning that the data transfers can be to and from main memory), they also teach all the limitations of claim 18; therefore the rejection of claim 12 also applies to claim 18.

Claims 3, 4, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Klein, Ramini, and Kusbel as applied to claim 1 above, and further in view of Pax et al. (US PGPub 2017/0206036, hereafter referred to as Pax).

Regarding claim 3, Klein, Ramini, and Kusbel teach all the limitations of claim 1. Klein, Ramini, and Kusbel do not teach a multiplexer within a memory module including the first and second memory devices, wherein the data for the application is transferred through the multiplexer. Pax teaches a multiplexer within a memory module including the first and second memory devices, wherein the data for the application is transferred through the multiplexer (Fig. 1 and Paragraphs [0010] and [0013] show the memory module, which includes a non-volatile memory and several DRAMs, as well as several data multiplexers and a control and address multiplexer used to facilitate data transfer to and from the host and memory devices). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Klein, Ramini, and Kusbel to utilize the memory architecture of Pax so as to extend chip-kill functionality such that the host system can identify failed non-volatile memory devices, restore corrupted data resulting from the non-volatile memory device failure, and, in certain examples, continue to operate the system until the non-volatile memory device can be replaced (Pax, Paragraph [0009]).
Regarding claim 4, Klein, Ramini, Kusbel, and Pax teach all the limitations of claim 3. Pax further teaches wherein the processing device is further configured to: receive a read or write command from a host, and in response to receiving the read or write command, send at least one signal to control the multiplexer for transferring the data (Fig. 1 and Paragraphs [0010] and [0013], as stated in the rejection of claim 3: the multiplexers facilitate the data transfer between the memories and the host, meaning signals would need to be sent to them for them to operate. Paragraph [0022] states the host can send memory requests (read and write commands) to the memory devices). The combination of, and reason for combining, are the same as those given in claim 3.

Regarding claim 7, Klein, Ramini, and Kusbel teach all the limitations of claim 1. Klein further teaches wherein the data for the application is transferred from the first address range to the second address range (Paragraphs [0035]-[0037], as stated in the rejection of claim 1). Klein, Ramini, and Kusbel do not teach a memory module containing the first and second address ranges. Pax teaches a memory module containing the first and second address ranges (Fig. 1 and Paragraphs [0010] and [0013] show the memory module, which contains multiple different memories that have address ranges, as indicated by the address busses and control information). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Klein, Ramini, and Kusbel to utilize the memory architecture of Pax so as to extend chip-kill functionality such that the host system can identify failed non-volatile memory devices, restore corrupted data resulting from the non-volatile memory device failure, and, in certain examples, continue to operate the system until the non-volatile memory device can be replaced (Pax, Paragraph [0009]).
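The routing behavior attributed to Pax's multiplexers — steering a host read or write to whichever device on the module owns the addressed range — can be sketched functionally. This is an editorial illustration only; `route` and the range table are hypothetical names, not Pax's hardware design.

```python
def route(address: int, ranges: list) -> str:
    """Select which memory device on the module services an address,
    analogous to the control/address multiplexer steering in Pax Fig. 1.

    `ranges` is a list of (start, end, device_name) tuples; end is exclusive.
    """
    for start, end, device in ranges:
        if start <= address < end:
            return device
    raise ValueError(f"address {address:#x} outside module ranges")
```

In hardware this selection is done by multiplexer control signals rather than a loop, but the mapping from address range to device is the same idea that lets one module present both DRAM and non-volatile address ranges to the host.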
Response to Arguments

Applicant's arguments with respect to the claims have been considered but are moot because applicant amended the claims with the limitation "…wherein the data is transferred while the application is running" to overcome the prior rejections set forth in the Final Rejection mailed 11/24/2025. New reference Ramini has accordingly been incorporated into the rejection to address the amended limitation.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS A PAPERNO, whose telephone number is (571) 272-8337. The examiner can normally be reached Mon-Fri, 9:30-5 EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Hosain Alam, can be reached at 571-272-3978. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/NICHOLAS A. PAPERNO/
Examiner, Art Unit 2132

Prosecution Timeline

Aug 16, 2024: Application Filed
Aug 07, 2025: Non-Final Rejection (§103, §DP)
Nov 11, 2025: Response Filed
Nov 20, 2025: Final Rejection (§103, §DP)
Jan 26, 2026: Response after Non-Final Action
Feb 24, 2026: Request for Continued Examination
Mar 07, 2026: Response after Non-Final Action
Mar 20, 2026: Non-Final Rejection (§103, §DP) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602314: MEMORY EXPANSION METHOD AND RELATED DEVICE (granted Apr 14, 2026; 2y 5m to grant)
Patent 12585580: TECHNIQUES FOR A FRAGMENT CURSOR (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585406: WRITING AND READING DATA SETS TO AND FROM CLOUD STORAGE FOR LEGACY MAINFRAME APPLICATIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578884: DYNAMIC ONLINE CODE-RATE ALLOCATION ACCORDING TO WORDLINE NOISE FOR ADAPATIVE ECC IN SSD/UFS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12578904: METHOD FOR HANDLING ACCESS COMMANDS WITH MATCHING AND UNMATCHING ADDRESSES AND SOLID-STATE STORAGE DEVICE OPERATING THE SAME (granted Mar 17, 2026; 2y 5m to grant)
Based on 5 most recent grants.

Prosecution Projections

3-4
Expected OA Rounds
70%
Grant Probability
66%
With Interview (-3.8%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 275 resolved cases by this examiner. Grant probability derived from career allow rate.
