DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 03/20/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
Applicant's arguments filed on 10/28/2025 have been fully considered but they are not persuasive.
Regarding the inventor names of the cited prior art, the examiner relied on the PE2E search report, which provided the names as used. The examiner has updated the inventor names to match those in the final publications, as identified by the applicant. This has no impact on the rejections, since the article/patent numbers remain the same.
Regarding applicant’s arguments, applicant did not address how the teachings of the prior art map to the limitations of the instant claims.
Applicant argues that RDMA is a mechanism that bypasses the kernel stack, passing only through the network interface cards, and that the host OS therefore cannot have any knowledge of the access.
Examiner disagrees. Sadashiv [0011] and [0012] teach hypervisor 111 running on top of a host operating system, which itself runs on hardware platform 102. In such an embodiment, hypervisor 111 operates above an abstraction level provided by the host operating system. Sadashiv [0018] teaches that when VMs are powered on (i.e., instantiated), hypervisor 111 creates an in-memory file system for each of the VMs in this memory transfer region and communicates with other hosts in the cluster to create RDMA queue pairs. An RDMA queue pair includes a send queue and a receive queue. The send queue includes a pointer to a memory region from which data are sent, and the receive queue includes a pointer to a memory region into which data will be received. For example, when a VM is instantiated in a host, a pointer to the in-memory file system that the hypervisor created for the VM, and from which data will be sent, is placed in the send queue, and in each of the other hosts in the cluster, a pointer to the memory region for receiving the data is placed in the receive queue. Thus, the hypervisor running on the OS is aware of the data transfer and manages it. Sadashiv [0019-0022] teaches data transfer from a failed host (host 201) to a failover host (host 202) using RDMA, which involves the failed host running panic code and the hypervisor of the failover host copying data to the new memory region and reconstructing the page tables. Hence, the hypervisor/OS of the host copying the data is aware of the data transfer.
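For illustration only (not evidence of record; all names and structures are hypothetical), the hypervisor-managed queue-pair mechanism described above can be sketched as a minimal model in which the hypervisor itself creates the queue pair, so it necessarily knows about every transfer made through it:

```python
# Hypothetical sketch of Sadashiv's [0018] queue-pair setup: the send
# queue points at the region data is sent from, the receive queue at the
# region data lands in, and the hypervisor that built the pair records
# (and thus is aware of) each transfer.

class QueuePair:
    def __init__(self, send_region, recv_region):
        self.send_queue = [send_region]   # pointer to source memory region
        self.recv_queue = [recv_region]   # pointer to destination memory region

class Hypervisor:
    def __init__(self):
        self.transfers = []               # hypervisor's record of transfers

    def create_queue_pair(self, send_region, recv_region):
        return QueuePair(send_region, recv_region)

    def rdma_transfer(self, qp):
        src = qp.send_queue[0]
        dst = qp.recv_queue[0]
        dst[:] = src                      # one-sided copy; source untouched
        self.transfers.append((id(src), id(dst)))  # hypervisor knows about it

hv = Hypervisor()
vm_fs = [1, 2, 3, 4]                      # in-memory file system for a VM
remote = [0, 0, 0, 0]                     # receive region on another host
qp = hv.create_queue_pair(vm_fs, remote)
hv.rdma_transfer(qp)
print(remote)                             # [1, 2, 3, 4]
```

Because the hypervisor constructs the queue pair and performs the transfer, its record of transfers is non-empty after the copy, which is the point the examiner draws from Sadashiv.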
Applicant argues that an improper combination based on changing the principle of operation cannot be cured by combination with another reference that is silent as to either DMA or RDMA.
Examiner disagrees. As explained above, using RDMA does not change the principle of operation.
Claim Status
Claims 1, 4-11, and 13-21 are pending.
Claims 1, 4-11, and 13-21 are rejected under 35 U.S.C. 103.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 6, 7, 8, 11, 14, 15, 16, 17, 19 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over SADASHIV et al. (US 20230019814 A1) [Sadashiv] in view of BYUN (US 20210117122 A1) [Byun] and further in view of Burugula et al. (US 20060288187 A1) [Burugula].
Regarding claim 1, Sadashiv discloses:
An apparatus comprising: a hardware interface to couple to a memory, the memory having a first region and a second region, and the hardware interface capable to establish a direct memory access (DMA) to the memory for a peripheral device coupled to the memory for pages allocate for DMA by a host operating system (OS) (Sadashiv: [0009] FIG. 1 teaches a computer system having computer hardware components such as central processing units (CPUs) 104, random access memory (RAM) 106 used as system memory, one or more network interface controllers (NICs) 108 for connecting to a network, and one or more host bus adapters (HBAs) 110 for connecting to a storage system. Sadashiv: [0010] teaches NICs 108 including functionality to support RDMA transport protocols for transferring/migrating data from one region to another. The combination of NIC and HBA works as an interface to communicate with other peripheral devices, including the memory region of another computer system. Sadashiv: [0019] FIG. 2 teaches system memory of host 201 having memory region 231 and memory region 232 being part of memory region 1, and system memory of host 202 having memory region 241 and memory region 242 being part of memory region 2.
The examiner considers the combination of host 201 and host 202 to be one computing system with multiple computing nodes and a distributed shared memory 106 having region 1 attached to host 201 and region 2 attached to host 202.); circuitry capable to: migrate a pinned DMA page from the first region to the second region in response to transactional memory instructions, and maintain the direct access to the memory by the peripheral device to the [pinned DMA] page during migration of the [pinned DMA] page from the first region to the second region (Sadashiv: claim 1: A method of migrating a virtual compute instance from a first host computer to a second host computer using remote direct memory access (RDMA), the first host computer including a first network interface controller (NIC) and a first system memory having a first memory region allocated for memory transfer, and the second host computer including a second NIC and a second system memory having a second memory region allocated for memory transfer. Migration implies/involves maintaining access to the source region and the destination region so that data can be accessed and copied from the source and written to the destination.).
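As a hypothetical illustration of the claim mapping only (not drawn from any reference; all names invented), migrating a page between two regions while a device's access is maintained can be sketched with one level of indirection between the device-visible address and the backing location:

```python
# Hypothetical sketch: the device always reads through a remap entry, so
# the backing page can move from region 1 to region 2 while the device's
# access to the (logically same) page is maintained throughout.

region1 = {0: "page-data"}       # host physical region 1
region2 = {}                     # host physical region 2
remap = {"dev-addr": ("r1", 0)}  # device-visible address -> backing location

def device_read(addr):
    region, slot = remap[addr]
    return (region1 if region == "r1" else region2)[slot]

def migrate(addr):
    """Move the backing page from region 1 to region 2."""
    region, slot = remap[addr]
    region2[slot] = region1[slot]  # copy into the target region
    remap[addr] = ("r2", slot)     # redirect the device-visible address
    del region1[slot]              # reclaim the source frame

assert device_read("dev-addr") == "page-data"   # before migration
migrate("dev-addr")
assert device_read("dev-addr") == "page-data"   # access maintained after
```

The device-visible address never changes; only the backing location does, which is the sense in which access is "maintained during migration" in the mapping above.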
Sadashiv teaches claim 1 in a distributed computing system having multiple computing nodes and a distributed shared memory system. To cover the case of a computing system having a single computing node and a single memory system, the examiner adds Byun.
Byun discloses:
a hardware interface to couple to a memory, the memory having a first region and a second region, and the hardware interface capable to establish a direct memory access (DMA) to the memory for a peripheral device coupled to the memory for pages allocate for DMA by a host operating system (OS); circuitry capable to: migrate a pinned DMA page from the first region to the second region in response to transactional memory instructions, and maintain the direct access to the memory by the peripheral device to the [pinned DMA] page during migration of the [pinned DMA] page from the first region to the second region (Byun: [0005] teaches In an embodiment, A memory system, comprising: a nonvolatile memory including a first region and a second region; and a controller configured to manage a migration operation for a plurality of memory blocks included in the first region and the second region, wherein the controller comprises: a migration module configured to perform the migration operation by selecting one or more victim blocks based on a number of valid pages of each memory block included in the first region when there is no free storage space in the first region, selecting one or more destination blocks in the second region that respectively correspond to the number of victim blocks, and swapping type information of each of the one or more victim blocks in the first region for type information of a corresponding one of the one or more destination blocks in the second region.).
Both Sadashiv and Byun represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Sadashiv in view of Byun, as it represents a combination of known prior art elements according to known methods (the memory page migration system of Sadashiv using a system with a single non-volatile memory having two regions as used in Byun) to develop a more comprehensive memory system leading to a more robust computing system (see also Byun [0005]).
Sadashiv/Byun teaches all the limitations of claim 1. However, Sadashiv/Byun did not explicitly disclose using a pinned page.
Burugula discloses:
The apparatus of claim 1, wherein the page is a pinned page in the first region of the memory (Burugula: [0012] teaches a method and system for efficiently migrating in-use small pages to enable promotion of contiguous small pages into large pages in a memory environment that includes small pages pinned to real memory and/or small pages mapped to direct memory access (DMA) within real memory. Burugula teaches enabling the coalescing of contiguous small virtual memory pages to create large virtual memory pages by migrating in-use small memory pages, including those that are pinned and/or mapped to DMA.).
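The Burugula-style coalescing can be illustrated with a hypothetical sketch (invented for illustration, not taken from the reference) in which scattered in-use small pages are migrated into contiguous frames so they can be promoted to one large page:

```python
# Hypothetical sketch in the spirit of Burugula [0012]: in-use small
# pages (which in the reference may be pinned or DMA-mapped) are
# migrated into contiguous frames and then promoted to a large page.

SMALL_PER_LARGE = 4
frames = {10: "a", 37: "b", 52: "c", 71: "d"}   # scattered small pages

def coalesce(frames, base):
    """Migrate each small page to frames base..base+n-1, then promote."""
    new = {base + i: data
           for i, (_, data) in enumerate(sorted(frames.items()))}
    return {"large_page_base": base, "frames": new}

large = coalesce(frames, base=100)
assert len(large["frames"]) == SMALL_PER_LARGE
print(sorted(large["frames"]))   # contiguous frame numbers after migration
```

The promotion is possible precisely because migration produced a contiguous run of frames, which is the benefit the examiner cites from Burugula.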
Both Sadashiv/Byun and Burugula represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Sadashiv/Byun in view of Burugula, as it represents a combination of known prior art elements according to known methods (the memory page migration system of Sadashiv/Byun using a pinned page as used in Burugula) to develop a more efficient memory system leading to a more efficient computing system (see also Burugula [0012]).
Regarding claim 11, this is a system claim corresponding to the apparatus claim 1, and is rejected for the same reasons mutatis mutandis.
Regarding claim 19, this is a method claim corresponding to the apparatus claim 1, and is rejected for the same reasons mutatis mutandis.
Regarding claim 5, Sadashiv/Byun/Burugula discloses: The apparatus of claim 1, wherein the circuitry comprises a plurality of registers and the circuitry is to store in the plurality of registers host physical addresses of the page in the first region of the memory and host physical addresses of another page in the second region of the memory (Sadashiv: [0010] teaches NICs 108 including functionality to support RDMA transport protocols for transferring/migrating data from one region to another. The combination of NIC and HBA works as an interface to communicate with other peripheral devices, including the memory region of another computer system. The NIC's ability to transfer data from one memory region to another involves retaining and using the regions' addresses, and the logic/circuitry that retains the addresses used for data transfers is similar to registers. Byun: abstract, [0005] teaches a controller including a migration module configured to perform the migration operation by selecting one or more victim blocks based on a number of valid pages of each memory block included in the first region when there is no free storage space in the first region, and selecting one or more destination blocks in the second region that respectively correspond to the number of victim blocks. Migrating pages involves copying data and using the addresses of that data, and the logic/circuitry that holds the address of the data to be migrated is similar to a register.).
Regarding claim 14, this is a system claim corresponding to the apparatus claim 5, and is rejected for the same reasons mutatis mutandis.
Regarding claim 6, Sadashiv/Byun/Burugula discloses: The apparatus of claim 1, wherein the circuitry is to connect the peripheral device to the memory, and an IO memory management unit (IOMMU) page table is to translate a guest physical address of the peripheral device to a host physical address in the first region of the memory (Sadashiv: [0020-0021] teaches copying of the VM1 page tables into memory region 231 and copying of the VM2 page tables into memory region 232. After the page tables have been copied into memory regions 231, 232, NIC 108 of host 202 performs a single-sided RDMA read operation with reference to the established queue pairs to transfer the contents of memory region 231 into memory region 241 and the contents of memory region 232 into memory region 242, without involving the CPU of host 201. As a result, the VM1 page tables and the VM2 page tables are now resident in memory regions of host 202. After the page tables have been copied over, NIC 108 of host 202 performs additional single-sided RDMA read operations to transfer data pages of VM1 and VM2 from their locations in system memory of host 201 to the memory transfer region of host 202. The single-sided RDMA read operations specify the locations of the data pages of VM1 in the system memory of host 201, determined from the VM1 page tables transferred into memory region 241, and the locations of the data pages of VM2 in the system memory of host 201, determined from the VM2 page tables transferred into memory region 242. Hence, the NIC is similar to an IOMMU, which handles data transfer from the memory of one host to the memory of another host.
Byun: [0049-0050] teaches that, in order to determine the validity of each page, the GC/WL 42 may identify a logical address recorded in an out-of-band (OOB) area of each page and then compare an actual address of the page with an actual address mapped to a logical address obtained from the inquiry request of the MM 44, and a mapping table is updated through the update of the MM 44. The MM 44 manages a logical-physical mapping table (similar to a page table) and processes requests, such as inquiries and updates, generated by the HRM 46 and the GC/WL 42. The MM 44 stores the entire mapping table in a flash memory and caches mapping items according to the capacity of the memory 144. The migration module is similar to the IOMMU.).
Regarding claim 15, this is a system claim corresponding to the apparatus claim 6, and is rejected for the same reasons mutatis mutandis.
Regarding claim 7, Sadashiv/Byun/Burugula discloses: The apparatus of claim 6, wherein the circuitry is to update the IOMMU page table, wherein the update includes replacement of the host physical address of the first region of the memory with another host physical address of the second region of the memory (Sadashiv: [0024] teaches that after all contents of the data pages of the protected VM have been transferred and copied into new locations in its system memory, the failover host at step 348 reconstructs the page tables of the protected VM to reference the new locations in its system memory into which the data pages of the protected VM have been copied, and at step 350 writes the reconstructed page tables to its system memory. Byun: [0051] teaches the MM 44 (similar to a page table) performing the map update when the latest mapping table still indicates a previous actual address, thereby ensuring accuracy.).
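The claim-6/claim-7 IOMMU behavior discussed above can be illustrated by a hypothetical sketch (all addresses invented): a table translates a fixed guest physical address for the device, and the claim-7 update replaces the region-1 host physical address with a region-2 address after migration:

```python
# Hypothetical sketch of an IOMMU-style page table: the device keeps
# using the same guest physical address (GPA); the update swaps the
# backing host physical address (HPA) from region 1 to region 2.

iommu = {0x1000: 0xA000}        # GPA -> HPA in the first region

def translate(gpa):
    return iommu[gpa]

def update_mapping(gpa, new_hpa):
    iommu[gpa] = new_hpa        # replace the HPA without changing the GPA

assert translate(0x1000) == 0xA000   # initially backed by region 1
update_mapping(0x1000, 0xB000)       # page migrated to region 2
assert translate(0x1000) == 0xB000   # same GPA, new HPA
```

This mirrors the cited passages: the reconstructed page tables in Sadashiv [0024] and the map update in Byun [0051] both change the backing address while the logical address the client uses stays fixed.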
Regarding claim 16, this is a system claim corresponding to the apparatus claim 7, and is rejected for the same reasons mutatis mutandis.
Regarding claim 8, Sadashiv/Byun/Burugula discloses: The apparatus of claim 1, wherein the page comprises a first page, and wherein in response to a write command to store data in the first page in the first region, the circuitry capable to: gain a first access to the first page in the first region and a second access to a second page in the second region of the memory, and write data to the first page and the second page (Sadashiv: [0021], FIG. 2 teaches two memory regions in RAM 106 having VM1 data pages and VM2 data pages. While writing data to the memory, it is one of the possibilities that the first access to a first page is in the VM1 data page location, i.e., region 1, and the second access is to a second page in the VM2 data page location, i.e., region 2. Byun: [0044] teaches the controller 130 writing data in the memory device 150. This includes the possibility of receiving a write request to write data in the first region (1501A) and a subsequent request to write data to the second region (1501B). The examiner finds no invention in writing to a first page in a first region and to a second page in a second region other than it being one of the possibilities of writing data.).
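As a hypothetical illustration of the claim-8 limitation only (invented names, not from the references), a write command can be applied to both the page in the first region and the corresponding page in the second region:

```python
# Hypothetical sketch: in response to one write command, gain access to
# the first page in region 1 and the second page in region 2, then write
# the data to both so neither copy goes stale during migration.

region1 = {"page0": bytearray(4)}
region2 = {"page0": bytearray(4)}

def dual_write(offset, data):
    # first access: region-1 page; second access: region-2 page
    for region in (region1, region2):
        region["page0"][offset:offset + len(data)] = data

dual_write(0, b"\x01\x02")
assert region1["page0"] == region2["page0"] == bytearray(b"\x01\x02\x00\x00")
```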
Regarding claim 17, this is a system claim corresponding to the apparatus claim 8, and is rejected for the same reasons mutatis mutandis.
Regarding claim 20, this is a method claim corresponding to the apparatus claim 8, and is rejected for the same reasons mutatis mutandis.
Claims 4, 13 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over SADASHIV et al. (US 20230019814 A1) [Sadashiv] in view of BYUN (US 20210117122 A1) [Byun], further in view of Burugula et al. (US 20060288187 A1) [Burugula], and further in view of HYSER et al. (US 8910152 B1) [Hyser].
Regarding claim 4, Sadashiv/Byun/Burugula teaches all the limitations of claim 1. However, Sadashiv/Byun/Burugula did not explicitly teach a 'transactional memory instruction,' that is, moving a region of memory during runtime.
Hyser discloses: The apparatus of claim 1, wherein the circuitry is capable to use transactional memory to migrate data from the first region of the memory to the second region of the memory, the transactional memory having a concurrency control mechanism to control access to shared memory in concurrent computing (The specification defines 'transaction memory instruction' as moving a region of memory during runtime. Spec: [0014] “Moving a region of memory, the current region, to another region, the target region, during runtime can be referred to as hot-remove”. Spec: [0033] “In one example, interconnect 220 implements transaction memory instructions to hot-remove and migrate data”. Spec: [0015] "the operating system uses transactional memory to migrate the memory content from the current region to the target region. Transactional memory is the concurrency control mechanism for controlling access to shared memory in concurrent computing." So, per the spec [0014-0015, 0033], 'transactional memory is the concurrency control mechanism' and 'moving a region of memory to another region during runtime is referred to as hot-remove'. Hence, teaching hot-remove is similar to teaching 'transactional memory having a concurrency control mechanism to control access to shared memory in concurrent computing'. Hyser: claim 1: A method of migrating a virtual machine from a first physical machine to a second physical machine, comprising: in response to an indication that the virtual machine is to be migrated, issuing, in the first physical machine, a hot-remove event notification to an operating system of the virtual machine, wherein the hot-remove event notification is a notification generated by a virtual machine monitor that a hardware resource of the first physical machine is being hot removed; and after issuing the hot-remove event notification, performing, by the first physical machine, migration of the virtual machine to the second physical machine.).
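As a hypothetical illustration only (a lock-based stand-in for a true transactional commit; nothing here comes from the references), the concurrency-control role the specification assigns to transactional memory during migration can be sketched as an all-or-nothing move:

```python
# Hypothetical sketch: migration commits atomically under a lock, so a
# concurrent reader sees the page entirely in the old region or entirely
# in the new one -- never a partially moved state. A lock is used here
# as a simple stand-in for transactional concurrency control.

import threading

class TxMemory:
    def __init__(self):
        self._lock = threading.Lock()
        self.location = "region1"
        self.data = "payload"

    def migrate(self):
        with self._lock:                 # "transaction": all-or-nothing
            moved = self.data            # copy out of the current region
            self.location = "region2"    # commit the new location
            self.data = moved

    def read(self):
        with self._lock:                 # readers also synchronize
            return self.location, self.data

tm = TxMemory()
tm.migrate()
assert tm.read() == ("region2", "payload")
```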
Both Sadashiv/Byun/Burugula and Hyser represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Sadashiv/Byun/Burugula in view of Hyser, as it represents a combination of known prior art elements according to known methods (the memory page migration system of Sadashiv/Byun/Burugula moving/migrating a region of memory during runtime as used in Hyser) to develop a more efficient memory system leading to a more efficient computing system (see also Hyser claim 1).
Regarding claim 13, this is a system claim corresponding to the apparatus claim 4, and is rejected for the same reasons mutatis mutandis.
Regarding claim 21, Sadashiv/Byun/Burugula teaches all the limitations of claim 19. However, Sadashiv/Byun/Burugula did not explicitly teach a 'transactional memory instruction,' that is, moving a region of memory during runtime.
Hyser discloses:
The method of claim 19, wherein migrating the page comprises using transactional memory to migrate data from the first region of the memory to the second region of the memory, the transactional memory having a concurrency control mechanism to control access to shared memory in concurrent computing (The specification defines 'transaction memory instruction' as moving a region of memory during runtime. Spec: [0014] “Moving a region of memory, the current region, to another region, the target region, during runtime can be referred to as hot-remove”. Spec: [0033] “In one example, interconnect 220 implements transaction memory instructions to hot-remove and migrate data”. Spec: [0015] "the operating system uses transactional memory to migrate the memory content from the current region to the target region. Transactional memory is the concurrency control mechanism for controlling access to shared memory in concurrent computing." So, per the spec [0014-0015, 0033], 'transactional memory is the concurrency control mechanism' and 'moving a region of memory to another region during runtime is referred to as hot-remove'. Hence, teaching hot-remove is similar to teaching 'transactional memory having a concurrency control mechanism to control access to shared memory in concurrent computing'. Hyser: claim 1: A method of migrating a virtual machine from a first physical machine to a second physical machine, comprising: in response to an indication that the virtual machine is to be migrated, issuing, in the first physical machine, a hot-remove event notification to an operating system of the virtual machine, wherein the hot-remove event notification is a notification generated by a virtual machine monitor that a hardware resource of the first physical machine is being hot removed; and after issuing the hot-remove event notification, performing, by the first physical machine, migration of the virtual machine to the second physical machine.).
The reasons for obviousness regarding claim 21 are the same as those applied to claim 4 above.
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over SADASHIV et al. (US 20230019814 A1) [Sadashiv] in view of BYUN (US 20210117122 A1) [Byun], further in view of Burugula et al. (US 20060288187 A1) [Burugula], and further in view of Venkatraman et al. (US 20190278513 A1) [Venkatraman].
Regarding claim 9, Sadashiv/Byun/Burugula discloses all the limitations of claim 8. However, Sadashiv/Byun/Burugula did not explicitly disclose the write command including a peripheral component interconnect express (PCIe) memory write packet.
Venkatraman discloses: The apparatus of claim 8, wherein the write command includes a peripheral component interconnect express (PCIe) memory write packet (Venkatraman: [0061] teaches a peripheral component interconnect express (PCIe) host where the storage media controller is a PCIe storage media controller, and a media write generator which is arranged to generate memory write transaction layer packets (TLPs), with a commit field in a header of each TLP, to provide the data chunks of the media slices to be written into the storage media, and the associated commit indicators, to the PCIe storage media controller.).
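A hypothetical sketch of the packet structure at issue (real PCIe TLP headers are bit-packed fields; a plain mapping stands in for them here, and all field names are illustrative) shows a memory write packet carrying a commit indicator in its header, as described in Venkatraman [0061]:

```python
# Hypothetical sketch: a memory write "TLP" with a commit field in its
# header, alongside the payload data chunk to be written to the media.
# A dict stands in for the bit-packed header of a real PCIe TLP.

def make_mem_write_tlp(address, data, commit):
    return {
        "header": {"type": "MWr", "address": address, "commit": commit},
        "payload": data,
    }

tlp = make_mem_write_tlp(0x2000, b"\xde\xad", commit=True)
assert tlp["header"]["type"] == "MWr"
assert tlp["header"]["commit"] is True   # commit indicator rides in the header
```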
Regarding claim 18, this is a system claim corresponding to the apparatus claim 9, and is rejected for the same reasons mutatis mutandis.
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over SADASHIV et al. (US 20230019814 A1) [Sadashiv] in view of BYUN (US 20210117122 A1) [Byun], further in view of Burugula et al. (US 20060288187 A1) [Burugula], and further in view of Deshpande (US 20180089083 A1) [Deshpande].
Regarding claim 10, Sadashiv/Byun/Burugula discloses all the limitations of claim 8. However, Sadashiv/Byun/Burugula did not explicitly disclose the first access and the second access being coherent accesses.
Deshpande discloses: The apparatus of claim 8, wherein the first access and the second access are coherent accesses (Regarding coherent access, the specification defines, spec: [0044] ... "In one example, PCIe block 375 gains coherence ownership of a line to the memory address by issuing an internal RdOwnNoData (or RdOwn) command on its coherent interface. Coherence ownership of a line to a memory address is an exclusive access that prevents any other access to that memory address. Coherence ownership can be referred to as coherent access." Deshpande: [0024] teaches the coherence interface state controller being configured to set the proxy monitor indicator in response to the first exclusive access being a coherent access transaction and issuance of the first exclusive access by the first processor to the coherency interconnection. The coherence interface state controller is configured to reset the proxy monitor indicator in response to a successful write to the first memory address. The first state information includes an exclusive-write-ready indicator. The coherence interface state controller is configured to set the exclusive-write-ready indicator in response to the coherence interface state controller receiving an indication of performance of the first exclusive access from the coherency interconnection. A second exclusive access can be carried out once the exclusive-write-ready indicator is set after the first exclusive access is done. The computing system includes a memory, a memory controller coupled to the memory, and the coherency interconnection coupled to the coherence interface and the memory controller. The coherency interconnection is configured to issue to the memory controller a selected exclusive access to the memory, and it includes first, second, third, etc. exclusive/coherent accesses.).
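The exclusive-ownership behavior quoted from the specification at [0044] can be illustrated with a hypothetical sketch (invented names; not from Deshpande) in which a line, once owned, refuses any other access until ownership is released:

```python
# Hypothetical sketch: coherence ownership of a line as an exclusive
# access -- while one agent owns the line, any other attempted access to
# that memory address is refused, per the quoted spec passage.

class CoherentLine:
    def __init__(self):
        self.owner = None

    def rd_own(self, agent):
        """Gain exclusive (coherent) access to the line."""
        if self.owner is not None:
            raise RuntimeError("line already owned")
        self.owner = agent

    def release(self):
        self.owner = None

line = CoherentLine()
line.rd_own("PCIe-block")        # first agent gains coherence ownership
try:
    line.rd_own("CPU")           # any other access is prevented
    refused = False
except RuntimeError:
    refused = True
assert refused and line.owner == "PCIe-block"
```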
Both Sadashiv/Byun/Burugula and Deshpande represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Sadashiv/Byun/Burugula in view of Deshpande, as it represents a combination of known prior art elements according to known methods (the memory page migration system of Sadashiv/Byun/Burugula using coherent memory accesses as used in Deshpande) to develop a more efficient memory system leading to a more efficient computing system (see also Deshpande [0024]).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S HASAN whose telephone number is (571)270-1737 and email address is mohammad.hasan@uspto.gov. The examiner can normally be reached on Mon-Fri 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tim Vo can be reached on 571-272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.S.H/Examiner, Art Unit 2138
/SHAWN X GU/
Primary Examiner, AU2138