DETAILED ACTION
1. This Office Action is taken in response to Applicants’ Amendments and Remarks filed on 2/19/2026 regarding application 18/460,608 filed on 9/6/2023.
Claims 1, 4-10, 13, 16-17, and 21-23 are pending for consideration.
2. Response to Amendments and Remarks
Applicants’ amendments and remarks have been fully and carefully considered, with the Examiner’s response set forth below.
(1) In response to the amendments and remarks, an updated claim analysis has been made with newly identified reference(s). Refer to the corresponding sections of the following Office Action for details.
3. Examiner’s Note
(1) If the claimed invention is amended, Applicant is respectfully requested to indicate the portion(s) of the specification that provide(s) support for the structure relied on for proper interpretation, and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments that do not point to specific support in the disclosure may be deemed not to comply with the provisions of 37 C.F.R. 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.
(2) Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims above for the convenience of the applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or as discussed by the Examiner.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
4. Claim 23 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claim 23 recites the limitation “the processing unit in the processor resource pool comprises ...” There is insufficient antecedent basis for the element “the processor resource pool” recited in the claim.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
5. Claims 1, 4-7, 9, 13, 16-17, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over Davis et al. (US Patent 10,725,957, hereinafter Davis) in view of Nation et al. (US Patent Application Publication No. 2010/0161879, hereinafter Nation).
As to claim 1, Davis teaches A computer device [as shown in figures 1-9; Nation also teaches this limitation – as shown in figure 3] comprising:
multiple processing units, each processing unit of the multiple processing units comprising a processor, a core in a processor, or a combination of cores in a processor [for example, multiple System on Chips (SoC), figure 1, 102-108; figure 5 further shows that each SoC (500) has two processors (502, 504);
Nation also teaches this limitation – as shown in figure 3, multiple processing units (310a, 310b, 310c), each with a processor (315a, 315b, 315c)];
a memory sharing control device [the corresponding “memory sharing control device”, for example, is the memory agent (MA), figure 2A, 206; figure 6 further shows the details of a memory agent (600), which includes multiple memory controllers (628-634); In conventional multi-socket server computers, each socket generally includes a system on chip (SoC) comprising multiple processors and an integrated memory controller to access memory coupled to the respective SoC … (c2 L35-43); In some embodiments, the memory agent (MA) 206 may include one or more memory controllers to communicate with the memory 208. In some implementations, the MA 206 may include a first memory controller to manage a first bank or portion of the memory 208 for the first SoC 202, and a second memory controller to manage a second bank or portion of the memory 208 for the second SoC 204 … (c8 L15-41);
Nation also teaches this limitation – as shown in figure 3, where the memory sharing control device comprises multiple network interface controllers (325a, 325b, 325c) and a network switch (340)], coupled to the multiple processing units [as shown in figures 1-3, 5, and 6; as shown in figures 2A, 2B, and 3, where the SoCs are coupled to the Memory Agent (MA);
Nation also teaches this limitation – as shown in figure 3]; and
a memory pool coupled to the memory sharing control device, the memory pool comprising multiple memories [memory, figure 6, 636, which includes multiple memory modules (636a-636d);
Nation also teaches this limitation – as shown in figure 3, where the memory pool (350) includes multiple memory appliances (352-358); In some cases, one or more memory appliances in accordance with embodiments of the present invention may be deployed in a rack of servers, or in a data center filled with racks of servers. In such a case, the memory appliance(s) may be configured as a common pool of memory that may be partitioned dynamically to serve as a memory resource for multiple compute platforms (e.g., servers). By sharing a common central resource, the overall system power demand may be lowered and the overall requirement for memory may be lowered. In some cases, such resource sharing allows for more efficient use of available memory (¶ 0041)] and a plurality of virtual memory devices [Davis teaches virtual machines are allocated virtual memory space and execute virtual functions -- Embodiments can provide uniform latency and bandwidth for each SoC to all of the memory by having MAs that are coupled to portions of the memory, external to the SoCs and by coupling each of the MAs directly to the SoCs using a respective SerDes link. Such an architecture can allow an application or a virtual machine (VM) to rely on always having uniform latency to a physical address X in the memory regardless of the SoC (i.e., the processor in the respective SoC) the application or the VM may be executing on … (c4 L37-48); … The functions of an SR-IOV-capable storage adapter device may be classified as physical functions (PFs) or virtual functions (VFs). Physical functions are fully featured functions of the device that can be discovered, managed, and manipulated. Physical functions have configuration resources that can be used to configure or control the storage adapter device. Physical functions include the same configuration address space and memory address space that a non-virtualized device would have. A physical function may have a number of virtual functions associated with it. Virtual functions are similar to physical functions, but are light-weight functions that may generally lack configuration resources, and are generally controlled by the configuration of their underlying physical functions. Each of the physical functions and/or virtual functions may be assigned to a respective thread of execution (such as for example, a virtual machine) running on a host device (c29 L4-20);
Nation more expressly teaches the aspect of virtual memory devices -- A memory appliance may be used in accordance with some embodiments of the present invention to support memory for multiple servers. In some cases, one or more of the multiple servers may be virtualized as is known in the art. In such a case the memory appliance may virtualize the memory ranges offered and managed by the appliance … memory appliances in accordance with different embodiments of the present invention may be employed in relation to a modified kernel environment that can treat a virtual memory swap as a memory page move to and/or from a particular memory device … Turning to FIG. 4, a block diagram of a memory appliance 400 is depicted in accordance with various embodiments of the present invention. As shown, memory appliance 400 includes a number of memory banks 410 each accessible via a memory controller 420. To manage the mapping of global virtualized addresses to physical addresses in memory banks 410 … (¶ 0053-0054); … By programming configuration registers 440, memory appliance 400 can be programmed to operate as the main memory for a number of different virtual machines, with provisioned physical memory spaces assigned to respective virtual machines … large contiguous ranges of the physical memory do not have to be available for mapping to the virtual machine memory spaces … (¶ 0057); Where the request is to allocate memory one or more virtual machines (block 1035), one of the configuration entries from the configuration registers is selected to include the configuration information associated with a particular virtual machine (block 1010) … A configuration corresponding to the specific request is written to the selected configuration entry by writing the VMID, the base address of the virtual machine memory space that is to be supported by the memory appliance, and the range of memory extending from the base address (block 1015) … (¶ 0082)], the memory sharing control device dynamically adjusting a correspondence between an allocated memory address of an allocated memory of the memory pool and one or more allocated processing units [In some embodiments, the memory agent (MA) 206 may include one or more memory controllers to communicate with the memory 208 … the MA 206 may be memory mapped to a portion of a respective physical address range for a processor of each SoC … (c8 L15-41); In some instances, when a processor of the first SoC 202 has to perform a write or read access to the memory 208, the processor may provide an address corresponding to a portion of a memory address range which may be mapped to the MA 206 … (c9 L20-67);
Nation more expressly teaches this limitation -- Yet further embodiments of the present invention provide memory appliances that include a bank of randomly accessible memory, and a memory controller … the size of the first memory region and the size of the second memory region are dynamically allocated by the memory controller (¶ 0015); In some cases, one or more memory appliances in accordance with embodiments of the present invention may be deployed in a rack of servers, or in a data center filled with racks of servers. In such a case, the memory appliance(s) may be configured as a common pool of memory that may be partitioned dynamically to serve as a memory resource for multiple compute platforms (e.g., servers). By sharing a common central resource, the overall system power demand may be lowered and the overall requirement for memory may be lowered. In some cases, such resource sharing allows for more efficient use of available memory. Various embodiments of the present invention provide for dynamically partitioning and sharing memory in a centralized memory appliance.… (¶ 0041-0042); Memory controller 420 is responsible for mapping the real address space represented by configuration entries 450 into a physical address space in memory banks 410. In addition, when an access to a real address space is requested, memory controller 420 is responsible for calculating the physical address that corresponds to the requested real address. To do this, memory controller 420 maintains a dynamic memory map table 460. Dynamic memory map table 460 includes a number of physical entries 470 that identify particular blocks of physical memory in memory banks 410 … Virtual machine identification 472 identifies a virtual machine to which the physical memory associated with the respective physical entry 470 is assigned or allocated … (¶ 0058-0062)];
the memory sharing control device comprising an interface component connected to the multiple processing units via a serial bus, the interface component configured to receive memory access requests from the multiple processing units sent via the serial bus [A plurality of system on chips (SoCs) in a server computer can be coupled to a plurality of memory agents (MAs) via respective Serializer/Deserializer (SerDes) interfaces … (abstract); … In certain embodiments, a plurality of system on chips (SoCs) can be coupled to a plurality of memory agents (MAs) to provide uniform accesses to respective memory attached to each of the MAs. Each SoC can include one or more processors, and can communicate with each MA using a respective point-to-point interconnect. For example, the point-to-point interconnect may include a Serializer/Deserializer (SerDes) link or any suitable high speed serial link. In certain embodiments, each SoC, once configured, can have a dedicated SerDes interface and a dedicated serial link to communicate with each of the MAs … (c3 L35-67); … Each SerDes interface may also include a serial-in-parallel-out (SIPO) module to de-serialize the data (e.g., convert data from a serial interface into a parallel interface) received over the communication link. The communication link may be a serial link that may be referred to as a SerDes link … (c4 L16-21);
Nation also teaches the memory sharing control device comprising an interface component connected to the multiple processing units – as shown in figure 3], and converting the memory access requests into parallel memory access requests [… Each SerDes interface may also include a serial-in-parallel-out (SIPO) module to de-serialize the data (e.g., convert data from a serial interface into a parallel interface) received over the communication link. The communication link may be a serial link that may be referred to as a SerDes link … (c4 L16-21)] (an illustrative sketch of this serial-to-parallel conversion is provided following the analysis of claim 1 below); and
a control unit connected to the interface component [as shown in figure 6, where multiple destination controllers (610-616) are connected to the respective interface components (602-608)] and configured to allocate the allocated memory in the memory pool to the one or more allocated processing units [as shown in figure 6, where multiple memory controllers (628-634) are connected to the respective memory modules (636a-636d); In some embodiments, the memory agent (MA) 206 may include one or more memory controllers to communicate with the memory 208. In some implementations, the MA 206 may include a first memory controller to manage a first bank or portion of the memory 208 for the first SoC 202, and a second memory controller to manage a second bank or portion of the memory 208 for the second SoC 204 … (c8 L15-41); The memory mapping module 506a may be configured to perform mapping of various memory agents to respective portions of address ranges. For example, the memory mapping module 506a may perform mapping of a first MA to a first portion of a memory address range for the first processor 502 and the second processor 504. Similarly, the memory mapping module 506a may perform mapping of a second MA to a second portion of the memory address range for the first processor 502 and the second processor 504, and so on. In some instances, when the memory capacity is increased by adding more memory modules, the memory mapping module 506a can also allow remapping of the MAs to support larger memory address range (c16 L65 to c17 L10);
Nation further teaches this limitation – as shown in figures 3-7; Memory controller 420 is responsible for mapping the real address space represented by configuration entries 450 into a physical address space in memory banks 410. In addition, when an access to a real address space is requested, memory controller 420 is responsible for calculating the physical address that corresponds to the requested real address. To do this, memory controller 420 maintains a dynamic memory map table 460. Dynamic memory map table 460 includes a number of physical entries 470 that identify particular blocks of physical memory in memory banks 410 … Virtual machine identification 472 identifies a virtual machine to which the physical memory associated with the respective physical entry 470 is assigned or allocated … (¶ 0058-0062)];
a cache stage connected to the control unit [Each of the first processor 502 and the second processor 504 may include one or more processing cores or processing logic to execute instructions stored in the computer readable medium 506, such as a processor cache (not shown). The computer readable medium 506 may be non-transitory and may be in the form of a cache, buffer, or memory and may be coupled to the processors and spread throughout the SoC in different configurations without deviating from the scope of the disclosure. In some implementations, the computer readable medium 506 may include a memory mapping module 506a and a configuration module 506b (c15 L32-42);
Nation also teaches this limitation – as shown in figure 3, cache (320a, 320b, 320c); figure 5c, DRAM cache (544) and cache controller (542); … In some cases, cache memory 320 is implemented on the same package as processor 315. Further, in some cases, cache memory 320 may be implemented as a multi-level cache memory as is known in the art … (¶ 0049)]; and
a plurality of memory controllers connected in an upstream direction to the cache stage [as shown in figures 5 and 6; multiple memory source controllers (figure 5, 510-518); multiple memory controllers (figure 6, 628-634); Each of the first processor 502 and the second processor 504 may include one or more processing cores or processing logic to execute instructions stored in the computer readable medium 506, such as a processor cache (not shown). The computer readable medium 506 may be non-transitory and may be in the form of a cache, buffer, or memory and may be coupled to the processors and spread throughout the SoC in different configurations without deviating from the scope of the disclosure. In some implementations, the computer readable medium 506 may include a memory mapping module 506a and a configuration module 506b (c15 L32-42);
Nation also teaches this limitation – as shown in figure 3, cache (320a, 320b, 320c); figure 5c, DRAM cache (544) and cache controller (542); … In some cases, cache memory 320 is implemented on the same package as processor 315. Further, in some cases, cache memory 320 may be implemented as a multi-level cache memory as is known in the art … (¶ 0049)] and connected in a downstream direction to the memories and the plurality of virtual memory devices in the memory pool [as shown in figures 5 and 6; a memory pool (figure 6, 636) with multiple memory devices (636a-636d);
Nation more expressly teaches the aspect of virtual memory devices -- A memory appliance may be used in accordance with some embodiments of the present invention to support memory for multiple servers. In some cases, one or more of the multiple servers may be virtualized as is known in the art. In such a case the memory appliance may virtualize the memory ranges offered and managed by the appliance … memory appliances in accordance with different embodiments of the present invention may be employed in relation to a modified kernel environment that can treat a virtual memory swap as a memory page move to and/or from a particular memory device … Turning to FIG. 4, a block diagram of a memory appliance 400 is depicted in accordance with various embodiments of the present invention. As shown, memory appliance 400 includes a number of memory banks 410 each accessible via a memory controller 420. To manage the mapping of global virtualized addresses to physical addresses in memory banks 410 … (¶ 0053-0054); … By programming configuration registers 440, memory appliance 400 can be programmed to operate as the main memory for a number of different virtual machines, with provisioned physical memory spaces assigned to respective virtual machines … large contiguous ranges of the physical memory do not have to be available for mapping to the virtual machine memory spaces … (¶ 0057); Where the request is to allocate memory one or more virtual machines (block 1035), one of the configuration entries from the configuration registers is selected to include the configuration information associated with a particular virtual machine (block 1010) … A configuration corresponding to the specific request is written to the selected configuration entry by writing the VMID, the base address of the virtual machine memory space that is to be supported by the memory appliance, and the range of memory extending from the base address (block 1015) … (¶ 0082)], the plurality of memory controllers is configured to receive the parallel memory access requests [… Each SerDes interface may also include a serial-in-parallel-out (SIPO) module to de-serialize the data (e.g., convert data from a serial interface into a parallel interface) received over the communication link … (c4 L16-20); … A SIPO module in the first MA SerDes interface of the MA 206 may receive serial data for the memory access request and de-serialize the data to a parallel interface. A first memory controller in the MA 206 may determine if the memory access request includes a write access to the memory 208 or a read access from the memory 208. The first memory controller can send the write access request with the write data and the address to the memory 208 via the memory channel 214 … (c9 L20-50)], and each memory controller of the plurality of memory controllers is configured to receive a parallel memory access request and access a corresponding memory in the memory pool [as shown in figure 6, where each of the memory controllers (628-634) is connected to a corresponding memory device (636a-636d); In some embodiments, the memory agent (MA) 206 may include one or more memory controllers to communicate with the memory 208.
In some implementations, the MA 206 may include a first memory controller to manage a first bank or portion of the memory 208 for the first SoC 202, and a second memory controller to manage a second bank or portion of the memory 208 for the second SoC 204 … (c8 L15-42); In some instances, when a processor of the first SoC 202 has to perform a write or read access to the memory 208, the processor may provide an address corresponding to a portion of a memory address range which may be mapped to the MA 206. Based on the address, the memory access request may be directed to the first SoC SerDes interface of the first SoC 202 for sending to the MA 206 across the first communication link 210 … (c9 L20-50)].
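As an illustrative aside on the serial-in-parallel-out (SIPO) conversion quoted above from Davis (c4 L16-21), the following minimal C sketch shows how a serialized request stream might be reassembled into a parallel word before being handed to a memory controller. The 32-bit width, the framing, and all identifiers (sipo_t, sipo_push) are the Examiner's assumptions for illustration only and are not drawn from either reference.

#include <stdint.h>
#include <stdio.h>

/* Illustrative SIPO shift register: accumulates one bit per serial
 * clock and exposes a 32-bit parallel word once 32 bits have arrived.
 * Width and framing are assumptions for illustration. */
typedef struct {
    uint32_t shift;   /* partially assembled word */
    int      nbits;   /* bits received so far */
} sipo_t;

/* Push one serial bit; returns 1 and writes *out when a full
 * parallel word (e.g., one field of a memory access request) is ready. */
static int sipo_push(sipo_t *s, int bit, uint32_t *out)
{
    s->shift = (s->shift << 1) | (bit & 1);
    if (++s->nbits == 32) {
        *out = s->shift;
        s->shift = 0;
        s->nbits = 0;
        return 1;
    }
    return 0;
}

int main(void)
{
    sipo_t s = {0, 0};
    uint32_t addr = 0xDEADBEEF, word;  /* stand-in request address */

    /* Serialize MSB-first, then deserialize through the SIPO. */
    for (int i = 31; i >= 0; i--)
        if (sipo_push(&s, (addr >> i) & 1, &word))
            printf("parallel word: 0x%08X\n", (unsigned)word);  /* 0xDEADBEEF */
    return 0;
}

An actual SerDes interface would operate in hardware at line rate; the sketch only captures the bit-accumulation behavior described in the quoted passage.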
Regarding claim 1, Davis teaches the memory sharing control device establishes a correspondence between an allocated memory address of an allocated memory of the memory pool and one or more allocated processing units [as shown in figure 6, where each of the memory controllers (628-634) is connected to a corresponding memory device (636a-636d); In some embodiments, the memory agent (MA) 206 may include one or more memory controllers to communicate with the memory 208. In some implementations, the MA 206 may include a first memory controller to manage a first bank or portion of the memory 208 for the first SoC 202, and a second memory controller to manage a second bank or portion of the memory 208 for the second SoC 204 … (c8 L15-42); In some instances, when a processor of the first SoC 202 has to perform a write or read access to the memory 208, the processor may provide an address corresponding to a portion of a memory address range which may be mapped to the MA 206 … (c9 L20-67)], but does not expressly teach dynamically adjusting the correspondence.
However, Nation specifically teaches the memory sharing control device dynamically adjusting a correspondence between an allocated memory address of an allocated memory of the memory pool and one or more allocated processing units [Yet further embodiments of the present invention provide memory appliances that include a bank of randomly accessible memory, and a memory controller … the size of the first memory region and the size of the second memory region are dynamically allocated by the memory controller (¶ 0015); In some cases, one or more memory appliances in accordance with embodiments of the present invention may be deployed in a rack of servers, or in a data center filled with racks of servers. In such a case, the memory appliance(s) may be configured as a common pool of memory that may be partitioned dynamically to serve as a memory resource for multiple compute platforms (e.g., servers). By sharing a common central resource, the overall system power demand may be lowered and the overall requirement for memory may be lowered. In some cases, such resource sharing allows for more efficient use of available memory. Various embodiments of the present invention provide for dynamically partitioning and sharing memory in a centralized memory appliance.… (¶ 0041-0042); Memory controller 420 is responsible for mapping the real address space represented by configuration entries 450 into a physical address space in memory banks 410. In addition, when an access to a real address space is requested, memory controller 420 is responsible for calculating the physical address that corresponds to the requested real address. To do this, memory controller 420 maintains a dynamic memory map table 460. Dynamic memory map table 460 includes a number of physical entries 470 that identify particular blocks of physical memory in memory banks 410 … Virtual machine identification 472 identifies a virtual machine to which the physical memory associated with the respective physical entry 470 is assigned or allocated … (¶ 0058-0062)].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to dynamically adjust a correspondence between an allocated memory address of an allocated memory of the memory pool and one or more allocated processing units, as expressly demonstrated by Nation, and to incorporate it into the existing scheme disclosed by Davis, because Nation teaches that doing so allows the common memory pool to be dynamically partitioned to serve multiple compute platforms, lowering the overall power demand and memory requirement [In some cases, one or more memory appliances in accordance with embodiments of the present invention may be deployed in a rack of servers, or in a data center filled with racks of servers. In such a case, the memory appliance(s) may be configured as a common pool of memory that may be partitioned dynamically to serve as a memory resource for multiple compute platforms (e.g., servers). By sharing a common central resource, the overall system power demand may be lowered and the overall requirement for memory may be lowered. In some cases, such resource sharing allows for more efficient use of available memory (¶ 0041)].
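To further illustrate the "dynamic memory map table" mechanism quoted from Nation (¶ 0058-0062), the following C sketch, offered as an illustrative aside only, models table entries that associate a requester (a VMID or processing-unit identifier) with a block of pool memory and shows the correspondence being adjusted at run time. The block size, table layout, and all identifiers are the Examiner's assumptions, not Nation's.

#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 0x1000u   /* assumed granularity of pool blocks */
#define NUM_BLOCKS 8

/* One entry of an assumed dynamic memory map table: which requester
 * (VMID / processing unit) currently owns a physical block. */
typedef struct {
    int      owner;   /* -1 = unallocated */
    uint64_t base;    /* physical base address of the block */
} map_entry_t;

static map_entry_t map[NUM_BLOCKS];

static void map_init(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++) {
        map[i].owner = -1;
        map[i].base  = (uint64_t)i * BLOCK_SIZE;
    }
}

/* Allocate the first free block to a requester; returns block index. */
static int map_alloc(int owner)
{
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (map[i].owner == -1) { map[i].owner = owner; return i; }
    return -1;
}

/* Dynamically adjust the correspondence: retarget a block. */
static void map_retarget(int blk, int new_owner)
{
    map[blk].owner = new_owner;
}

int main(void)
{
    map_init();
    int blk = map_alloc(/* processing unit */ 0);
    printf("block %d base 0x%llx -> unit %d\n",
           blk, (unsigned long long)map[blk].base, map[blk].owner);
    map_retarget(blk, 1);   /* correspondence adjusted at run time */
    printf("block %d base 0x%llx -> unit %d\n",
           blk, (unsigned long long)map[blk].base, map[blk].owner);
    return 0;
}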
As to claim 4, Davis in view of Nation teaches The computer device according to claim 1, wherein the memory controllers are for different memory types [Davis -- as shown in figure 3, type 1 memory (308) and type 2 memory (312)], and the control unit is further configured to: establish a correspondence between the allocated memory address of the allocated memory in the memory pool and the one or more allocated processing units when allocating the allocated memory to the one or more allocated processing units [Davis -- as shown in figure 6, where each of the memory controllers (628-634) is connected to a corresponding memory device (636a-636d); In some embodiments, the memory agent (MA) 206 may include one or more memory controllers to communicate with the memory 208. In some implementations, the MA 206 may include a first memory controller to manage a first bank or portion of the memory 208 for the first SoC 202, and a second memory controller to manage a second bank or portion of the memory 208 for the second SoC 204 … (c8 L15-42); In some instances, when a processor of the first SoC 202 has to perform a write or read access to the memory 208, the processor may provide an address corresponding to a portion of a memory address range which may be mapped to the MA 206. Based on the address, the memory access request may be directed to the first SoC SerDes interface of the first SoC 202 for sending to the MA 206 across the first communication link 210 … (c9 L20-50);
Nation – as shown in figures 3-7; Yet further embodiments of the present invention provide memory appliances that include a bank of randomly accessible memory, and a memory controller … the size of the first memory region and the size of the second memory region are dynamically allocated by the memory controller (¶ 0015); In some cases, one or more memory appliances in accordance with embodiments of the present invention may be deployed in a rack of servers, or in a data center filled with racks of servers. In such a case, the memory appliance(s) may be configured as a common pool of memory that may be partitioned dynamically to serve as a memory resource for multiple compute platforms (e.g., servers). By sharing a common central resource, the overall system power demand may be lowered and the overall requirement for memory may be lowered. In some cases, such resource sharing allows for more efficient use of available memory. Various embodiments of the present invention provide for dynamically partitioning and sharing memory in a centralized memory appliance.… (¶ 0041-0042); Memory controller 420 is responsible for mapping the real address space represented by configuration entries 450 into a physical address space in memory banks 410. In addition, when an access to a real address space is requested, memory controller 420 is responsible for calculating the physical address that corresponds to the requested real address. To do this, memory controller 420 maintains a dynamic memory map table 460. Dynamic memory map table 460 includes a number of physical entries 470 that identify particular blocks of physical memory in memory banks 410 … Virtual machine identification 472 identifies a virtual machine to which the physical memory associated with the respective physical entry 470 is assigned or allocated … (¶ 0058-0062)].
As to claim 5, Davis in view of Nation teaches The computer device according to claim 4, wherein the control unit is configured to: virtualize a plurality of virtual memory devices from the memory pool, wherein a physical memory corresponding to a first virtual memory device in the plurality of virtual memory devices is the first memory; and allocate the first virtual memory device to the first processing unit [Davis -- Similarly, the second SoC 204 may perform a write or read access to the memory 208 by providing an address that may be mapped to a portion of a memory address range which may also be mapped to the MA 206. Note that the memory mapped address of the MA 206 for the second SoC 204 can be different than or the same as the memory mapped address of the MA 206 for the first SoC 202 … (c9 L51 to c10 L18); In some embodiments, each MA (e.g., the first MA 410, second MA 412, third MA 414, and the fourth MA 416) and each PMA (e.g. the first PMA 418, second PMA 420 and the third PMA 422) may be mapped to a certain respective portion of a memory address range for a processor of each of the first SoC 402, second SoC 404, third SoC 406, or the fourth SoC 408, based on a mapping function. In some embodiments, a cache line interleaving across the first MA 410, second MA 412, third MA 414, and the fourth MA 416 may be performed using the mapping function. For example, the mapping function may utilize a round robin algorithm, perform a hash of some of the address bits to determine mapping of the MAs to the respective memory address ranges, or utilize any other suitable technique without deviating from the scope of the disclosure … (c13 L57 to c14 L11);
Nation -- as shown in figures 4-7; A memory appliance may be used in accordance with some embodiments of the present invention to support memory for multiple servers. In some cases, one or more of the multiple servers may be virtualized as is known in the art. In such a case the memory appliance may virtualize the memory ranges offered and managed by the appliance … memory appliances in accordance with different embodiments of the present invention may be employed in relation to a modified kernel environment that can treat a virtual memory swap as a memory page move to and/or from a particular memory device … Turning to FIG. 4, a block diagram of a memory appliance 400 is depicted in accordance with various embodiments of the present invention. As shown, memory appliance 400 includes a number of memory banks 410 each accessible via a memory controller 420. To manage the mapping of global virtualized addresses to physical addresses in memory banks 410 … (¶ 0053-0054); … By programming configuration registers 440, memory appliance 400 can be programmed to operate as the main memory for a number of different virtual machines, with provisioned physical memory spaces assigned to respective virtual machines … large contiguous ranges of the physical memory do not have to be available for mapping to the virtual machine memory spaces … (¶ 0057); Where the request is to allocate memory one or more virtual machines (block 1035), one of the configuration entries from the configuration registers is selected to include the configuration information associated with a particular virtual machine (block 1010) … A configuration corresponding to the specific request is written to the selected configuration entry by writing the VMID, the base address of the virtual machine memory space that is to be supported by the memory appliance, and the range of memory extending from the base address (block 1015) … (¶ 0082)].
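As an illustrative aside on the configuration entries described by Nation (¶ 0057, ¶ 0082), the following C sketch shows how a contiguous virtual memory range assigned to a VMID might be backed by non-contiguous physical pages, consistent with Nation's statement that large contiguous ranges of physical memory need not be available. The page size, entry layout, and the translate helper are hypothetical and chosen only for illustration.

#include <stdint.h>
#include <stdio.h>

#define PAGE 0x1000u   /* assumed page granularity */

/* Assumed configuration entry per Nation's description (¶ 0082):
 * VMID, base of the VM memory space, and its range.  The per-page
 * table shows a contiguous virtual range mapped onto scattered
 * physical frames (¶ 0057). */
typedef struct {
    int      vmid;
    uint64_t base;      /* base of VM memory space */
    uint64_t range;     /* length of VM memory space */
    uint64_t phys[4];   /* physical page frames backing the range */
} config_entry_t;

/* Translate a VM virtual address to a physical address. */
static uint64_t translate(const config_entry_t *e, uint64_t vaddr)
{
    uint64_t off = vaddr - e->base;
    return e->phys[off / PAGE] + (off % PAGE);
}

int main(void)
{
    /* Contiguous 16 KiB VM range backed by scattered physical pages. */
    config_entry_t e = { 7, 0x10000, 4 * PAGE,
                         { 0x9000, 0x3000, 0xC000, 0x5000 } };
    printf("vmid %d: vaddr 0x10004 -> phys 0x%llx\n",
           e.vmid, (unsigned long long)translate(&e, 0x10004));
    return 0;
}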
As to claim 6, Davis in view of Nation teaches The computer device according to claim 5, wherein the control unit is further configured to: cancel the correspondence between the first virtual memory device and the first processing unit when a preset condition is met; and establish a correspondence between the first virtual memory device and a second processing unit in the multiple processing units [Davis -- In some embodiments, each MA (e.g., the first MA 410, second MA 412, third MA 414, and the fourth MA 416) and each PMA (e.g. the first PMA 418, second PMA 420 and the third PMA 422) may be mapped to a certain respective portion of a memory address range for a processor of each of the first SoC 402, second SoC 404, third SoC 406, or the fourth SoC 408, based on a mapping function. In some embodiments, a cache line interleaving across the first MA 410, second MA 412, third MA 414, and the fourth MA 416 may be performed using the mapping function. For example, the mapping function may utilize a round robin algorithm, perform a hash of some of the address bits to determine mapping of the MAs to the respective memory address ranges, or utilize any other suitable technique without deviating from the scope of the disclosure … (c13 L57 to c14 L11);
Nation -- Yet further embodiments of the present invention provide memory appliances that include a bank of randomly accessible memory, and a memory controller … the size of the first memory region and the size of the second memory region are dynamically allocated by the memory controller (¶ 0015); In some cases, one or more memory appliances in accordance with embodiments of the present invention may be deployed in a rack of servers, or in a data center filled with racks of servers. In such a case, the memory appliance(s) may be configured as a common pool of memory that may be partitioned dynamically to serve as a memory resource for multiple compute platforms (e.g., servers). By sharing a common central resource, the overall system power demand may be lowered and the overall requirement for memory may be lowered. In some cases, such resource sharing allows for more efficient use of available memory. Various embodiments of the present invention provide for dynamically partitioning and sharing memory in a centralized memory appliance.… (¶ 0041-0042); Memory controller 420 is responsible for mapping the real address space represented by configuration entries 450 into a physical address space in memory banks 410. In addition, when an access to a real address space is requested, memory controller 420 is responsible for calculating the physical address that corresponds to the requested real address. To do this, memory controller 420 maintains a dynamic memory map table 460. Dynamic memory map table 460 includes a number of physical entries 470 that identify particular blocks of physical memory in memory banks 410 … Virtual machine identification 472 identifies a virtual machine to which the physical memory associated with the respective physical entry 470 is assigned or allocated … (¶ 0058-0062)].
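As an illustrative aside on the mapping function quoted from Davis (c13 L57 to c14 L11), the following C sketch shows cache-line interleaving of addresses across memory agents using either a round-robin (modulo) rule or a hash of address bits, the two options Davis names. The line size, MA count, and the particular hash are the Examiner's assumptions for illustration only.

#include <stdint.h>
#include <stdio.h>

#define NUM_MAS    4    /* memory agents, per Davis figure 4 */
#define LINE_SHIFT 6    /* assumed 64-byte cache line */

/* Round-robin (modulo) cache-line interleave across MAs. */
static unsigned ma_round_robin(uint64_t addr)
{
    return (unsigned)((addr >> LINE_SHIFT) % NUM_MAS);
}

/* Alternative: hash of some address bits, as Davis also suggests. */
static unsigned ma_hash(uint64_t addr)
{
    uint64_t line = addr >> LINE_SHIFT;
    return (unsigned)((line ^ (line >> 7) ^ (line >> 13)) % NUM_MAS);
}

int main(void)
{
    for (uint64_t a = 0; a < 4 * 64; a += 64)
        printf("addr 0x%03llx -> MA %u (rr) / MA %u (hash)\n",
               (unsigned long long)a, ma_round_robin(a), ma_hash(a));
    return 0;
}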
As to claim 7, Davis in view of Nation teaches The computer device according to claim 1, wherein the cache stage is configured to: cache data read by one of the multiple processing units from the memory pool, or cache data evicted by said one of the multiple processing units [Nation – as shown in figure 3, cache (320a, 320b, 320c); figure 5c, DRAM cache (544) and cache controller (542); … In some cases, cache memory 320 is implemented on the same package as processor 315. Further, in some cases, cache memory 320 may be implemented as a multi-level cache memory as is known in the art … (¶ 0049); … The coherence control point (i.e., one of the virtual machines associated with the memory appliance, another virtual machine, or external memory controller) outside of the memory appliance is then responsible for maintaining any necessary directory of pointers to cached copies, invalidating cached copies, enforcing order, or the like … In such a case, all accesses to the memory appliance space come directly to the memory appliance and the memory appliance is responsible for maintaining a directory of pointers to cached copies, invalidating cache copies when necessary, enforcing order, and the like. In this case the memory appliance is acting much like a CC-NUMA controller as are known in the art would function … (¶ 0069)].
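As an illustrative aside on the cache stage behavior mapped above, the following C sketch models a tiny direct-mapped cache in which a read from the memory pool fills a line and a conflicting fill evicts earlier data, corresponding loosely to caching data read from, and evicted toward, the pool. The cache geometry and all identifiers are hypothetical.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SETS 4   /* assumed tiny direct-mapped cache for illustration */

typedef struct { int valid; uint64_t tag; uint32_t data; } line_t;
static line_t cache[SETS];

/* Stand-in for a memory-pool access. */
static uint32_t pool_read(uint64_t addr) { return (uint32_t)(addr * 3u); }

/* Read through the cache stage: hits are served locally; misses
 * fill the line, evicting (here simply dropping) the old data. */
static uint32_t cached_read(uint64_t addr)
{
    unsigned set = (unsigned)(addr % SETS);
    uint64_t tag = addr / SETS;
    if (cache[set].valid && cache[set].tag == tag)
        return cache[set].data;                 /* hit */
    if (cache[set].valid)
        printf("evict set %u (tag %llu)\n",     /* eviction on conflict */
               set, (unsigned long long)cache[set].tag);
    cache[set] = (line_t){1, tag, pool_read(addr)};
    return cache[set].data;
}

int main(void)
{
    memset(cache, 0, sizeof cache);
    cached_read(1);        /* miss, fill */
    cached_read(1);        /* hit */
    cached_read(1 + SETS); /* conflict: evicts the earlier line */
    return 0;
}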
As to claim 9, Davis in view of Nation teaches The computer device according to claim 7, wherein the cache stage further comprises a quality of service (QoS) engine configured to implement optimized storage of the data that needs to be cached by said one of the multiple processing units in the cache stage [Davis teaches the aspect of QoS -- … In some instances, a packet may include a packet header and a packet payload. The packet header may include information associated with the packet, such as the source, destination, quality of service parameters, length, protocol, routing labels, error correction information, etc. … (c22 L37-59);
Nation -- as shown in figure 3, cache (320a, 320b, 320c); figure 5c, DRAM cache (544) and cache controller (542); … In some cases, cache memory 320 is implemented on the same package as processor 315. Further, in some cases, cache memory 320 may be implemented as a multi-level cache memory as is known in the art … (¶ 0049); … The coherence control point (i.e., one of the virtual machines associated with the memory appliance, another virtual machine, or external memory controller) outside of the memory appliance is then responsible for maintaining any necessary directory of pointers to cached copies, invalidating cached copies, enforcing order, or the like … In such a case, all accesses to the memory appliance space come directly to the memory appliance and the memory appliance is responsible for maintaining a directory of pointers to cached copies, invalidating cache copies when necessary, enforcing order, and the like. In this case the memory appliance is acting much like a CC-NUMA controller as are known in the art would function … (¶ 0069); … In some embodiments of the present invention, an operating system issuing requests to the memory appliance is designed to access a storage pool within the memory bank of the memory appliance thereby using the memory appliance as a high-performance cache in front of a traditional swap file, or in some cases, to supersede the swap file structure entirely … In such cases, pages can be addressed by a variety of means. For example, pages may be globally identified within a virtual machine associated with the memory appliance by the upper bits of the virtual address issued to the memory appliance. A store operation of a page to the page cache may include the store command itself, the virtual address of the page, and optionally the real address of the page within the virtual machine … (¶ 0087-0088)].
As to claim 13, it recites substantially the same limitations as in claim 1, and is rejected for the same reasons set forth in the analysis of claim 1. Refer to “As to claim 1” presented earlier in this Office Action for details.
As to claim 16, it recites substantially the same limitations as in claim 4, and is rejected for the same reasons set forth in the analysis of claim 4. Refer to “As to claim 4” presented earlier in this Office Action for details.
As to claim 17, it recites substantially the same limitations as in claim 5, and is rejected for the same reasons set forth in the analysis of claim 5. Refer to “As to claim 5” presented earlier in this Office Action for details.
As to claim 21, Davis in view of Nation teaches The computer device of claim 1, further comprising the memory sharing control device dynamically adjusting a correspondence between an allocated virtual memory address of an allocated virtual memory and one or more allocated processing units [Nation – as shown in figures 3-7; A memory appliance may be used in accordance with some embodiments of the present invention to support memory for multiple servers. In some cases, one or more of the multiple servers may be virtualized as is known in the art. In such a case the memory appliance may virtualize the memory ranges offered and managed by the appliance … memory appliances in accordance with different embodiments of the present invention may be employed in relation to a modified kernel environment that can treat a virtual memory swap as a memory page move to and/or from a particular memory device … Turning to FIG. 4, a block diagram of a memory appliance 400 is depicted in accordance with various embodiments of the present invention. As shown, memory appliance 400 includes a number of memory banks 410 each accessible via a memory controller 420. To manage the mapping of global virtualized addresses to physical addresses in memory banks 410 … (¶ 0053-0054); … By programming configuration registers 440, memory appliance 400 can be programmed to operate as the main memory for a number of different virtual machines, with provisioned physical memory spaces assigned to respective virtual machines … large contiguous ranges of the physical memory do not have to be available for mapping to the virtual machine memory spaces … (¶ 0057); Where the request is to allocate memory one or more virtual machines (block 1035), one of the configuration entries from the configuration registers is selected to include the configuration information associated with a particular virtual machine (block 1010) … A configuration corresponding to the specific request is written to the selected configuration entry by writing the VMID, the base address of the virtual machine memory space that is to be supported by the memory appliance, and the range of memory extending from the base address (block 1015) … (¶ 0082); Yet further embodiments of the present invention provide memory appliances that include a bank of randomly accessible memory, and a memory controller … the size of the first memory region and the size of the second memory region are dynamically allocated by the memory controller (¶ 0015); In some cases, one or more memory appliances in accordance with embodiments of the present invention may be deployed in a rack of servers, or in a data center filled with racks of servers. In such a case, the memory appliance(s) may be configured as a common pool of memory that may be partitioned dynamically to serve as a memory resource for multiple compute platforms (e.g., servers). By sharing a common central resource, the overall system power demand may be lowered and the overall requirement for memory may be lowered. In some cases, such resource sharing allows for more efficient use of available memory. Various embodiments of the present invention provide for dynamically partitioning and sharing memory in a centralized memory appliance.… (¶ 0041-0042); Memory controller 420 is responsible for mapping the real address space represented by configuration entries 450 into a physical address space in memory banks 410. 
In addition, when an access to a real address space is requested, memory controller 420 is responsible for calculating the physical address that corresponds to the requested real address. To do this, memory controller 420 maintains a dynamic memory map table 460. Dynamic memory map table 460 includes a number of physical entries 470 that identify particular blocks of physical memory in memory banks 410 … Virtual machine identification 472 identifies a virtual machine to which the physical memory associated with the respective physical entry 470 is assigned or allocated … (¶ 0058-0062)].
As to claim 22, it recites substantially the same limitations as in claim 21, and is rejected for the same reasons set forth in the analysis of claim 21. Refer to “As to claim 21” presented earlier in this Office Action for details.
As to claim 23, Davis in view of Nation teaches The computer device of claim 1, wherein the processing unit in the processor resource pool comprises a combination of different cores in a same processor, or comprises a combination of different cores in different processors [Davis -- The processing logic 802 may include application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), systems-on-chip (SoCs), network processing units (NPUs), processors configured to execute instructions or any other circuitry configured to perform logical arithmetic and floating point operations. Examples of processors that may be included in the processing logic 802 may include processors developed by ARM®, MIPS®, AMD®, Intel®, Qualcomm®, and the like. In certain implementations, processors may include multiple processing cores, wherein each processing core may be configured to execute instructions independently of the other processing cores … (c23 L12-42)].
6. Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Davis in view of Nation, and further in view of Noll et al. (US Patent 10,891,234, hereinafter Noll).
Regarding claim 8, Davis in view of Nation does not teach a prefetch engine configured to: prefetch, from the memory pool, the data that needs to be read by said one of the multiple processing units, and cache the data in the cache unit.
However, using a prefetching engine to prefetch data before it is requested is well known and commonly used in the art to reduce data waiting time and access latency.
For example, Noll specifically teaches prefetching data and caching the data in the cache [Column scan may be run independently of any auxiliary data structures, such as dictionaries, hash tables, bit vectors, etc., and may read data from memory only once. According to some embodiments, the column scan may exploit data locality by processing each byte of a cache line. Thus, column scan may profit from a hardware prefetching, for example, such as where a CPU may load data of cache lines into cache before that data may be requested (c7 L51-58)].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to employ a prefetch engine to prefetch data and cache the data in the cache unit, as expressly demonstrated by Noll, and to incorporate it into the existing scheme disclosed by Davis in view of Nation, in order to reduce data waiting time and access latency.
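As an illustrative aside on the prefetching rationale above, the following C sketch shows a next-line prefetcher: after a demand read of address A is served, address A+1 is speculatively filled so that a sequential reader subsequently hits in the cache, reducing waiting time as Noll describes. The cache organization and all identifiers are the Examiner's assumptions for illustration only.

#include <stdint.h>
#include <stdio.h>

#define CACHE_SLOTS 8   /* assumed toy cache keyed by address modulo */

typedef struct { int valid; uint64_t addr; uint32_t data; } slot_t;
static slot_t cache[CACHE_SLOTS];

static uint32_t pool_read(uint64_t addr) { return (uint32_t)(addr + 100); }

static void cache_fill(uint64_t addr)
{
    slot_t *s = &cache[addr % CACHE_SLOTS];
    s->valid = 1; s->addr = addr; s->data = pool_read(addr);
}

/* Demand read with a next-line prefetch: after serving address A,
 * speculatively fill A+1 so a sequential reader hits in cache. */
static uint32_t read_with_prefetch(uint64_t addr)
{
    slot_t *s = &cache[addr % CACHE_SLOTS];
    if (!(s->valid && s->addr == addr)) {
        printf("miss at %llu\n", (unsigned long long)addr);
        cache_fill(addr);
    }
    cache_fill(addr + 1);              /* the prefetch */
    return cache[addr % CACHE_SLOTS].data;
}

int main(void)
{
    read_with_prefetch(10);   /* miss; prefetches 11 */
    read_with_prefetch(11);   /* hit, thanks to the prefetch */
    return 0;
}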
As to claim 9, Davis in view of Nation and Noll teaches The computer device according to claim 7, wherein the cache stage further comprises a quality of service (QoS) engine configured to implement optimized storage of the data that needs to be cached by said one of the multiple processing units in the cache stage [Noll -- … Such a hardware feature may be referred to generally as a quality of service (QoS) feature for multi-core processors with shared cache. Some processors may refer to a specific instance of such a feature as Cache Allocation Technology (CAT), in some non-limiting example implementations, but other comparable technology may be used for QoS with shared cache … (c2 L61 to c3 L10); Kernel support for cache partitioning, which may be, in some embodiments, based at least in part on QoS technology for partitioning a shared cache, such as CAT … (c12 L1-10)].
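As an illustrative aside on the CAT-style quality-of-service feature quoted from Noll, the following C sketch shows way-mask cache partitioning: each requester class may allocate only into the cache ways enabled in its mask, isolating one class's working set from another's. The mask values and identifiers are hypothetical and do not reflect any particular processor's implementation.

#include <stdint.h>
#include <stdio.h>

#define NUM_WAYS 8

/* Assumed CAT-style way masks: each requester class may only
 * allocate into the cache ways set in its mask. */
static const uint8_t way_mask[2] = {
    0xF0,   /* class 0: ways 4-7 */
    0x0F    /* class 1: ways 0-3 */
};

/* Pick a victim way for an allocation, restricted by QoS class. */
static int pick_way(int qos_class, uint8_t lru_hint)
{
    uint8_t allowed = way_mask[qos_class];
    for (int w = 0; w < NUM_WAYS; w++) {
        int way = (lru_hint + w) % NUM_WAYS;   /* rotate for variety */
        if (allowed & (1u << way))
            return way;
    }
    return -1;  /* unreachable while the mask is nonzero */
}

int main(void)
{
    printf("class 0 allocates into way %d\n", pick_way(0, 2));  /* way 4 */
    printf("class 1 allocates into way %d\n", pick_way(1, 2));  /* way 2 */
    return 0;
}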
As to claim 10, Davis in view of Nation and Noll teaches The computer device according to claim 1, wherein the memory sharing control device further comprises a compression/decompression engine, and the compression/decompression engine is configured to: compress or decompress data related to memory access [Noll -- In addition, using dictionaries, each column may be further compressed using different compression methods. If data needs to be decompressed during query processing, for example, for data projections or for construction of intermediate result(s), a corresponding dictionary then may be accessed frequently to look up the actual value, depending on actual data sets and processing requirements (c6 L47-53)].
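As an illustrative aside on the compression/decompression engine mapped above, the following C sketch implements a trivial run-length codec on the memory-access data path. Noll's disclosure concerns dictionary-based column compression; the run-length scheme here is merely a stand-in chosen for brevity, and all identifiers are the Examiner's assumptions.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal run-length encoder: emits (count, byte) pairs and returns
 * the number of output bytes.  A real engine would use a stronger
 * codec; RLE only illustrates compress/decompress on the data path. */
static size_t rle_compress(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255) run++;
        out[o++] = (uint8_t)run;
        out[o++] = in[i];
        i += run;
    }
    return o;
}

static size_t rle_decompress(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i + 1 < n; i += 2)
        for (uint8_t k = 0; k < in[i]; k++)
            out[o++] = in[i + 1];
    return o;
}

int main(void)
{
    uint8_t data[] = "aaaabbbcc", comp[32], back[32];
    size_t c = rle_compress(data, 9, comp);
    size_t d = rle_decompress(comp, c, back);
    printf("compressed to %zu bytes, restored %zu bytes (%s)\n",
           c, d, memcmp(data, back, 9) == 0 ? "match" : "MISMATCH");
    return 0;
}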
Conclusion
7. Claims 1, 4-10, 13, 16-17, and 21-23 are rejected as explained above.
8. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
9. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHENG JEN TSAI whose telephone number is 571-272-4244. The examiner can normally be reached on Monday-Friday, 9-6.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Reginald Bragdon can be reached on 571-272-4204. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/SHENG JEN TSAI/Primary Examiner, Art Unit 2139