DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-18 are pending.
Claims 1-18 are rejected under 35 U.S.C. § 103.
Information Disclosure Statement
No information disclosure statement (IDS) was submitted with this amendment; accordingly, no new references have been considered.
Response to Arguments
Applicant's arguments filed on 09/30/2025 have been fully considered, but they are not persuasive; moreover, some arguments are moot in view of the newly cited prior art.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 6, 7, 8, 9 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over DROPPS et al. (DE 102020108666 B4) [Dropps] in view of Davda et al. (US 20140281056 A1) [Davda], and further in view of YI et al. (JP 2013196696 A) [Yi].
Regarding claim 1, Dropps discloses: A method for a computer system having a plurality of hosts and a switch, comprising: using a first central processing unit (CPU) in a first host of the plurality of hosts to send memory request information according to a storage space required for executing a task (Dropps: abstract: teaches a controller and a controller memory configured to: receive a first request containing an address of a first memory block of a plurality of memory blocks (m01-m06, m15-m23, m50-m52, m77-m79, m91, m92) in a shared memory (107sm) accessible by a plurality of processors (105-1 - 105-4), wherein the shared memory (107sm) contains a first memory device (107-1) connected to a first processor (105-1) of the plurality of processors (105-1 - 105-4), and wherein the first memory (107-1) and the first processor (105-1) are part of a computer node (n1)); using a first cache coherence device in the first host and the switch to forward the memory request information to a second host of the plurality of hosts, so as to request the second host to allocate partial space in a memory to the first CPU for use, wherein the second host comprises a second cache coherence device, [and the second cache coherence device is configured to provide a real-time capacity of an idle space in a storage space of the memory to the first host according to a detection result] (Dropps: abstract: teaches a controller and a controller memory wherein the memory blocks of the plurality of memory blocks (m01-m06, m15-m23, m50-m52, m77-m79, m91, m92) are each associated with a storage category of a plurality of storage categories, the plurality of storage categories comprising different cache coherence protocols for managing cache coherence for corresponding memory blocks (m01-m06, m15-m23, m50-m52, m77-m79, m91, m92), and determines whether a storage category associated with the first memory block is a first storage category or a second storage category, where the first storage category comprises a first cache coherence protocol using a coherence directory comprising state and ownership information stored in the first memory (107-1) of the computer node (n1), and the second storage category comprises a second cache coherence protocol using a coherence directory stored in the controller memory of the controller (105-1) but not in the first memory (107-1) of the computer node (n1); and, in response to determining that the storage category associated with the first memory block is the second storage category, determining whether the coherence directory stored in the controller memory of the controller (105-1) contains state and ownership information corresponding to the address included in the first request and sending a response to the first request based on the state and ownership information corresponding to the address included in the first request and the storage category of the first memory block. Dropps: [0057], Fig. 1 teaches node controller 103-1 receiving a request for a memory block in one of its corresponding memories, triggered by a thread executing on a processor. The request may be a remote request, meaning that it is received by the node controller 103-1 from one of the other node controllers (e.g., the node controller 103-2) on behalf of its local processors (and/or from one of the other processors themselves) of the system 100.
Receiving a request from processors served by node controller 103-2 for allocation of a memory block managed by node controller 103-1 is similar to host 1 (controller 103-2) requesting memory allocation from host 2 (controller 103-1).); in response to allocating the partial space in the memory of the second host, using the second cache coherence device and the switch to provide a physical address of the partial space to the first cache coherence device [for translation to generate a translated physical address]; and accessing the partial space by the first CPU using the translated physical address (Dropps: [0033] teaches the node controllers 103 (and thus the processors 105) communicating with each other using switched connections. The fabric structure 101 (similar to a switch) is used to transfer data and/or messages between or among one or more of the node controllers 103 and/or processors 105. Such communications include requests to read or write memory or cache blocks, in which case the node controllers 103 facilitate cache coherence via multiple concurrently implemented cache coherence protocols for each type of memory category. Facilitating read/write requests implies completing the request and hence accessing the location addressed by the read/write request).
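As an illustrative aid only (this sketch is the examiner's own and is not taken from Dropps or from the claims; every class, method, and address value is hypothetical), the request/allocate/provide-address/translate/access flow mapped above can be modeled as follows:

```python
# Hypothetical model: a first host sends memory request info through a switch,
# the second host allocates partial space and returns its physical address,
# and the first host translates that address for use by its CPU.

class Switch:
    """Models the fabric/switch forwarding messages between hosts."""
    def __init__(self):
        self.hosts = {}

    def register(self, name, host):
        self.hosts[name] = host

    def forward(self, dst, message):
        return self.hosts[dst].handle(message)

class Host:
    def __init__(self, name, switch, memory_size):
        self.name = name
        self.switch = switch
        self.free = memory_size          # idle capacity in bytes
        self.next_pa = 0x1000            # next free physical address (toy model)
        self.translations = {}           # translated PA -> (remote host, remote PA)
        switch.register(name, self)

    def handle(self, message):
        """Responding host: allocate partial space, return its physical address."""
        size = message["size"]
        if size > self.free:
            return None
        self.free -= size
        pa = self.next_pa
        self.next_pa += size
        return {"host": self.name, "pa": pa}

    def request_memory(self, dst, size):
        """Requesting host: send memory request info, translate the returned PA."""
        reply = self.switch.forward(dst, {"src": self.name, "size": size})
        if reply is None:
            raise MemoryError("remote host could not allocate")
        translated = 0x8000_0000 | reply["pa"]   # toy translation scheme
        self.translations[translated] = (reply["host"], reply["pa"])
        return translated

switch = Switch()
host1 = Host("host1", switch, memory_size=0)
host2 = Host("host2", switch, memory_size=1 << 20)
addr = host1.request_memory("host2", size=4096)
print(hex(addr), host1.translations[addr])
```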
Dropps teaches the claim limitations as stated above. However, Dropps did not explicitly disclose the second cache coherence device of the second host providing a physical address to the first cache coherence device of the first host, which translates the address into a translated physical address.
Davda discloses:
in response to allocating the partial space in the memory of the second host, using the second cache coherence device and the switch to provide a physical address of the partial space to the first cache coherence device for translation to generate a translated physical address; and accessing the partial space by the first CPU using the translated physical address (Davda: [0018-0021] teaches a first address translation for translating a guest physical address of the second descriptor to a host physical address of the second descriptor, to be prefetched and cached before the second received data is to be received and the second DMA operation is to be performed; and a second address translation for translating a guest physical address of the second buffer to a host physical address of the second buffer, to be prefetched and cached before the second received data is to be received and the second DMA operation is to be performed. The guest physical address is similar to the one sent by the second host that allocated and provided the memory space, and the translated host physical address is similar to the one translated by the first cache coherence device that is part of the first host. A DMA operation initiated to access the second buffer, together with translating a guest physical address of the second buffer to a host physical address of the second buffer, implies accessing the translated-addressed location.);
Both Dropps and Davda represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Dropps in view of Davda, as it represents a combination of known prior art elements according to known methods (the multiprocessing system of Dropps translating an address sent by the second host to the first host (the host requesting memory allocation), as taught by Davda) to develop a better shared storage system leading to a more efficient computing system (see also Davda [0018-0021]).
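For illustration only, the following minimal sketch models the address-translation prefetching that the examiner reads onto Davda's IOMMU/IOTLB teaching; the class name, the toy page-table structure, and the address values are hypothetical, not Davda's actual implementation:

```python
# Hypothetical sketch: a guest-physical -> host-physical translation is
# prefetched and cached before the DMA operation that relies on it.

class TranslationCache:
    """Toy IOTLB: caches guest-PA -> host-PA translations."""
    def __init__(self, page_table):
        self.page_table = page_table   # authoritative guest->host mapping
        self.cache = {}

    def prefetch(self, guest_pa):
        # Walk the "page table" ahead of time so the DMA path hits the cache.
        self.cache[guest_pa] = self.page_table[guest_pa]

    def translate(self, guest_pa):
        if guest_pa in self.cache:            # fast path: cached translation
            return self.cache[guest_pa]
        host_pa = self.page_table[guest_pa]   # slow path: walk on demand
        self.cache[guest_pa] = host_pa
        return host_pa

page_table = {0x2000: 0x9_2000, 0x3000: 0x9_3000}
iotlb = TranslationCache(page_table)
iotlb.prefetch(0x2000)                        # before the DMA is performed
assert iotlb.translate(0x2000) == 0x9_2000    # DMA path uses the cached entry
```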
Dropps/Davda teaches a host requesting storage space for executing a task and teaches first and second cache coherence devices. However, Dropps/Davda did not explicitly disclose the second host's second cache coherence device providing a real-time capacity of idle space.
Yi discloses:
using a first cache coherence device in the first host and the switch to forward the memory request information to a second host of the plurality of hosts, so as to request the second host to allocate partial space in a memory to the first CPU for use, wherein the second host comprises a second cache coherence device, and the second cache coherence device is configured to provide a real-time capacity of an idle space in a storage space of the memory to the first host according to a detection result (Yi: Espacenet translated version (attached as a PDF file) [0011-0012] teaches a detection module 200 detecting the idle memory capacity of each virtual machine 11 when the memory capacity of the computer device 12 is insufficient at the time a user requests allocation of memory capacity. When a user requests that a new virtual machine 11 be installed on the computer device 12, the user requests that memory capacity be allocated to the new virtual machine 11; alternatively, when the memory capacity of an existing virtual machine 11 in the computer device 12 is insufficient, memory capacity can be allocated to the existing virtual machine 11. A memory monitoring unit is provided inside each virtual machine 11, and the memory monitoring unit monitors the idle memory capacity of the virtual machine 11 in real time. The detection module 200 detects the idle memory capacity of each virtual machine 11 via the memory monitoring unit of each virtual machine 11. The calculation module 210 calculates the total idle memory capacity of all virtual machines 11 based on the idle memory capacity of each virtual machine 11, and determines whether the total idle memory capacity is less than the memory capacity to be allocated. Because the idle memory capacity is determined when a user requests allocation of memory, this is a real-time determination of the idle space in a storage space of the memory.);
Both Dropps/Davda and Yi represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Dropps/Davda in view of Yi, as it represents a combination of known prior art elements according to known methods (the multiprocessing system of Dropps/Davda determining and providing the capacity of idle memory/space in real time, as taught by Yi) to develop a better shared storage system leading to a more efficient computing system (see also Yi [0011-0012]).
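For illustration only, a minimal sketch of real-time idle-capacity detection in the manner the examiner reads onto Yi's detection module 200 and calculation module 210; the function names and capacity figures are hypothetical:

```python
# Hypothetical sketch: each VM's monitor reports idle memory at the moment
# allocation is requested; the totals are summed and compared to the request.

def idle_capacities(vms):
    """Detection step: query each VM's monitor for its idle memory right now."""
    return {vm: monitor() for vm, monitor in vms.items()}

def can_allocate(vms, requested):
    """Calculation step: total idle capacity vs. capacity to be allocated."""
    total_idle = sum(idle_capacities(vms).values())
    return total_idle >= requested

vms = {
    "vm1": lambda: 512,   # each monitor returns current idle capacity in MB
    "vm2": lambda: 256,
}
print(can_allocate(vms, requested=600))   # True: 768 MB idle >= 600 MB
print(can_allocate(vms, requested=900))   # False: insufficient idle memory
```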
Regarding claim 14, this is a non-transitory computer-readable storage medium claim corresponding to the method claim 1, and is rejected for the same reasons mutatis mutandis.
Regarding claim 6, Dropps discloses: A computer system, comprising: a first host (Dropps [0016], Fig. 1, combination of 105-1 and 107-1), comprising: a first CPU (Dropps [0016], Fig. 1, processor 105-1); a first memory communicatively coupled to the first CPU (Dropps [0016], Fig. 1, memory 107-1); and a first cache coherence device communicatively coupled to the first CPU (Dropps: [0014] teaches the node controllers 103 are communicatively coupled to one another via a fabric (or fabric connection) 101. The node controllers 103 are configured to provide certain management functions on behalf of respective compute nodes, including managing cache coherence and/or implementing cache coherence protocols or other memory access protocols. Thus, node controller 1 performs the function of a cache coherence device); a second host (Dropps [0016], Fig. 1, combination of 105-3 and 107-3), comprising: a second CPU (Dropps [0016], Fig. 1, processor 105-3); and a second cache coherence device communicatively coupled to the second CPU (Dropps: [0014] teaches the node controllers 103 are communicatively coupled to one another via a fabric (or fabric connection) 101. The node controllers 103 are configured to provide certain management functions on behalf of respective compute nodes, including managing cache coherence and/or implementing cache coherence protocols or other memory access protocols. Thus, node controller 2 performs the function of a cache coherence device); and a switch communicatively coupled to the first cache coherence device and the second cache coherence device, wherein the second CPU is configured to send memory request information according to a storage space required for executing a task, and the memory request information is transmitted to the first host through the second cache coherence device and the switch to request the first memory to allocate a target space to the second CPU for executing the task (Dropps: abstract: teaches a controller and a controller memory wherein the memory blocks of the plurality of memory blocks (m01-m06, m15-m23, m50-m52, m77-m79, m91, m92) are each associated with a storage category of a plurality of storage categories, the plurality of storage categories comprising different cache coherence protocols for managing cache coherence for corresponding memory blocks (m01-m06, m15-m23, m50-m52, m77-m79, m91, m92), and determines whether a storage category associated with the first memory block is a first storage category or a second storage category, where the first storage category comprises a first cache coherence protocol using a coherence directory comprising state and ownership information stored in the first memory (107-1) of the computer node (n1), and the second storage category comprises a second cache coherence protocol using a coherence directory stored in the controller memory of the controller (105-1) but not in the first memory (107-1) of the computer node (n1); and, in response to determining that the storage category associated with the first memory block is the second storage category, determining whether the coherence directory stored in the controller memory of the controller (105-1) contains state and ownership information corresponding to the address included in the first request and sending a response to the first request based on the state and ownership information corresponding to the address included in the first request and the storage category of the first memory block. Dropps: [0057], Fig. 1 teaches node controller 103-1 receiving a request for a memory block in one of its corresponding memories, triggered by a thread executing on a processor. The request may be a remote request, meaning that it is received by the node controller 103-1 from one of the other node controllers (e.g., the node controller 103-2) on behalf of its local processors (and/or from one of the other processors themselves) of the system 100. Receiving a request from processors served by node controller 103-2 for allocation of a memory block managed by node controller 103-1 is similar to host 1 (controller 103-2) requesting memory allocation from host 2 (controller 103-1). In claim 6, Applicant has host 2 requesting memory from host 1, whereas in claim 1 host 1 was requesting memory from host 2. Since host 1 and host 2 are identical, either may be designated host 1 or host 2; hence, the examiner applies the same teachings applied to claim 1, simply treating host 1 (in claim 1) as host 2 (in claim 6) and host 2 (in claim 1) as host 1 (in claim 6).); wherein the first cache coherence device is configured to transmit a physical address of the target space to the second cache coherence device through the switch, and the second cache coherence device is configured to execute address translation to translate the physical address into a translated physical address; [and configured to provide a real-time capacity of an idle space in a storage space of the memory to the first host according to a detection result]; and wherein the second CPU is further configured to access the target space through the translated physical address (Dropps: [0033] teaches the node controllers 103 (and thus the processors 105) communicating with each other using switched connections. The fabric structure 101 is used to transfer data and/or messages between or among one or more of the node controllers 103 and/or processors 105. Such communications include requests to read or write memory or cache blocks, in which case the node controllers 103 facilitate cache coherence via multiple concurrently implemented cache coherence protocols for each type of memory category. Facilitating read/write requests implies completing the request and hence accessing the location addressed by the read/write request).
Dropps teaches the claim limitations as stated above. However, Dropps did not explicitly disclose one host's cache coherence device providing a physical address to the other host's cache coherence device, which translates the address into a translated physical address.
Davda discloses:
wherein the first cache coherence device is configured to transmit a physical address of the target space to the second cache coherence device through the switch, and the second cache coherence device is configured to execute address translation to translate the physical address into a translated physical address; [and configured to provide a real-time capacity of an idle space in a storage space of the memory to the first host according to a detection result]; and wherein the second CPU is further configured to access the target space through the translated physical address (Davda: [0018-0021] teaches a first address translation for translating a guest physical address of the second descriptor to a host physical address of the second descriptor, to be prefetched and cached before the second received data is to be received and the second DMA operation is to be performed; and a second address translation for translating a guest physical address of the second buffer to a host physical address of the second buffer, to be prefetched and cached before the second received data is to be received and the second DMA operation is to be performed. The guest physical address is similar to the one sent by the host that allocated and provided the memory space, and the translated host physical address is similar to the one translated by the requesting host's cache coherence device. A DMA operation initiated to access the second buffer, together with translating a guest physical address of the second buffer to a host physical address of the second buffer, implies accessing the translated-addressed location.)
Both Dropps and Davda represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Dropps in view of Davda, as it represents a combination of known prior art elements according to known methods (the multiprocessing system of Dropps translating an address sent by one host to the other host (the host requesting memory allocation), as taught by Davda) to develop a better shared storage system leading to a more efficient computing system (see also Davda [0018-0021]).
Dropps/Davda teaches a host requesting storage space for executing a task and teaches first and second cache coherence devices. However, Dropps/Davda did not explicitly disclose the second cache coherence device of the second host providing a real-time capacity of idle space.
Yi discloses:
wherein the first cache coherence device is configured to transmit a physical address of the target space to the second cache coherence device through the switch, and the second cache coherence device is configured to execute address translation to translate the physical address into a translated physical address; and configured to provide a real-time capacity of an idle space in a storage space of the memory to the first host according to a detection result; and wherein the second CPU is further configured to access the target space through the translated physical address (Yi: Espacenet translated version (attached as a PDF file) [0011-0012] teaches a detection module 200 detecting the idle memory capacity of each virtual machine 11 when the memory capacity of the computer device 12 is insufficient at the time a user requests allocation of memory capacity. When a user requests that a new virtual machine 11 be installed on the computer device 12, the user requests that memory capacity be allocated to the new virtual machine 11; alternatively, when the memory capacity of an existing virtual machine 11 in the computer device 12 is insufficient, memory capacity can be allocated to the existing virtual machine 11. A memory monitoring unit is provided inside each virtual machine 11, and the memory monitoring unit monitors the idle memory capacity of the virtual machine 11 in real time. The detection module 200 detects the idle memory capacity of each virtual machine 11 via the memory monitoring unit of each virtual machine 11. The calculation module 210 calculates the total idle memory capacity of all virtual machines 11 based on the idle memory capacity of each virtual machine 11, and determines whether the total idle memory capacity is less than the memory capacity to be allocated. Because the idle memory capacity is determined when a user requests allocation of memory, this is a real-time determination of the idle space in a storage space of the memory.);
Both Dropps/Davda and Yi represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Dropps/Davda in view of Yi, as it represents a combination of known prior art elements according to known methods (the multiprocessing system of Dropps/Davda determining and providing the capacity of idle memory/space in real time, as taught by Yi) to develop a better shared storage system leading to a more efficient computing system (see also Yi [0011-0012]).
Regarding claim 7, Dropps/Davda/Yi discloses: The computer system according to claim 6, wherein in response to the first host and the second host establishing a communication through the switch, the first cache coherence device is further configured to expose a partial space in the first memory to the second CPU for use by the second CPU, wherein the partial space comprises the target space (Dropps: [0055-0077], FIG. 4 teaches processing a memory request, which includes communication between the requesting and responding hosts and allocation of the requested memory space to the requesting host by the responding host if memory is available. Davda: [0023] teaches techniques involving address translation that include reserving buffers in memory, including allocating the first one of the buffers to the application in response to receiving a request from the application via the interface. Davda: [0027] teaches performing address translation prefetching in which address translations associated with a DMA operation are caused to be prefetched (e.g., by an IOMMU) and cached (e.g., in an IOTLB used by the IOMMU) before a DMA operation that is to rely on the address translation is to be performed, and teaches allocating the buffers to which data is to be stored and/or from which data is to be read in known (e.g., contiguous) regions of memory).
Regarding claim 8, Dropps/Davda/Yi discloses: The computer system according to claim 7, wherein: the first memory comprises a local storage space and a remote storage space (Dropps: FIG. 1, memory 107-1 is used by processor 105-1 and a portion of it is also shared by other remote processors, such as processors 105-3 and 105-4; hence memory 107-1 comprises both local memory and remote memory. Dropps: [0017] teaches that memory may be local to one processor and remote to other processors. For example, in FIG. 1, each of the memories (e.g., memory 107-1) may be considered or referred to as "local" to one of the processors (e.g., processor 105-1) to which it is communicatively coupled (e.g., directly connected). Any of the memories that are not local to a processor may be considered or referred to as "remote" from those processors. Likewise, the processors 105 and memories 107 (and/or the nodes n1 through n4) may be local or remote from one of the node controllers 103. For example, as in FIG. 1, node controller 103-1 is communicatively coupled to processors 105-1 and 105-2 (and thus their local memories 107-1 and 107-2). Therefore, processors 105-1 and 105-2 (and their local memories 107-1 and 107-2) are local to node controller 103-1, while the other processors and memories may be considered remote from node controller 103-1. It is understood that node controllers 103 may have any number of local processors and memories.); and in response to the first cache coherence device and the second cache coherence device establishing a communication through the switch, the first cache coherence device is further configured to expose a residual space of the remote storage space to the second cache coherence device (Dropps: [0055-0077], FIG. 4 teaches processing a memory request, which includes communication between the requesting and responding hosts and allocation of the requested memory space to the requesting host by the responding host if memory is available.).
Regarding claim 9, Dropps/Davda/Yi discloses: The computer system according to claim 8, wherein: the second host further comprises a second memory, and in response to the storage space required for executing a task by the second CPU being less than the residual space of the second memory, the second cache coherence device is further configured to request the target space from the first memory through the switch and the first cache coherence device (Dropps: [0033] teaches the node controllers 103 (and thus the processors 105) communicating with each other using switched connections. The fabric structure 101 (similar to a switch) is used to transfer data and/or messages between or among one or more of the node controllers 103 and/or processors 105. Such communications include requests to read or write memory or cache blocks, in which case the node controllers 103 facilitate cache coherence via multiple concurrently implemented cache coherence protocols for each type of memory category. Communicating read or write requests for memory or cache blocks includes the possibility of the second CPU/processor requesting memory space from the memory attached to the first CPU/processor (the first memory).).
Claims 2, 11 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over DROPPS et al. (DE 102020108666 B4) [Dropps] in view of Davda et al. (US 20140281056 A1) [Davda], in view of YI et al. (JP 2013196696 A) [Yi], and further in view of Gu et al. (US 20160234311 A1) [Gu].
Regarding claim 2, Dropps/Davda/Yi discloses all the limitations of claim 1. However, Dropps/Davda/Yi did not explicitly disclose the requesting host/CPU transmitting the translated physical address to the second host.
Gu discloses: The method according to claim 1, wherein in response to the first CPU using the translated physical address to access the partial space by the first cache coherence device, transmitting the physical address mapped to the translated physical address to the second host through the switch, wherein the first cache coherence device records mapping information of the physical address and the translated physical address (Gu: [0011] teaches sending, by the cloud control device, an access request message for accessing the to-be-accessed data to a cloud control device on the side of the contributing node (similar to the second host, the host providing the memory) according to the identification information of the contributing node, so that the cloud control device on the side of the contributing node queries a second mapping relationship according to the identification information of the requesting node (similar to the first host, the host requesting memory allocation) in the access request message to acquire a second physical address of the to-be-accessed data, and transmits the second physical address of the to-be-accessed data and the access request message to the contributing node, so that the contributing node completes an access operation on the to-be-accessed data at the second physical address of the to-be-accessed data according to the access request message, where the access request message includes the identification information of the requesting node (transmitting a physical address from the requesting node to the contributing node is similar to transmitting the physical address from the CPU or first host to the second host, and the device that handles this message exchange is similar to the switch). Gu: [0010-0011] teaches recording a first mapping relationship and a second mapping relationship of the first physical address and the second physical address, which is similar to the first cache coherence device recording mapping information of the physical address.)
Both Dropps/Davda/Yi and Gu represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Dropps/Davda/Yi in view of Gu, as it represents a combination of known prior art elements according to known methods (the multiprocessing system of Dropps/Davda/Yi sending the physical address generated by the requesting/first host to the contributing/second host, as taught by Gu) to develop a better shared storage system leading to a more efficient computing system (see also Gu [0009-0011]).
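For illustration only, a minimal sketch of recording and consulting the mapping between a translated physical address and the remote physical address, in the manner the examiner reads onto Gu's first and second mapping relationships; all names and address values are hypothetical:

```python
# Hypothetical sketch: the requesting side records the mapping between the
# translated address its CPU uses and the remote (host, physical address)
# pair, then consults that record on access to know what to transmit.

class MappingTable:
    def __init__(self):
        self.to_remote = {}    # translated PA -> (remote host, remote PA)

    def record(self, translated_pa, remote_host, remote_pa):
        self.to_remote[translated_pa] = (remote_host, remote_pa)

    def lookup(self, translated_pa):
        # On CPU access with the translated address, recover the physical
        # address to transmit to the contributing host through the switch.
        return self.to_remote[translated_pa]

table = MappingTable()
table.record(0x8000_1000, "host2", 0x1000)
host, pa = table.lookup(0x8000_1000)
print(f"forward access to {host} at physical address {hex(pa)}")
```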
Regarding claim 11, this is a system claim corresponding to the method claim 2, and is rejected for the same reasons mutatis mutandis.
Regarding claim 15, this is a non-transitory computer-readable storage medium claim corresponding to the method claim 2, and is rejected for the same reasons mutatis mutandis.
Claims 3, 4, 12, 13, 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over DROPPS et al. (DE 102020108666 B4) [Dropps] in view of Davda et al. (US 20140281056 A1) [Davda], in view of YI et al. (JP 2013196696 A) [Yi], and further in view of ZHANG et al. (KR 20230172394 A) [Zhang].
Regarding claim 3, Dropps/Davda/Yi discloses: The method according to claim 1, wherein the first cache coherence device, the second cache coherence device and the switch communicate with each other through a cache coherence interconnection protocol (Dropps: [0033] teaches that the fabric structure 101, via which the node controllers 103 (and thus the processors 105) communicate with each other, may contain one or more switched connections. For example, the fabric structure 101 may include direct connections between the node controllers 103-1 and 103-2 (e.g., to minimize latency). Accordingly, the fabric structure 101 transfers data and/or messages between or among one or more of the node controllers 103 and/or processors 105. Such communications include requests to read or write memory or cache blocks, in which case the node controllers 103 provide or facilitate cache coherence via multiple concurrently implemented cache coherence protocols for each type of memory category.).
However, Dropps/Davda/Yi did not explicitly disclose a cache coherence interconnection protocol.
Zhang discloses:
The method according to claim 1, wherein the first cache coherence device, the second cache coherence device and the switch communicate with each other through a cache coherence interconnection protocol (Zhang (Espacenet version): [0025] teaches storage device 120-1 and/or 120-2 (collectively referred to as storage device 120) supporting any desired protocol or protocols, including, for example, the Non-Volatile Memory Express (NVMe) protocol. Different storage devices 120 may support different protocols and/or interfaces, including cache coherence interconnection protocols. An example of such a cache-coherent interconnect protocol is the Compute Express Link (CXL) protocol, which supports data access in blocks using the cxl.io protocol and in bytes using the cxl.memory protocol.).
Both Dropps/Davda/Yi and Zhang represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Dropps/Davda/Yi in view of Zhang, as it represents a combination of known prior art elements according to known methods (the multiprocessing system of Dropps/Davda/Yi using a cache-coherent interconnect protocol to communicate between the requesting/first host and the contributing/second host, as taught by Zhang) to develop a more reliable shared storage system leading to a more efficient computing system (see also Zhang [0025]).
Regarding claim 4, Dropps/Davda/Yi discloses: The method according to claim 1, wherein the first cache coherence device is configured to translate the memory request information sent by the first CPU in accordance with a first protocol into memory request information in accordance with a second protocol and to send the memory request information in accordance with the second protocol to the switch; wherein the second cache coherence device is configured to translate the memory request information transmitted by the switch in accordance with the second protocol into memory request information in accordance with the first protocol and to send the memory request information in accordance with the first protocol to a second CPU, and wherein the first protocol is in accordance with protocol provisions of a cache coherence interconnection protocol, and the second protocol is in accordance with the protocol provisions of a network protocol (The instant claim recites translating the message/data used while transferring/communicating from one logic block to another; such interfacing, which involves some protocol, is essential in logic design for transferring signals/data from one block to another, and Dropps and Zhang teach the same.
Dropps: [0033] teaches the node controllers 103 (and thus the processors 105) communicating with each other using switched connections. The fabric structure 101 (similar to a switch or a network switch) is used to transfer data and/or messages between or among one or more of the node controllers 103 and/or processors 105. Such communications include requests to read or write memory or cache blocks, in which case the node controllers 103 facilitate cache coherence via multiple concurrently implemented cache coherence protocols for each type of memory category.).
However, Dropps/Davda/Yi did not explicitly disclose a cache coherence interconnection protocol. Zhang discloses various communication protocols and/or interfaces.
Zhang discloses:
wherein the second cache coherence device is configured to translate the memory request information transmitted by the switch in accordance with the second protocol into memory request information in accordance with the first protocol and to send the memory request information in accordance with the first protocol to a second CPU, and wherein the first protocol is in accordance with protocol provisions of a cache coherence interconnection protocol, and the second protocol is in accordance with the protocol provisions of a network protocol (Zhang: [0025] teaches storage device 120-1 and/or 120-2 (collectively referred to as storage device 120) supporting any desired protocol or protocols, including, for example, the Non-Volatile Memory Express (NVMe) protocol. Different storage devices 120 may support different protocols and/or interfaces, including cache coherence interconnection protocols. An example of such a cache-coherent interconnect protocol is the Compute Express Link (CXL) protocol, which supports data access in blocks using the cxl.io protocol and in bytes using the cxl.memory protocol.).
The reasons for obviousness regarding claim 4 are the same as those applied for claim 3 above.
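For illustration only, a minimal sketch of translating a memory request between a first (cache-coherent interconnect) protocol and a second (network) protocol, as recited in claim 4; the message formats and framing below are invented for illustration and do not reflect the actual CXL wire format or any format disclosed in the cited references:

```python
# Hypothetical sketch: each cache coherence device converts the memory
# request between its CPU-facing first-protocol form and the switch-facing
# second-protocol form, so the request round-trips unchanged.

import json

def to_network(coherent_msg):
    """First device: wrap a first-protocol request for transit over the switch."""
    return json.dumps(coherent_msg).encode()   # toy "network protocol" framing

def from_network(packet):
    """Second device: unwrap back into the first-protocol representation."""
    return json.loads(packet.decode())

request = {"op": "mem_request", "size": 4096, "src": "cpu1"}
packet = to_network(request)          # second-protocol form sent to the switch
restored = from_network(packet)       # first-protocol form sent to the second CPU
assert restored == request
```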
Regarding claim 13, this is a system claim corresponding to the method claim 4, and is rejected for the same reasons mutatis mutandis.
Regarding claim 16, this is a non-transitory computer-readable storage medium claim corresponding to the method claim 3, and is rejected for the same reasons mutatis mutandis.
Regarding claim 17, this is a non-transitory computer-readable storage medium claim corresponding to the method claim 4, and is rejected for the same reasons mutatis mutandis.
Regarding claim 12, Dropps/Davda/Yi teaches all the limitations of claim 6. However, Dropps/Davda/Yi did not explicitly disclose that the switch comprises a compute express link switch.
Zhang discloses: The computer system according to claim 6, wherein the switch comprises a compute express link (CXL) switch or a network switch (Zhang: [0025] teaches storage device 120-1 and/or 120-2 (collectively referred to as storage device 120) supporting any desired protocol or protocols, including, for example, the Non-Volatile Memory Express (NVMe) protocol. Different storage devices 120 may support different protocols and/or interfaces, including cache coherence interconnection protocols. An example of such a cache-coherent interconnect protocol is the Compute Express Link (CXL) protocol, which supports data access in blocks using the cxl.io protocol and in bytes using the cxl.memory protocol.).
The reasons for obviousness regarding claim 12 are the same as those applied for claim 3 above.
Claims 5, 10 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over DROPPS et al. (DE 102020108666 B4) [Dropps] in view of Davda et al. (US 20140281056 A1) [Davda], in view of YI et al. (JP 2013196696 A) [Yi], and further in view of Im Eun-ji et al. (KR 101781063 B1) [Im].
Regarding claim 5, Dropps/Davda/Yi discloses all the limitations of claim 1. However, Dropps/Davda/Yi did not explicitly disclose monitoring a use state of the memory and providing a monitoring result to the first cache coherence device. Im discloses: The method according to claim 1, wherein the computer system further comprises a resource monitoring module communicatively coupled to the memory, and the method further comprises: monitoring a use state of the memory, and providing a monitoring result to the first cache coherence device (Im (Espacenet version): [0073] teaches a main control unit 701 continuously tracking information on the resource use state according to the execution of the task through the virtual node resource monitoring unit 703 and the resource status information storage unit 702. [0065] teaches that the monitored resources include CPU, memory, and network bandwidth. The control unit is similar to the cache coherence device).
Both Dropps/Davda/Yi and Im represent works within the same field of endeavor, namely information processing devices focusing on data storage and retrieval operations. It would therefore have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply Dropps/Davda/Yi in view of Im, as it represents a combination of known prior art elements according to known methods (the multiprocessing system of Dropps/Davda/Yi monitoring resource usage and tracking available memory, as taught by Im) to develop a more efficient shared storage system leading to a more efficient computing system (see also Im [0065-0073]).
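For illustration only, a minimal sketch of a resource monitoring module that tracks a memory use state and provides a monitoring result to a controlling device, in the manner the examiner reads onto Im's monitoring and status-storage units; all names and sizes are hypothetical:

```python
# Hypothetical sketch: a monitor tracks allocations/releases against total
# memory and reports the use state (the "monitoring result") on demand.

class ResourceMonitor:
    def __init__(self, total_memory):
        self.total = total_memory
        self.used = 0

    def note_allocation(self, size):
        self.used += size

    def note_release(self, size):
        self.used -= size

    def report(self):
        """Monitoring result provided to the cache coherence device."""
        return {"total": self.total, "used": self.used,
                "idle": self.total - self.used}

monitor = ResourceMonitor(total_memory=1 << 20)
monitor.note_allocation(4096)
print(monitor.report())   # {'total': 1048576, 'used': 4096, 'idle': 1044480}
```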
Regarding claim 10, this is a system claim corresponding to the method claim 5, and is rejected for the same reasons mutatis mutandis.
Regarding claim 18, this is a non-transitory computer-readable storage medium claim corresponding to the method claim 5, and is rejected for the same reasons mutatis mutandis.
Conclusion
Applicant’s amendment necessitated the new grounds of rejection presented in this office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD S HASAN whose telephone number is (571)270-1737 and email address is mohammad.hasan@uspto.gov. The examiner can normally be reached on Mon-Fri 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tim Vo can be reached on 571-272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/M.S.H/Examiner, Art Unit 2138
/SHAWN X GU/
Primary Examiner, AU2138