Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
The instant application, Application No. 18/825,003, has a total of 20 claims pending: 2 independent claims and 18 dependent claims, all of which are ready for examination by the examiner.
The specification has not been checked to the extent necessary to determine the presence of all possible minor errors.
The first instance of each acronym or abbreviation should be spelled out for clarity, whether or not it is considered well known in the art.
In response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.
Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
INFORMATION CONCERNING DRAWINGS
The applicant’s drawings submitted are acceptable for examination purposes.
STATUS OF CLAIM FOR PRIORITY IN THE APPLICATION
The instant application, No. 18/825,003, filed 09/05/2024, claims priority from Provisional Application No. 63/598,135, filed 11/12/2023.
ACKNOWLEDGEMENT OF REFERENCES CITED BY APPLICANT
As required by MPEP 609(C), the applicant’s submission of the Information Disclosure Statement dated 9/5/2024 is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by MPEP 609(C)(2), a copy of the PTOL-1449, initialed and dated by the examiner, is attached to the instant Office action.
REJECTIONS NOT BASED ON PRIOR ART
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 12-13 and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
As per claim 12, the limitations “SEND busy,” “RECV busy,” and “L1” render the claim indefinite, since the meaning of these limitations is not clear within the scope of the claim. Appropriate correction/clarification is required.
Claim 13 is rejected for the reasons indicated above with respect to claim 12.
Claim 20 is rejected for the reasons indicated above with respect to claim 12.
REJECTIONS BASED ON PRIOR ART
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-5, 7-9, and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Nation et al. (US 2010/0161908) in view of Narad et al. (US 2006/0236011).
1. A computing system comprising a memory device and a plurality of computing engines, the memory device, comprising: [Nation teaches Shared memory resource 250 and a number of processing systems 210 (figs. 2-4 and related text)]
a memory storage; and [Nation teaches “Shared memory resource 250 includes one or more memory appliances 252, 254, 256, 258” (par. 0047; figs. 2-4 and related text)]
a memory controller, used to utilize a plurality of registers to manage a plurality of memory spaces in the memory storage; [(Note: the phrasing “used to utilize… to manage” does not require the claimed controller to perform the listed functionality, but merely a controller capable of performing the listed functionality. See MPEP 2111.04 and MPEP 2114.) Nation teaches “To manage the mapping of global virtualized addresses to physical addresses in memory banks 410, memory controller 420 includes an MMU or equivalent page table (e.g., configuration registers)” (par. 0054) and “Network controller 430 includes a set of configuration registers 440 that are programmable and used to identify memory regions that are supported by memory appliance 400. Configuration registers 440 include a number of configuration entries 450 that identify individual regions of memory supported by memory appliance 400.” (par. 0057; fig. 4 and related text)]
wherein the plurality of computing engines are used to (Note: the phrasing “used to” does not require the engines to perform the listed functionality, but merely engines where the functionality is not expressly precluded. See MPEP 2111.04 and MPEP 2114.) execute a plurality of computations, read a plurality of consumed data from the memory storage for processing, and write the processed data as produced data to the memory storage; [Nation teaches “[0039] The present inventions are related to systems and methods for storing and accessing information, and more particularly to systems and methods for providing a randomly accessible memory that may be shared across multiple virtual machines or processors.” where storing corresponds to producing data and reading corresponds to consuming data], but Nation does not expressly refer to the read data as consumed data or the written data as produced data.
wherein the memory controller is configured to utilize a managing table to record [(Note: the phrasing “table to record” does not require the table to perform the listed functionality, but merely a table where the functionality is not expressly precluded. See MPEP 2111.04 and MPEP 2114. Additionally, a table recording different types of information may be interpreted as nonfunctional descriptive material, since the material is not functionally related to the controller. "Nonfunctional descriptive material" includes, but is not limited to, music, literary works, and a compilation or mere arrangement of data (MPEP 2106.IV.B.1). Note that "nonfunctional descriptive material cannot render nonobvious an invention that would have otherwise been obvious." Ex parte Curry; see MPEP 2111.05.) Nation teaches “To manage the mapping of global virtualized addresses to physical addresses in memory banks 410, memory controller 420 includes an MMU or equivalent page table (e.g., configuration registers). The MMU, or equivalent structure manages memory banks 410 as pages (i.e., blocks of memory) that may be of defined size (e.g., 1K, 4K, 1 M, 1 G) and maps accesses received via network interface controller 430 that are "real addresses" or "global addresses" to physical addresses within memory banks 410.” (par. 0054; fig. 4 and related text)]
an identifier, [Nation teaches “Configuration registers 440 include a number of configuration entries 450 that identify individual regions of memory supported by memory appliance 400.” (par. 0057; fig. 4 and related text), including Entry identifiers 450a and VMIDs 451a]
a base address [Nation teaches “VM Base Address” 452 (par. 0057; fig. 4 and related text)],
a bound address, [Nation teaches “Memory range 453 indicates the amount of memory starting at virtual machine base address 452 that is identified by the particular configuration entry 450” (par. 0057; fig. 4 and related text); thus, the range representing a bound or last address identified by the configuration entry]
a delete size, [Nation teaches “ Page size 455 identifies a memory page granularity that allows the physical memory in memory banks 410 to be fragmented across the set of virtual machines. By doing this, large contiguous ranges of the physical memory do not have to be available for mapping to the virtual machine memory spaces. Rather, the mapped physical memory may consist of a number of smaller, non-contiguous regions that are combined to provide the range designed by the particular configuration entry 450.” (par. 0057; fig. 4 and related text)] …
two indicators of each of the registers [Nation teaches “a set of access attributes 454… Access attributes 454 identify the access rights to the identified memory region. Such access attributes may be, but are not limited to, read only or read/write. “ (par. 0057; fig. 4 and related text) where access attributes correspond to the claimed indicators].
Nation teaches [base and range addresses forming address spaces (par. 0061)] but does not expressly disclose a head pointer or a tail pointer.
With respect to reading as consumed data and writing as produced data, and to a head pointer and a tail pointer, Narad teaches [a shared memory organized as rings, where ring descriptors include head and tail pointers (pars. 0025 and 0034; figs. 1-2 and related text) and where “In this example, ring users 31a and 31b write to ring 41a (which is read by the GPP 14), ring user 31c writes to ring 41b (which is also read by GPP 14) and the GPP 14 writes to ring 41c (which in turn is read by ring user 31c). Thus, ring users 31a and 31b are producers and the GPP 14 is a consumer with respect to ring 41a, ring user 31 is a producer and the GPP 14a consumer with respect to ring 41b, and the GPP 14 is a producer and ring 31c is a consumer with respect to ring 41c.” (par. 0037)].
Nation and Narad are analogous art because they are from the same field of endeavor of memory access and control.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Nation to have the managing table include a head pointer and a tail pointer, as taught by Narad, since doing so would provide the benefit of facilitating memory access operations, as [“The head descriptor provides a pointer to the next entry or location to be read from the corresponding ring, while the tail pointer provide the next entry or location to be written in the corresponding ring” (par. 0034), where agents may share memory without contention (par. 0067)].
Therefore, it would have been obvious to combine Nation and Narad for the benefit of creating a storage system/method to obtain the invention as specified in claim 1.
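As a purely illustrative aside (not part of the cited references or the claims), the head/tail ring mechanism quoted from Narad above can be sketched as follows; all identifiers (`Ring`, `put`, `get`) are hypothetical and appear in neither reference:

```python
# Illustrative sketch only: a bounded ring with a head (consume) pointer and
# a tail (produce) pointer located between a base address and a bound address,
# loosely in the manner described by Narad. Hypothetical names throughout.
class Ring:
    def __init__(self, base, size):
        self.base = base              # base address of the ring region
        self.bound = base + size      # bound address (one past the last slot)
        self.head = base              # next entry to be read (consumer side)
        self.tail = base              # next entry to be written (producer side)
        self.buf = {}

    def _advance(self, ptr):
        # Wrap the pointer back to the base when it reaches the bound.
        ptr += 1
        return self.base if ptr >= self.bound else ptr

    def put(self, item):
        # Produce: write at the tail, then advance the tail pointer.
        if self._advance(self.tail) == self.head:
            raise BufferError("ring full")
        self.buf[self.tail] = item
        self.tail = self._advance(self.tail)

    def get(self):
        # Consume: read at the head, then advance the head pointer.
        if self.head == self.tail:
            raise BufferError("ring empty")
        item = self.buf.pop(self.head)
        self.head = self._advance(self.head)
        return item
```

In this sketch the producer and consumer never pass one another: data is available only between the head and tail pointers, both of which stay within [base, bound).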
2. The computing system of claim 1, wherein a corresponding one of the registers has a reserved memory space in the memory storage, the reserved memory space is defined by the base address and the bound address [Nation teaches “Virtual machine base address 452 identifies a beginning region of the real memory space of the respective virtual machine that is identified by the particular configuration entry 450. Memory range 453 indicates the amount of memory starting at virtual machine base address 452 that is identified by the particular configuration entry 450.” (par. 0057) and teaches “ comparing the virtual machine identifications against the identification of the requesting virtual machine, and then comparing the virtual machine base address(es) and range(s) that are supported for that particular virtual machine to determine whether the real address falls within the address space supported by memory appliance 400” (par. 0061) where the memory space assigned to the virtual machine based on the base address and range is interpreted as reserved for the virtual machine. Note the range represents a bound address or the last address assigned to the virtual machine. Narad teaches memory portions organized as rings (figs. 1-2 and related text) where “[0035] Each ring can be independently configured for size and can be independently located in memory (i.e., the different rings may not reside in a contiguous region of memory). Several techniques are applicable to ring size configuration. For example, a ring or group of rings could be configured by a control register indicating the ring size. Alternately, the ring size may be stored as data in a ring's descriptor. The size and the alignment of each memory array representing a ring may be restricted to a power of 2 to allow the full pointer to be stored in one location in the ring descriptor. 
By using the ring-size to determine which high-order bits to hold constant and which to include in the incrementing pointer, a ring base and an incrementing index for each ring can be stored efficiently in the ring's descriptor. Alternatively, one could support arbitrary alignment and/or arbitrary size of independently located rings by storing the ring upper_bound address and the ring size (or equivalently the ring_base and the ring size) and when the boundary is reached, the pointer is reset to the bound minus the size (or equivalently is set to the base value).”].
3. The computing system of claim 1, wherein the corresponding one of the registers has an available data with a start and an end, the start is pointed by the head pointer, and the end is pointed by the tail pointer, the head pointer and the tail pointer are located between the base address and the bound address [Nation teaches “[0057] Network controller 430 includes a set of configuration registers 440 that are programmable and used to identify memory regions that are supported by memory appliance 400. By programming configuration registers 440, memory appliance 400 can be programmed to operate as the main memory for a number of different virtual machines, with provisioned physical memory spaces assigned to respective virtual machines. Configuration registers 440 include a number of configuration entries 450 that identify individual regions of memory supported by memory appliance 400.” Thus, each region of memory identified by configuration registers is available to the given virtual machine (see par. 0061) but Nation does not expressly refer to these regions having a head and tail pointers; however, Narad teaches “[0041] Referring now to FIGS. 4-6, the head descriptor 50 contains data private to the consumer, the tail descriptor 52 contains data private to the producer and the public descriptor 54 contains a public version of the produce pointer communicated to the consumer. The head (consume) pointer stored in the head pointer field 70 provides the address of the next item (entry) to be read from the ring by a consume access operation (e.g., based on a ME generated `get` command). The tail pointer stored in the Tail_Ptr field 80 contains the address of the next item to be written to the ring by a produce access operation (e.g., as generated by a ME `put` command). In a preferred embodiment the head and tail pointers are initialized with the physical address (location in the shared memory) of the base of the ring data storage region 41. 
The Prev_Tail field 72 stores the most recently cached value of the public tail pointer. The C_Count 74 contains the amount of data (number of entries) on the ring available for a consume access operation.” Where the head and tail are within each memory ring (see figs. 1-2 and related text) which includes address bounds (see par. 0035)].
4. The computing system of claim 3, wherein in a read operation for the corresponding one of the registers to read the consumed data, a read pointer is located between the head pointer and the tail pointer associated with the available data [Narad teaches “The head descriptor provides a pointer to the next entry or location to be read from the corresponding ring, while the tail descriptor provides a pointer the next entry or location to be written in the corresponding ring, as indicated in FIG. 2 for ring `n`” (par. 0034) “[0041] Referring now to FIGS. 4-6, the head descriptor 50 contains data private to the consumer, the tail descriptor 52 contains data private to the producer and the public descriptor 54 contains a public version of the produce pointer communicated to the consumer. The head (consume) pointer stored in the head pointer field 70 provides the address of the next item (entry) to be read from the ring by a consume access operation (e.g., based on a ME generated `get` command). ” where data that may be consumed is considered available].
5. The computing system of claim 4, wherein the read operation for the corresponding one of the registers is performed based on a predefined read-ordering for the computing engines [Nation teaches “[0039] The present inventions are related to systems and methods for storing and accessing information, and more particularly to systems and methods for providing a randomly accessible memory that may be shared across multiple virtual machines or processors.” “[0069] Based on the disclosure provided herein, one of ordinary skill in the art will recognize that two or more virtual machines can share the same physical blocks in the memory appliance. The shared region is incorporated into the memory space of each of the accessing virtual machines such that an access in the distinct memory space of one virtual machine will access the overlapping memory region and an access to the distinct memory space of another virtual machine will access the same overlapping memory region. In some embodiments of the present invention where the overlapped region is a read/write region, memory coherency considerations exist. Depending on the coherence control point of the virtual machine, and on whether multiple machines are coherently sharing the memory of the appliance, the memory appliance may enforce coherence of any copies of the data that are cached in the virtual machine. For example, if the virtual machine maintains the coherence control point outside of the memory appliance, then the memory appliance simply responds to read and write accesses without regard for coherence. The coherence control point (i.e., one of the virtual machines associated with the memory appliance, another virtual machine, or external memory controller) outside of the memory appliance is then responsible for maintaining any necessary directory of pointers to cached copies, invalidating cached copies, enforcing order, or the like. 
In this case the memory appliance acts much as a basic memory controller would in a traditional computer system. As another example, the memory appliance may act as the memory coherence point for its portion of memory space in the virtual machine. In such a case, all accesses to the memory appliance space come directly to the memory appliance and the memory appliance is responsible for maintaining a directory of pointers to cached copies, invalidating cache copies when necessary, enforcing order, and the like…”. Where enforcing coherency order by the processors accessing data corresponds to maintaining read ordering by the processors or engines. Narad also teaches “[0067] Alternately, the different producers and consumers may independently access the ring data structures of a given ring. In such an implementation, mechanisms (e.g., a mutual-exclusion lock ("mutex")) may be used to resolve contention issues between the agents. For example, the head and tail descriptors may be protected by mutual-exclusion (mutex) locks that restrict access to the descriptors to one respective consumer or producer agent at a time. Alternately, mutexes may be used at finer granularity. For instance, one mutex may lock the private consumer pointer while another locks the private consumer credit count. Additionally, the multiple agents may maintain their own credit pools that they contribute to/take from the private producer/consumer credit pools.” Thus maintaining read order among agents].
7. The computing system of claim 1, wherein the corresponding one of the registers has a first available data with an end pointed by the tail pointer and a second available data with a start pointed by the head pointer, and an available memory space between the first available data and the second available data is defined by the tail pointer and the head pointer [Nation teaches “[0057] Network controller 430 includes a set of configuration registers 440 that are programmable and used to identify memory regions that are supported by memory appliance 400. By programming configuration registers 440, memory appliance 400 can be programmed to operate as the main memory for a number of different virtual machines, with provisioned physical memory spaces assigned to respective virtual machines. Configuration registers 440 include a number of configuration entries 450 that identify individual regions of memory supported by memory appliance 400.” Thus, each region of memory identified by configuration registers is available to the given virtual machine (see par. 0061) but Nation does not expressly refer to these regions having a head and tail pointers; however, regarding these limitations, Narad teaches “[0041] Referring now to FIGS. 4-6, the head descriptor 50 contains data private to the consumer, the tail descriptor 52 contains data private to the producer and the public descriptor 54 contains a public version of the produce pointer communicated to the consumer. The head (consume) pointer stored in the head pointer field 70 provides the address of the next item (entry) to be read from the ring by a consume access operation (e.g., based on a ME generated `get` command). 
The tail pointer stored in the Tail_Ptr field 80 contains the address of the next item to be written to the ring by a produce access operation (e.g., as generated by a ME `put` command). In a preferred embodiment the head and tail pointers are initialized with the physical address (location in the shared memory) of the base of the ring data storage region 41. The Prev_Tail field 72 stores the most recently cached value of the public tail pointer. The C_Count 74 contains the amount of data (number of entries) on the ring available for a consume access operation.”].
8. The computing system of claim 7, wherein in a write operation for the corresponding one of the registers to write the produced data, a write pointer is located within the available memory space [Narad teaches “The tail pointer stored in the Tail_Ptr field 80 contains the address of the next item to be written to the ring by a produce access operation (e.g., as generated by a ME `put` command).” (par. 0041)].
9. The computing system of claim 8, wherein the write operation for the corresponding one of the registers is performed based on a predefined write-ordering for the computing engines [Nation teaches “[0039] The present inventions are related to systems and methods for storing and accessing information, and more particularly to systems and methods for providing a randomly accessible memory that may be shared across multiple virtual machines or processors.” “[0069] Based on the disclosure provided herein, one of ordinary skill in the art will recognize that two or more virtual machines can share the same physical blocks in the memory appliance. The shared region is incorporated into the memory space of each of the accessing virtual machines such that an access in the distinct memory space of one virtual machine will access the overlapping memory region and an access to the distinct memory space of another virtual machine will access the same overlapping memory region. In some embodiments of the present invention where the overlapped region is a read/write region, memory coherency considerations exist. Depending on the coherence control point of the virtual machine, and on whether multiple machines are coherently sharing the memory of the appliance, the memory appliance may enforce coherence of any copies of the data that are cached in the virtual machine. For example, if the virtual machine maintains the coherence control point outside of the memory appliance, then the memory appliance simply responds to read and write accesses without regard for coherence. The coherence control point (i.e., one of the virtual machines associated with the memory appliance, another virtual machine, or external memory controller) outside of the memory appliance is then responsible for maintaining any necessary directory of pointers to cached copies, invalidating cached copies, enforcing order, or the like. 
In this case the memory appliance acts much as a basic memory controller would in a traditional computer system. As another example, the memory appliance may act as the memory coherence point for its portion of memory space in the virtual machine. In such a case, all accesses to the memory appliance space come directly to the memory appliance and the memory appliance is responsible for maintaining a directory of pointers to cached copies, invalidating cache copies when necessary, enforcing order, and the like…”. Where enforcing coherency order by the processors storing data corresponds to maintaining write ordering by the processors or engines. Narad also teaches “[0067] Alternately, the different producers and consumers may independently access the ring data structures of a given ring. In such an implementation, mechanisms (e.g., a mutual-exclusion lock ("mutex")) may be used to resolve contention issues between the agents. For example, the head and tail descriptors may be protected by mutual-exclusion (mutex) locks that restrict access to the descriptors to one respective consumer or producer agent at a time. Alternately, mutexes may be used at finer granularity. For instance, one mutex may lock the private consumer pointer while another locks the private consumer credit count. Additionally, the multiple agents may maintain their own credit pools that they contribute to/take from the private producer/consumer credit pools.” Thus maintaining write order among agents].
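As a purely illustrative aside (not part of the cited references or the claims), the mutex-protected descriptor scheme quoted above from Narad (par. 0067), in which separate locks serialize producer-side and consumer-side access and thereby maintain read/write ordering among agents, can be sketched as follows; all identifiers are hypothetical:

```python
# Illustrative sketch only: one lock guards the consumer-side (head)
# descriptor and another guards the producer-side (tail) descriptor, so that
# multiple producer/consumer agents access each side one at a time, loosely
# in the manner of Narad par. 0067. Hypothetical names throughout.
import threading
from collections import deque

class SharedRing:
    def __init__(self):
        self.entries = deque()
        self.head_mutex = threading.Lock()  # serializes consumer agents
        self.tail_mutex = threading.Lock()  # serializes producer agents

    def produce(self, item):
        # Only one producer updates the tail descriptor at a time.
        with self.tail_mutex:
            self.entries.append(item)

    def consume(self):
        # Only one consumer updates the head descriptor at a time.
        with self.head_mutex:
            if not self.entries:
                return None
            return self.entries.popleft()
```

Because each side is serialized independently, entries are consumed in the order they were produced, which is the ordering property relied on above.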
14. A memory managing method for a computing system, wherein the computing system includes a memory device and a plurality of computing engines, the memory managing method comprising: storing a plurality of consumed data by a memory storage of the memory device; utilizing a plurality of registers to manage a plurality of memory spaces in the memory storage, by a memory controller of the memory device; and executing a plurality of computations, reading the consumed data from the memory storage for processing, and writing the processed data as produced data to the memory storage, by the computing engines; wherein a managing table is utilized by the memory controller to record an identifier, a base address, a bound address, a delete size, a head pointer, a tail pointer, and two indicators of each of the registers [The rationale in the rejection of claim 1 is herein incorporated].
15. The memory managing method of claim 14, wherein a corresponding one of the registers has a reserved memory space in the memory storage, the reserved memory space is defined by the base address and the bound address [The rationale in the rejection of claim 2 is herein incorporated].
16. The memory managing method of claim 14, wherein the corresponding one of the registers has an available data with a start and an end, the start is pointed by the head pointer, and the end is pointed by the tail pointer, the head pointer and the tail pointer are located between the base address and the bound address [The rationale in the rejection of claim 3 is herein incorporated].
17. The memory managing method of claim 16, further comprising: in a read operation for the corresponding one of the registers to read the consumed data, locating a read pointer between the head pointer and the tail pointer associated with the available data [The rationale in the rejection of claim 4 is herein incorporated].
18. The memory managing method of claim 14, wherein the corresponding one of the registers has a first available data with an end pointed by the tail pointer and a second available data with a start pointed by the head pointer, and an available memory space between the first available data and the second available data is defined by the tail pointer and the head pointer [The rationale in the rejection of claim 7 is herein incorporated].
19. The memory managing method of claim 18, further comprising: in a write operation for the corresponding one of the registers to write the produced data, locating a write pointer within the available memory space [The rationale in the rejection of claim 8 is herein incorporated].
Claims 6 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Nation et al. (US 2010/0161908) in view of Narad et al. (US 2006/0236011), as applied in the rejection of claims 4 and 8 above, and further in view of Lee et al. (US 2017/0256016).
6. The computing system of claim 4, wherein when the read operation has a read size greater than a size of the available data, the read operation is stalled [Nation teaches “[0061] In operation, a real address is received from a requesting virtual machine over the network via network interface controller 430. Network interface controller 430 may use configuration registers 440 to determine whether it supports the requested memory location. This may be done by comparing the virtual machine identifications against the identification of the requesting virtual machine, and then comparing the virtual machine base address(es) and range(s) that are supported for that particular virtual machine to determine whether the real address falls within the address space supported by memory appliance 400. In one particular embodiment of the present invention, a requested address includes some number of bits of the global address range, followed by some number of bits indicating a particular address within the virtual machine. In the case where multiple servers are dynamically combined into a virtual server, the network routing ID can include multiple values taken from a set of valid values according to the assignment of hardware compute resources to the virtual machine. This set of valid values can change over time if the set of compute resources is dynamically configurable. Where the requested address does fall within the supported address space, the real address is used to determine a corresponding physical address within memory banks 410.”] but does not expressly disclose when the read operation has a read size greater than a size of the available data, the read operation is stalled; however, regarding these limitations, Lee teaches [“[0074] Each circular buffer read client's SXB reader 906 reads data from SRAM memories 607a-607x via the sparse crossbar 904. Reads are performed from a physical memory region between BASE ADDRESS and BASE ADDRESS+SIZE. 
Once a memory region is consumed, the tail pointer for the respective circular buffer read client is updated and the tail pointer broadcaster 918 is notified of the updated tail pointer value. The head pointer from the head pointer monitor 920 is used to stall reads that are outside of the valid range defined by the head pointer. Stalled reads are resumed once a change in the head pointer puts the read within the valid range.”].
Nation, Narad and Lee are analogous art because they are from the same field of endeavor of memory access and control.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Nation and Narad to include stalling the read operation when the read operation has a read size greater than a size of the available data, as taught by Lee, since doing so would provide memory protection and prevent read accesses outside permitted ranges.
Therefore, it would have been obvious to combine Nation, Narad and Lee for the benefit of creating a storage system/method to obtain the invention as specified in claim 6.
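For illustration only (not part of the prior art record), the read-side stall mechanism quoted from Lee's paragraph [0074] can be sketched as follows; all identifiers are hypothetical and do not appear in any cited reference:

```python
class CircularBufferReader:
    """Illustrative sketch (hypothetical names) of the head/tail-pointer
    scheme in Lee: reads outside the valid range defined by the head
    pointer are stalled until the head pointer advances."""

    def __init__(self, size):
        self.size = size           # SIZE: capacity of the memory region
        self.data = [None] * size  # region from BASE ADDRESS to BASE ADDRESS + SIZE
        self.head = 0              # total items written (updated by the writer)
        self.tail = 0              # total items consumed (updated by the reader)

    def write(self, items):
        for item in items:
            self.data[self.head % self.size] = item
            self.head += 1         # head pointer update is broadcast to readers

    def try_read(self, read_size):
        # When the read size exceeds the available data, the read is stalled;
        # a stall is modeled here by returning None instead of blocking.
        available = self.head - self.tail
        if read_size > available:
            return None
        out = [self.data[(self.tail + i) % self.size] for i in range(read_size)]
        self.tail += read_size     # tail pointer update frees the consumed region
        return out
```

A stalled read would resume once a later write advances the head pointer so that the requested range becomes valid, mirroring the quoted passage.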
10. The computing system of claim 8, wherein when the write operation has a write size greater than a size of the available memory space, the write operation is stalled [Nation teaches “[0061] In operation, a real address is received from a requesting virtual machine over the network via network interface controller 430. Network interface controller 430 may use configuration registers 440 to determine whether it supports the requested memory location. This may be done by comparing the virtual machine identifications against the identification of the requesting virtual machine, and then comparing the virtual machine base address(es) and range(s) that are supported for that particular virtual machine to determine whether the real address falls within the address space supported by memory appliance 400. In one particular embodiment of the present invention, a requested address includes some number of bits of the global address range, followed by some number of bits indicating a particular address within the virtual machine. In the case where multiple servers are dynamically combined into a virtual server, the network routing ID can include multiple values taken from a set of valid values according to the assignment of hardware compute resources to the virtual machine. This set of valid values can change over time if the set of compute resources is dynamically configurable. Where the requested address does fall within the supported address space, the real address is used to determine a corresponding physical address within memory banks 410.”] but does not expressly disclose when the write operation has a write size greater than a size of the available memory space, the write operation is stalled; however, regarding these limitations, Lee teaches [“[0071] Referring back to FIG. 9, the interaction between components of vCF 905 during basic operation is illustrated. 
Before a transfer through a circular buffer begins, circular buffer write clients and circular buffer read clients are configured by a controller with the circular buffer parameters 925 (BASE ADDRESS 926, SIZE 927, WRITER ID 929 and, assuming the single circular buffer reader implementation, READER ID 928). The circular buffer write client's SXB writer 502 writes data into SRAM memories 607a-607x via the SXB 904. The SXB writer 902 writes to a physical memory region between BASE ADDRESS 926 and BASE ADDRESS 926 plus SIZE 927. Upon a successful write, the head pointer 801 is updated and the head pointer broadcaster 910 is notified of the updated head pointer value. The circular buffer's tail pointer 803 from the tail pointer monitor 912 is used to stall writes that are outside of the legal or valid range defined by the tail pointer 803. Stalled writes are resumed once a change in the tail pointer 803 puts the write within the valid range.”].
Nation, Narad and Lee are analogous art because they are from the same field of endeavor of memory access and control.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Nation and Narad to include stalling the write operation when the write operation has a write size greater than a size of the available memory space, as taught by Lee, since doing so would provide memory protection and prevent write accesses outside permitted ranges.
Therefore, it would have been obvious to combine Nation, Narad and Lee for the benefit of creating a storage system/method to obtain the invention as specified in claim 10.
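For illustration only (not part of the prior art record), the complementary write-side stall quoted from Lee's paragraph [0071] can be sketched as follows; all identifiers are hypothetical:

```python
class CircularBufferWriter:
    """Illustrative sketch (hypothetical names) of the write-side rule in
    Lee: writes outside the valid range defined by the tail pointer are
    stalled until the reader consumes data and frees space."""

    def __init__(self, size):
        self.size = size
        self.data = [None] * size
        self.head = 0  # total items written
        self.tail = 0  # total items consumed

    def free_space(self):
        # Space not yet consumed by the reader is unavailable to the writer.
        return self.size - (self.head - self.tail)

    def try_write(self, items):
        # A write larger than the available memory space is stalled
        # (modeled as returning False; hardware would hold the request).
        if len(items) > self.free_space():
            return False
        for item in items:
            self.data[self.head % self.size] = item
            self.head += 1  # head pointer broadcaster notifies the reader
        return True

    def consume(self, n):
        self.tail += n  # tail pointer monitor lets stalled writes resume
```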
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Nation et al. (US 2010/0161908) in view of Narad et al. (US 2006/0236011) as applied in the rejection of claim 1 and further in view of Frazier et al. (US 9,547,484).
11. The computing system of claim 1, wherein the memory controller has a compiler for maintaining the base address, the bound address and the delete size in a software level [Nation teaches maintaining base address, memory range and size (fig. 4 and related text) and explains “As another example, such memory appliances may be employed in relation to a software based virtual machine monitor (e.g., a hypervisor system) that allows multiple operating systems to run on a host computer at the same time. In such a system where memory can be allocated externally to a server, `n` processors (e.g., servers) may be allowed to share the memory appliance(s),” (par. 0053)] but does not expressly disclose a compiler; however, regarding these limitations, Frazier teaches [“compiler 120 that compiles software (e.g., application 107, source code 108, hypervisor 106, etc.), and the compiler 120 is configured to execute on the processor 101 of the computer system 100” (col. 10, lines 56-60)].
Nation, Narad and Frazier are analogous art because they are from the same field of endeavor of memory access and control.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Nation and Narad to include a compiler to compile software such as the virtual machine monitor or hypervisor of the combination of Nation and Narad, as taught by Frazier, since doing so would provide the benefit of optimized compiling and execution of the hypervisor software (col. 10, lines 56-60).
Therefore, it would have been obvious to combine Nation, Narad and Frazier for the benefit of creating a storage system/method to obtain the invention as specified in claim 11.
Claims 12-13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Nation et al. (US 2010/0161908) in view of Narad et al. (US 2006/0236011) as applied in the rejection of claim 1 and further in view of Chandra et al. (US 2011/0208928).
12. The computing system of claim 1, wherein the indicators include a “SEND busy” indicator and a “RECV busy” indicator, and the memory controller has a L1 managing unit for controlling the head pointer, the tail pointer, the “SEND busy” indicator and the “RECV busy” indicator in a hardware level [Note the phrasing “for controlling” does not require the managing unit actually perform the listed functionality but merely that the functionality not be expressly precluded. See MPEP 2111.04 and MPEP 2114. Nation teaches “[0057] Network controller 430 includes a set of configuration registers 440 that are programmable and used to identify memory regions that are supported by memory appliance 400. By programming configuration registers 440, memory appliance 400 can be programmed to operate as the main memory for a number of different virtual machines, with provisioned physical memory spaces assigned to respective virtual machines. Configuration registers 440 include a number of configuration entries 450 that identify individual regions of memory supported by memory appliance 400.” Thus, teaching a controller including a managing unit. Narad teaches “[0067] Alternately, the different producers and consumers may independently access the ring data structures of a given ring. In such an implementation, mechanisms (e.g., a mutual-exclusion lock ("mutex")) may be used to resolve contention issues between the agents. For example, the head and tail descriptors may be protected by mutual-exclusion (mutex) locks that restrict access to the descriptors to one respective consumer or producer agent at a time. Alternately, mutexes may be used at finer granularity. For instance, one mutex may lock the private consumer pointer while another locks the private consumer credit count. Additionally, the multiple agents may maintain their own credit pools that they contribute to/take from the private producer/consumer credit pools.” (see par.
0068), thus teaching a lock or busy status for either producer or consumer according to the lock or mutex] but the combination of Nation and Narad does not expressly disclose a “SEND busy” indicator and a “RECV busy” indicator; however, regarding these limitations, Chandra teaches [“The status of each buffer, READY_TO_WRITE, READY_TO_READ, or BUSY, is also indicated.” (par. 0042) “[0044] At time t2, the reading module 214 starts reading data from the snapshot image partition. The three producer threads pick up the first free buffers 502-506, after which the write pointer 516 points to buffer 508, and the status of buffers 502-506 is updated to BUSY… [0045] The read operations for the producer threads may complete in any order, depending on thread scheduling and network latency. At time t3, data has been read into buffer 504, and the status of buffer 504 has been updated to READY_TO _READ, indicating that data in buffer 504 can now be read by the consumer thread, while buffers 502 and 506 are still BUSY. In addition, a producer thread picks up buffer 508, changing its status to BUSY, and the queue write pointer 516 now points to buffer 510, the next free buffer in the sequence… [0046] It should be noted that although buffer 504 is prepared to be read, the queue read pointer 514 points to buffer 502, and hence the consumer thread must wait until buffer 502 is ready to be read, ensuring that data is read by the consumer thread in the physical snapshot image block sequence.” (see fig. 5 and related text) where busy and not ready to read status corresponds to a SEND busy indicator and busy and not yet ready to write status corresponds to a RECV busy indicator].
Nation, Narad and Chandra are analogous art because they are from the same field of endeavor of memory access and control.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify the combination of Nation and Narad to include a “SEND busy” indicator and a “RECV busy” indicator as taught by Chandra since doing so would provide the benefit of facilitating read and write operation ordering in a producer-consumer configuration.
Therefore, it would have been obvious to combine Nation, Narad and Chandra for the benefit of creating a storage system/method to obtain the invention as specified in claim 12.
13. The computing system of claim 12, wherein only one write operation associated with the “RECV busy” indicator or only one read operation associated with the “SEND busy” indicator are executed at the same time for each of the registers [According to Chandra, for each of the buffers in buffer queue, only one update to busy, ready to read or ready to write is done at a time (see fig. 5 and related text)].
20. The memory managing method of claim 14, wherein the indicators include a “SEND busy” indicator and a “RECV busy” indicator, and the memory managing method further comprising: controlling the head pointer, the tail pointer, the “SEND busy” indicator and the “RECV busy” indicator in a hardware level, by a L1 managing unit of the memory controller; wherein only one write operation associated with the “RECV busy” indicator or only one read operation associated with the “SEND busy” indicator is executed at the same time for each of the registers [The rationale in the rejection of claims 12-13 is herein incorporated].
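For illustration only (not part of the prior art record), the per-buffer status scheme quoted from Chandra (paragraphs 0042-0046) can be sketched as follows; all identifiers are hypothetical:

```python
from enum import Enum

class Status(Enum):
    READY_TO_WRITE = 0
    BUSY = 1
    READY_TO_READ = 2

class BufferQueue:
    """Illustrative sketch (hypothetical names) of Chandra's scheme: a
    buffer being filled is BUSY, and the consumer must wait at the read
    pointer for READY_TO_READ, which preserves in-order reads even when
    producer threads complete out of order."""

    def __init__(self, n):
        self.status = [Status.READY_TO_WRITE] * n
        self.data = [None] * n
        self.write_ptr = 0  # next free buffer for a producer thread
        self.read_ptr = 0   # next buffer the consumer may drain, in order

    def acquire_for_write(self):
        # A producer thread picks up the next free buffer, marking it BUSY.
        if self.status[self.write_ptr] is not Status.READY_TO_WRITE:
            return None
        idx = self.write_ptr
        self.status[idx] = Status.BUSY
        self.write_ptr = (self.write_ptr + 1) % len(self.data)
        return idx

    def complete_write(self, idx, payload):
        self.data[idx] = payload
        self.status[idx] = Status.READY_TO_READ

    def try_read(self):
        # The consumer waits until the buffer at the read pointer is ready,
        # even if a later buffer completed first.
        idx = self.read_ptr
        if self.status[idx] is not Status.READY_TO_READ:
            return None
        self.status[idx] = Status.READY_TO_WRITE
        self.read_ptr = (idx + 1) % len(self.data)
        return self.data[idx]
```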
RELEVANT ART CITED BY THE EXAMINER
The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant’s art and those arts considered reasonably pertinent to applicant’s disclosure. See MPEP 707.05(c).
Dally et al. (US 2014/0380002) teaches [“[0035] In one embodiment, a portion of the queue state block also includes a field related to pending request handling so that a list of pending put requests and a list of pending get requests that have been deferred. As previously explained, a request may fail because the put reserve pointer 230 or the get reserve pointer 240 cannot be advanced.” “[0036]… A pending put state (PP) is a pointer to a data structure that stores the pending put requests and a pending get state (PG) is a pointer to a data structure that stores the pending get requests. In one embodiment, the pending put state and the pending get state may each be an offset relative to the pending state address. Handling pending requests instead of simply returning a fail response for requests that cannot be processed may improve the efficiency of accessing the two-phase queue 200 because producers and/or consumers do not need to "retry" requests that failed.” (see fig. 3A and related text)].
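For illustration only (not part of the prior art record), the pending-request handling quoted from Dally's paragraphs [0035]-[0036], in which put and get requests that cannot advance a pointer are deferred rather than failed, can be sketched as follows; all identifiers are hypothetical:

```python
from collections import deque

class TwoPhaseQueue:
    """Illustrative sketch (hypothetical names): requests that cannot be
    processed are deferred onto pending lists and replayed later, so
    producers and consumers do not need to retry failed requests."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.pending_puts = deque()  # cf. Dally's pending put state (PP)
        self.pending_gets = deque()  # cf. Dally's pending get state (PG)

    def put(self, item):
        if len(self.items) >= self.capacity:
            self.pending_puts.append(item)  # defer rather than fail
            return False
        self.items.append(item)
        self._drain_pending_gets()
        return True

    def get(self, callback):
        if not self.items:
            self.pending_gets.append(callback)  # defer rather than fail
            return False
        callback(self.items.popleft())
        self._drain_pending_puts()
        return True

    def _drain_pending_gets(self):
        # A put may satisfy a deferred get.
        while self.pending_gets and self.items:
            self.pending_gets.popleft()(self.items.popleft())

    def _drain_pending_puts(self):
        # A get frees space that may admit a deferred put.
        while self.pending_puts and len(self.items) < self.capacity:
            self.items.append(self.pending_puts.popleft())
```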
CLOSING COMMENTS
a. STATUS OF CLAIMS IN THE APPLICATION
a(1) CLAIMS REJECTED IN THE APPLICATION
Per the instant office action, claims 1-20 have received a first action on the merits and are the subject of this first action non-final rejection.
b. DIRECTION OF FUTURE CORRESPONDENCES
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YAIMA RIGOL whose telephone number is (571)272-1232. The examiner can normally be reached Monday-Friday 9:00AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared I. Rutz can be reached on (571) 272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
November 6, 2025
/YAIMA RIGOL/
Primary Examiner, Art Unit 2135