DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Action is in response to communications filed 11/18/2024.
Claims 1-20 are pending.
Claims 1-20 are rejected.
Priority
Applicant’s claim of priority to foreign applications KR10-2024-0101522, filed 07/31/2024, and KR10-2023-0160264, filed 11/20/2023, is hereby acknowledged.
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
As required by MPEP § 609(C), the applicant’s submission of the Information Disclosure Statement dated 11/18/2024 is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by MPEP § 609(C)(2), a copy of the PTOL-1449, initialed and dated by the examiner, is attached to the instant Office action.
Drawings
The applicant’s drawings submitted on 11/18/2024 are acceptable for examination purposes.
Claim Objections
Claims 6 and 16 are objected to because of the following informalities:
Claim 6 recites “analyzing a memory access pattern type as a change in a page table is sensed by occurrence”. This limitation contains a grammatical error that affects its clarity. The Examiner suggests amending the limitation to clearly indicate the analyzing step and the elements involved.
Claim 16 contains the same informality as claim 6.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-5 and 11-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claim 3 recites “to a virtual memory area descriptor”. Claim 2, from which claim 3 depends, recites “of a virtual memory area descriptor” and therefore the recitation in claim 3 lacks proper antecedent basis with respect to claim 2.
Claim 4 recites “a memory access pattern ID and a memory access pattern type” and “a virtual memory area descriptor”. Claim 2, from which claim 4 depends, recites “a memory access pattern ID and a memory access pattern type” and “a virtual memory area descriptor”, and therefore the recitations in claim 4 lack proper antecedent basis with respect to claim 2.
Claim 5 recites “at an identical code location”. Claim 3 separately recites “a memory area corresponding to one of a code area”. The inconsistent language between the claims raises a clarity issue as to whether the “code location” in claim 5 refers to the same element as the “code area” in claim 3. For purposes of the current action, the terms are interpreted as referring to the same element, but it is suggested that the language of at least one of the claims be amended to improve clarity.
Claim 11 recites “managing memory areas based on a result of analysis”, whereas the claim previously recites “analyzing a memory area”. It is unclear how the assigning limitation is properly associated with the analyzing limitation, as the limitations vary in scope between a singular memory area and plural memory areas. Additionally, the present language does not clearly link the analyzing step to the recited “result of analysis”.
Dependent claims 12-14 do not resolve the issue.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 11-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea, namely mental processes, without significantly more.
Claim 11 recites “analyzing a memory area that is newly allocated as a certain function is called from an application or a container” and “assigning a memory access pattern ID and a memory access pattern type of a virtual memory area descriptor for managing memory areas based on a result of analysis.” These identified steps are acts of evaluating information that can be practically performed in the human mind. Thus, these steps fall within the “mental processes” grouping of abstract ideas.
This judicial exception is not integrated into a practical application because the additionally recited generic computer elements do not add a meaningful limitation to the abstract idea beyond merely indicating the field of use or technological environment. Specifically, the “analyzing” is an insignificant extra-solution activity wherein data is gathered, which is a well-understood, routine, and conventional computer function. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because “an application or a container” is a generic computer construct performing generic computer functions that are well-understood, routine, and conventional in the art. These functions are recognized as such by the court decisions identified in MPEP § 2106.05(d).
Claim 12 recites “assigning an identical memory access pattern ID and an identical memory access pattern type to a virtual memory area descriptor for a memory area corresponding to one of a code area or a stack area.” The identified step is an act of evaluating information that can be practically performed in the human mind. Thus, this step falls within the “mental processes” grouping of abstract ideas. This judicial exception is not integrated into a practical application because the additionally recited generic computer elements do not add a meaningful limitation to the abstract idea beyond merely indicating the field of use or technological environment.
Claim 13 recites “as a process is duplicated to create a child process from a parent process through a fork function, assigning a memory access pattern ID and a memory access pattern type identical to those of a duplication target virtual memory area descriptor to a virtual memory area descriptor that is duplicated together with the process.” The identified step is an act of evaluating information that can be practically performed in the human mind. Thus, this step falls within the “mental processes” grouping of abstract ideas. This judicial exception is not integrated into a practical application because the additionally recited generic computer elements do not add a meaningful limitation to the abstract idea beyond merely indicating the field of use or technological environment.
Claim 14 recites “assigning an identical memory access pattern ID and an identical memory access pattern type to memory areas allocated at an identical code location.” The identified step is an act of evaluating information that can be practically performed in the human mind. Thus, this step falls within the “mental processes” grouping of abstract ideas. This judicial exception is not integrated into a practical application because the additionally recited generic computer elements do not add a meaningful limitation to the abstract idea beyond merely indicating the field of use or technological environment.
The Examiner suggests amending the claims to incorporate a directed improvement to computer functionality, which would render the claims eligible. See MPEP § 2106.06(b).
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 6-9, and 15-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sen et al. (US 2021/0019069).
Regarding claim 1, Sen teaches an apparatus for managing a disaggregated memory based on memory access pattern recognition, comprising: a memory configured to store at least one program; and a processor configured to execute the program, ([0193] Memory subsystem 2620 represents the main memory of system 2600 and provides storage for code to be executed by processor 2610, or data values to be used in executing a routine. Memory subsystem 2620 can include one or more memory devices 2630 such as read-only memory (ROM), flash memory, one or more varieties of random access memory (RAM) such as DRAM, or other memory devices, or a combination of such devices. Memory 2630 stores and hosts, among other things, operating system (OS) 2632 to provide a software platform for execution of instructions in system 2600.) wherein the program is configured to: recognize a pattern of memory access to a disaggregated memory including a local memory and a remote memory of an application or a container and manage the disaggregated memory based on the recognized memory access pattern ([0118] ML engine 2004 could be trained at run time or loaded with neural engine weight coefficients or values from off-line training or weights used by another ML engine trained to identify access patterns by a requester tenant, application or device. For example, in a setting where the memory pool and compute nodes run a known workload, ML engine 2004 could be set up with a previously-generated training that is refined or modified based on run time memory access behaviors. [0119] Moving or copying data associated with predicted next or subsequent memory accesses from higher latency memory 2014 to lower or medium latency memory can reduce an amount of time needed to complete a memory access request as time to access the data can be reduced if the data is accessed from lower or medium latency memory compared to that of higher latency memory. 
ML engine 2004 can store access patterns of accessed memory addresses and lengths of accessed data in a memory mappings table 2008. [0128] FIG. 22 shows a system that can be used to manage data or content copying or movement between different tiers of memory or storage based on temperature of data or content. Platform 2200 can access locally attached memory and storage, disaggregated memory and storage, and/or remote memory and storage), and recognize memory access patterns for respective memory areas managed by at least one virtual memory area descriptor when the memory access pattern is recognized ([0180] The process can be performed by a memory management unit in connection with a memory access request. A memory access request can request a read or write operation involving local memory and/or remote memory. The memory access request can be issued by a CPU or other device (e.g., accelerator or network interface card). A virtual address can be associated with the memory access request. The virtual address can include a page identifier (ID) 2502, a sub-page identifier 2504, and a sub-page offset 2506. [0185] Some embodiments of the process of FIG. 25B may learn access patterns of sub-pages within a page.). Herein Sen discloses a memory subsystem which services access requests to a disaggregated memory system. It is explicitly noted that an access pattern of a tenant, application, or device is identified and stored, and is used to manage the storage of data in the disaggregated memory system. Sen also explicitly discloses detecting access patterns of an entity including a virtual machine or container. Furthermore, it is noted that the memory allocated to the requesting application, which accesses data via a virtual address that includes page and sub-page identifiers, can be analyzed to determine access patterns of sub-pages within a page.
On this basis, as it is additionally noted that access requests involve both local and remote memory, Sen further discloses identifying access patterns for respective memory areas managed by at least one virtual memory area descriptor as associated with and allocated to the application. In this manner, it is determined that Sen discloses all of the claimed features.
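For illustrative context only, and not as a characterization of either the pending claims or Sen, the general technique of recognizing a memory access pattern per memory area can be sketched in Python. All class names, method names, and thresholds below are hypothetical:

```python
class VMADescriptor:
    """Hypothetical stand-in for a virtual memory area descriptor:
    an address range plus a log of accesses observed within it."""
    def __init__(self, start, end):
        self.start, self.end = start, end
        self.access_log = []

    def contains(self, addr):
        return self.start <= addr < self.end


class PatternRecognizer:
    """Routes each access to the memory area that contains it and
    classifies each area's pattern from the observed strides."""
    def __init__(self, vmas):
        self.vmas = vmas

    def record_access(self, addr):
        for vma in self.vmas:
            if vma.contains(addr):
                vma.access_log.append(addr)
                return vma
        return None

    def classify(self, vma):
        log = vma.access_log
        if len(log) < 2:
            return "unknown"
        strides = [b - a for a, b in zip(log, log[1:])]
        # A constant positive stride suggests sequential access.
        if all(s == strides[0] and s > 0 for s in strides):
            return "sequential"
        return "random"
```

The point of the sketch is that pattern recognition is performed per area (per descriptor) rather than globally for the whole address space.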
Regarding claim 6, Sen further discloses the apparatus of claim 1, wherein the program is configured to optimize disaggregated memory performance based on a result of analyzing a memory access pattern type as a change in a page table is sensed by occurrence of a page fault or by a Memory Management Unit (MMU) notifier ([0173] Page faults that trigger allocation, mapping and permissions checking of a local physical page for a remote pooled page can occur when the page is first accessed and subsequent accesses to invalid sub-pages do not require a page fault. Rather, invalid sub-pages can be accessed from remote memory and copied to local memory. Identifying valid and invalid sub-pages can allow a page-based memory pooling system to store in local memory only a subset of the sub-pages of a remotely pooled page.). Herein Sen discloses performing memory management operations in response to page faults by allocating and moving data between local and remote storage. This is determined to be analogous to the optimizing of disaggregated memory performance as sensed by occurrence of a page fault.
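For illustrative context only, the sensing mechanism discussed above (a page-table change observed via a page fault or a notifier callback) can be sketched in a minimal Python model; the names and the callback scheme are hypothetical and only loosely analogous to MMU notifiers:

```python
class PageTable:
    """Toy page table mapping a virtual page number to a present bit.
    Registered callbacks are invoked whenever the table changes,
    playing a role loosely analogous to MMU notifiers."""
    def __init__(self):
        self.entries = {}
        self.listeners = []

    def register_notifier(self, callback):
        self.listeners.append(callback)

    def access(self, vpn):
        if self.entries.get(vpn, False):
            return "hit"
        # Page fault: map the page, then report the page-table change
        # to every registered listener.
        self.entries[vpn] = True
        for callback in self.listeners:
            callback(vpn)
        return "fault"
```

In this model, an analysis component would subscribe via `register_notifier` and observe each page-table change as it occurs.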
Regarding claim 7, Sen further discloses the apparatus of claim 6, wherein the program is configured to change a transfer unit of data to be transmitted between the remote memory and the local memory in optimizing the disaggregated memory performance ([0173]). Herein Sen explicitly notes copying data from remote memory to local memory in response to a page fault. This copying of data between remote and local storage is determined to be analogous to the changing of a transfer unit of data to be transmitted between the remote memory and the local memory.
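For illustrative context only, choosing a transfer unit based on a recognized pattern type can be sketched as follows; the function name and the unit sizes are arbitrary illustrative values, not drawn from the claims or the cited art:

```python
def choose_transfer_unit(pattern_type, base_unit=4096, max_unit=65536):
    """Select how many bytes to move per remote-to-local transfer.
    A sequential pattern amortizes transfer latency with a larger
    unit; a random pattern keeps the unit small so that bandwidth
    is not wasted on data that will not be used."""
    return max_unit if pattern_type == "sequential" else base_unit
```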
Regarding claim 8, Sen further discloses the apparatus of claim 6, wherein the program is configured to prefetch data stored in the remote memory, predicted to be subsequently accessed depending on the memory access pattern, to the local memory in optimizing the disaggregated memory performance ([0171] Various embodiments can be applied in use cases such as one or more of: (i) fetching the requested cache line first; (ii) fetching only a subset of the cache lines from a remote page; (iii) fetching cache lines in some arbitrary order determined by some prefetch scheme; (iv) learning the sub-page access pattern of an application and using this learned pattern to drive the use-cases (ii) and (iii) above). Herein Sen explicitly discloses using information of learned access patterns to execute on prefetching data from remote memory to store into local memory in order to optimize performance.
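For illustrative context only, prefetching based on a predicted next access can be sketched in Python; the stride-prediction heuristic and all names are hypothetical, and local and remote memory are modeled simply as dictionaries:

```python
def predict_next_page(history, window=3):
    """Predict the next page number from a constant-stride history;
    return None when no stable stride is observed."""
    if len(history) < window:
        return None
    recent = history[-window:]
    strides = {b - a for a, b in zip(recent, recent[1:])}
    return recent[-1] + strides.pop() if len(strides) == 1 else None


def prefetch(local, remote, history):
    """Copy the predicted next page from remote to local memory
    before it is actually requested."""
    nxt = predict_next_page(history)
    if nxt is not None and nxt in remote and nxt not in local:
        local[nxt] = remote[nxt]
    return nxt
```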
Regarding claim 9, Sen further discloses the apparatus of claim 6, wherein the program is configured to pin data to one of the local memory or the remote memory in optimizing the disaggregated memory performance ([0171] Various embodiments can be applied in use cases such as one or more of: … or (v) identifying pages that are “densely” accessed (e.g., most sub-pages are accessed) and using this information to help classify pages as “hot” or frequently accessed and deeming preferential treatment (e.g., by pinning these pages in local memory and reducing likelihood of eviction from local memory or cache).). Herein Sen explicitly discloses the capability of pinning pages in local memory in response to determinations that the pages are frequently accessed or “hot” in order to optimize storage performance.
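For illustrative context only, the density-based hot-page classification and pinning behavior discussed above can be sketched as follows; the threshold value and all names are hypothetical:

```python
def find_hot_pages(subpage_hits, subpages_per_page=16, density=0.75):
    """Classify a page as 'hot' when the fraction of its sub-pages
    that have been accessed meets a density threshold."""
    return {page for page, hits in subpage_hits.items()
            if len(hits) / subpages_per_page >= density}


def eviction_candidates(resident_pages, pinned):
    """Pinned (hot) pages are excluded from eviction consideration,
    effectively pinning them in local memory."""
    return [p for p in resident_pages if p not in pinned]
```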
Regarding claim 15, Sen teaches a method for managing a disaggregated memory based on memory access pattern recognition, comprising: recognizing a pattern of memory access to a disaggregated memory including a local memory and a remote memory of an application or a container; and managing the disaggregated memory based on the recognized memory access pattern ([0118] ML engine 2004 could be trained at run time or loaded with neural engine weight coefficients or values from off-line training or weights used by another ML engine trained to identify access patterns by a requester tenant, application or device. For example, in a setting where the memory pool and compute nodes run a known workload, ML engine 2004 could be set up with a previously-generated training that is refined or modified based on run time memory access behaviors. [0119] Moving or copying data associated with predicted next or subsequent memory accesses from higher latency memory 2014 to lower or medium latency memory can reduce an amount of time needed to complete a memory access request as time to access the data can be reduced if the data is accessed from lower or medium latency memory compared to that of higher latency memory. ML engine 2004 can store access patterns of accessed memory addresses and lengths of accessed data in a memory mappings table 2008. [0128] FIG. 22 shows a system that can be used to manage data or content copying or movement between different tiers of memory or storage based on temperature of data or content. Platform 2200 can access locally attached memory and storage, disaggregated memory and storage, and/or remote memory and storage), wherein, when the memory access pattern is recognized, memory access patterns are recognized for respective memory areas managed by at least one virtual memory area descriptor ([0180] The process can be performed by a memory management unit in connection with a memory access request. 
A memory access request can request a read or write operation involving local memory and/or remote memory. The memory access request can be issued by a CPU or other device (e.g., accelerator or network interface card). A virtual address can be associated with the memory access request. The virtual address can include a page identifier (ID) 2502, a sub-page identifier 2504, and a sub-page offset 2506. [0185] Some embodiments of the process of FIG. 25B may learn access patterns of sub-pages within a page.). Herein Sen discloses a memory subsystem which services access requests to a disaggregated memory system. It is explicitly noted that an access pattern of a tenant, application, or device, and additionally of a virtual machine or container, is identified and stored, and is used to manage the storage of data in the disaggregated memory system. Furthermore, it is noted that the memory allocated to the requesting application, which accesses data via a virtual address that includes page and sub-page identifiers, can be analyzed to determine access patterns of sub-pages within a page. On this basis, as it is additionally noted that access requests involve both local and remote memory, Sen further discloses identifying access patterns for respective memory areas managed by at least one virtual memory area descriptor as associated with and allocated to the application. Claim 15 is rejected on a similar basis as claim 1.
Regarding claim 16, Sen further discloses the method of claim 15, further comprising: optimizing disaggregated memory performance based on a result of analyzing a memory access pattern type as a change in a page table is sensed by occurrence of a page fault or by a Memory Management Unit (MMU) notifier ([0173]). Claim 16 is rejected on a similar basis as claim 6.
Regarding claim 17, Sen further discloses the method of claim 16, wherein optimizing the disaggregated memory performance comprises: changing a transfer unit of data to be transmitted between the remote memory and the local memory ([0173]). Claim 17 is rejected on a similar basis as claim 7.
Regarding claim 18, Sen further discloses the method of claim 16, wherein optimizing the disaggregated memory performance comprises: prefetching data stored in the remote memory, predicted to be subsequently accessed depending on the memory access pattern, to the local memory ([0171]). Claim 18 is rejected on a similar basis as claim 8.
Regarding claim 19, Sen further discloses the method of claim 16, wherein optimizing the disaggregated memory performance comprises: pinning data to one of the local memory or the remote memory ([0171]). Claim 19 is rejected on a similar basis as claim 9.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2-3, 5, 11-12, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Sen in view of Canepa (US 2022/0413708).
Regarding claim 2, Sen does not explicitly disclose the apparatus of claim 1, wherein the program is configured to assign a memory access pattern ID and a memory access pattern type of a virtual memory area descriptor newly created as a certain function is called from the application or the container. Regarding this limitation, Canepa discloses in Paragraphs [0121] and [0125] “[0121] PPVs may provide the SSD performance profile 701 with the ability to meet the QoS requirements of a plurality of different workloads in a multi-tenant environment and to determine if adding an additional application workload can meet both the existing workload QoS requirements along with the new application's QoS requirements. I/O requests also have a multitude of flavors, where each unique I/O request consumes a different amount of SSD resources, has different interactions with neighboring requests and has a different I/O completion time and latency profile. [0125] According to various embodiments, the PPV section of the performance profile includes a plurality of PPVs, where each PPV represents a particular workload with a particular I/O QoS, SSD state, and an I/O QoS regulation scheme to guarantee an I/O QoS across one or more applications. In some embodiments, the PPV workload data includes an identifier that indicates the represented workload is a specific application workload, facilitating a rapid match of an application workload to the PPV. In some embodiments, the PPV includes a scheme to map workload that differ from the workload described in the PPV onto the credit scheme described in the PPV.” Herein Canepa discloses that, when servicing a new workload, identifiers may be mapped between the requesting workload and an associated defined quality-of-service (QoS) scheme to identify an access pattern with the application or container. This enables the system to manage simultaneous processing of workloads to achieve QoS targets.
In this manner, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply identifiers of access pattern and type to a newly created workload, determined herein to be analogous to the virtual memory area descriptor newly created as a certain function is called from the application or container, in order to facilitate I/O processing between a plurality of client applications (Canepa [0218]). Sen and Canepa are analogous art because they are from the same field of endeavor of managing memory access operations.
Regarding claim 3, Canepa further discloses the apparatus of claim 2, wherein the program is configured to assign an identical memory access pattern ID and an identical memory access pattern type to a virtual memory area descriptor for a memory area corresponding to one of a code area or a stack area ([0078] The file system layer 303 provides an abstraction to organize information into separate files, each of which may be identified using a unique name. Each file system type may define its own structures and logic rules used to manage these groups of information and their names. The file system layer 303 may be used to control how data is stored and retrieved. [0080] The kernel-space may further include a credit regulation and monitoring module 310 that is in communication with the block layer 304 of the storage stack 302 via a storage stack interface 314. The credit regulation and monitoring module 310 provides rate regulation for various I/O operations. In an embodiment, in order to rate regulate I/O operations, collect statistics, and the like, the credit regulation and monitoring module 310 needs to be a part of the storage stack 302 or have the ability to hook into the storage stack 302.). Herein Canepa discloses applying I/O regulations for executing applications as part of the storage stack in kernel-space. As best understood from the claim language, the stored regulations associated with the storage stack are analogous to assigning an identical pattern ID and pattern type to a virtual memory area descriptor for an executing application as part of the memory area, including the storage or code area.
Regarding claim 5, Canepa further discloses the apparatus of claim 2, wherein the program is configured to assign an identical memory access pattern ID and an identical memory access pattern type to memory areas allocated at an identical code location ([0121] and [0125]). Herein Canepa discloses assigning identifiers per specific application workload, which includes all corresponding memory allocations. As best understood from the claim language, the memory areas pertaining to the application workload are all identified by an identical pattern ID and pattern type, as they are associated with the specific workload.
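For illustrative context only, assigning identical pattern identifiers to memory areas allocated at an identical code location can be sketched as an allocator keyed by allocation site; all names and the example code locations are hypothetical:

```python
import itertools


class PatternIDAllocator:
    """Hands out (pattern ID, pattern type) pairs keyed by the code
    location that performed the allocation, so memory areas allocated
    at an identical code location receive identical values."""
    def __init__(self):
        self._counter = itertools.count(1)
        self._by_site = {}

    def assign(self, code_location, pattern_type="unknown"):
        if code_location not in self._by_site:
            self._by_site[code_location] = (next(self._counter),
                                            pattern_type)
        return self._by_site[code_location]
```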
Regarding claim 11, Sen discloses, in the italicized portions, a method for assigning a memory access pattern ID and type, comprising: analyzing a memory area that is newly allocated as a certain function is called from an application or a container; and assigning a memory access pattern ID and a memory access pattern type of a virtual memory area descriptor for managing memory areas based on a result of analysis (Sen [0180] and [0185]). Herein Sen discloses identifying and associating access patterns with regions of allocated memory. Regarding the analyzing a memory area that is newly allocated and assigning of a memory access pattern ID limitations, Canepa discloses in Paragraphs [0121] and [0125] “[0121] PPVs may provide the SSD performance profile 701 with the ability to meet the QoS requirements of a plurality of different workloads in a multi-tenant environment and to determine if adding an additional application workload can meet both the existing workload QoS requirements along with the new application's QoS requirements. I/O requests also have a multitude of flavors, where each unique I/O request consumes a different amount of SSD resources, has different interactions with neighboring requests and has a different I/O completion time and latency profile. [0125] According to various embodiments, the PPV section of the performance profile includes a plurality of PPVs, where each PPV represents a particular workload with a particular I/O QoS, SSD state, and an I/O QoS regulation scheme to guarantee an I/O QoS across one or more applications. In some embodiments, the PPV workload data includes an identifier that indicates the represented workload is a specific application workload, facilitating a rapid match of an application workload to the PPV. 
In some embodiments, the PPV includes a scheme to map workload that differ from the workload described in the PPV onto the credit scheme described in the PPV.” Herein Canepa discloses that, when servicing a new workload, identifiers may be mapped between the requesting workload and an associated defined quality-of-service (QoS) scheme to identify an access pattern with the application or container. This enables the system to manage simultaneous processing of workloads to achieve QoS targets. In this manner, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply identifiers of access pattern and type to a newly created workload, determined herein to be analogous to the virtual memory area descriptor newly created as a certain function is called from the application or container, in order to facilitate I/O processing between a plurality of client applications (Canepa [0218]).
Regarding claim 12, Canepa further discloses the method of claim 11, wherein assigning the memory access pattern ID and the memory access pattern type comprises: assigning an identical memory access pattern ID and an identical memory access pattern type to a virtual memory area descriptor for a memory area corresponding to one of a code area or a stack area ([0078] and [0080]). Claim 12 is rejected on a similar basis as claim 3.
Regarding claim 14, Canepa further discloses the method of claim 11, wherein assigning the memory access pattern ID and the memory access pattern type comprises: assigning an identical memory access pattern ID and an identical memory access pattern type to memory areas allocated at an identical code location ([0121] and [0125]). Claim 14 is rejected on a similar basis as claim 5.
Claims 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sen in view of Canepa and further in view of Hegdal et al. (US 2018/0165163).
Regarding claim 4, Sen and Canepa do not explicitly disclose the apparatus of claim 2, wherein the program is configured such that, as a process is duplicated to create a child process from a parent process through a fork function, a memory access pattern ID and a memory access pattern type identical to those of a duplication target virtual memory area descriptor are assigned to a virtual memory area descriptor that is duplicated together with the process. Regarding this limitation, Hegdal discloses in Paragraphs [0076-78] “[0076] In various embodiments, such system calls can be used to fork the application process when creating a child process from the parent process. In a preferred embodiment, a technique known as vfork can be used to fork the application process. [0077] In various embodiments, the child and parent application processes can share the same virtual address space, ASV, ASV pointer, and instruction pointer. [0078] Further, any required changes may be performed so that the child process is able to run in an identical state (using and referring to the same data structures, including memory bitmap and contents of the application memory) to the parent from the secondary ASV, once it is attached.” Herein Hegdal discloses that, upon performing a fork operation on a parent process to create a child process, the child process can inherit a state identical to that of the parent process in order to replicate all attributes of the parent process and continue execution. In this manner, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention that, when creating a child process as described by Hegdal, the child process would inherit all attributes, including the memory access pattern ID and memory access pattern type as taught by Canepa, in order to fully replicate all elements of the parent process and provide uninterrupted access to the application (Hegdal [0078]).
Sen, Canepa, and Hegdal are analogous art because they are from the same field of endeavor of managing memory access operations.
Regarding claim 13, Sen and Canepa do not explicitly disclose the method of claim 11, wherein assigning the memory access pattern ID and the memory access pattern type comprises: as a process is duplicated to create a child process from a parent process through a fork function, assigning a memory access pattern ID and a memory access pattern type identical to those of a duplication target virtual memory area descriptor to a virtual memory area descriptor that is duplicated together with the process. Regarding this limitation, Hegdal discloses in Paragraphs [0076]-[0078] that upon performing a fork operation on a parent process to create a child process, the child process can inherit a state identical to that of the parent process in order to replicate all attributes of the parent process and continue execution. Claim 13 is rejected on a similar basis as claim 4.
Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sen in view of Kumar et al. (US 2018/0189188).
Regarding claim 10, Sen does not explicitly disclose the apparatus of claim 6, wherein the program is configured to, when the disaggregated memory is managed, fetch data in the remote memory, requested to be accessed, to the local memory and evict data in the local memory, expected to be less frequently used, to the remote memory when a space of the local memory is insufficient. Regarding this limitation, Kumar discloses in Paragraph [0077] “As with adding a new cacheline to a cache, before the new memory page can be added an existing memory page has to be evicted (if the near memory virtual address space allocated to the VM through which the memory access request is made is already full; if not, a page eviction is not necessary). In instances in which this near memory virtual address space is already full, a page eviction policy is implemented to determine what page to effect. For example, various types of well-known eviction policies may be used, such as a least recently used (LRU) eviction policy, a least frequently used (LFU), pseudo LRU, Bélády's Algorithm, etc. In one embodiment, access patterns to both the near memory virtual address space and the far memory address spaces are monitored, with the page to evict determined, at least in part, based on the observed access pattern of that page.” Herein Kumar discloses using LRU or LFU policies to evict memory pages from the near memory in order to make space available for pages retrieved from the far memory in response to an access request. Sen discloses techniques for addressing data eviction from memory tiers. In this manner, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to utilize an LFU policy when making evictions, as it is a known technique in the art for optimizing data storage placement. Sen and Kumar are analogous art because they are from the same field of endeavor of managing memory access operations.
Regarding claim 20, Sen does not explicitly disclose the method of claim 16, wherein managing the disaggregated memory comprises: fetching data in the remote memory, requested to be accessed, to the local memory, and evicting data in the local memory, expected to be less frequently used, to the remote memory when a space of the local memory is insufficient. Regarding this limitation, Kumar discloses in Paragraph [0077] using LRU or LFU policies to evict memory pages from the near memory in order to make space available for pages retrieved from the far memory in response to an access request. Claim 20 is rejected on a similar basis as claim 10.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Rawal et al. (US 10,884,637) – Column 2 wherein determining access patterns for applications is discussed.
Ahn et al. (US 2016/0085450) – Paragraph [0017] wherein page fault access in view of local and remote memory is discussed.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER J YOON whose telephone number is (408)918-7629. The examiner can normally be reached on Monday-Friday 8am-3pm ET. The examiner’s email is alexander.yoon2@uspto.gov.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jared Rutz, can be reached at 571-272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEXANDER YOON/
Examiner, Art Unit 2135
/JARED I RUTZ/ Supervisory Patent Examiner, Art Unit 2135