DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 6, 8-11, 14, 16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Bert et al. (U.S. 2022/0012176 A1) in view of Jun et al. (U.S. 2014/0351550 A1).
Regarding Claim 1, Bert discloses a system (Bert, Fig. 1, [0004], “a computer system”) comprising:
a memory (Bert, Fig. 1, [0006] “a memory device”);
one or more processors coupled with the memory and configured to execute a plurality of threads (Bert, [0046] “a processor 117 (processing device) configured to execute instructions stored in local memory 119 for performing the operations” and [0047] “A thread refers to a sequence of executable instructions that can be performed by a processing device in a context which is separate from contexts of other threads.” Bert teaches a processor coupled with a memory and configured to execute a plurality of threads);
a memory controller (Bert, [0033] “a memory controller (e.g., NVDIMM controller)”) configured to:
receive a request to allocate memory space for a first thread of the plurality of threads (Bert, [0051] “At operation 210, the processing logic receives a request to write a first data item associated with a first thread to a memory device of a memory sub-system. FIG. 4A” and Fig. 2, [0058] “thread caching component 113 can determine that a memory space criterion associated with write block 414 is satisfied” and [0062] “thread caching component 113 can determine a threshold number of memory pages allocated to thread 3 are stored at cache 410 (e.g., at write block 414)…” Bert teaches receive a request to allocate memory space for a first thread (e.g., thread 3) of the plurality of threads);
select a first memory page from a plurality of memory pages in the memory (Bert, [0062] “thread caching component 113 can determine a threshold number of memory pages allocated to thread 3 are stored at cache 410 (e.g., at write block 414)…” and [0063] “At operation 260, the processing logic can copy the first memory page and each of the set of second memory pages associated with the first thread; the first memory page can refer to a memory page associated with thread 3 that is stored at write block 414” Bert teaches select a first memory page from a plurality of memory pages in the memory);
determine whether the first memory page is currently allocated (Bert, Fig. 4A, [0062] “thread caching component 113 can determine a threshold number of memory pages allocated to thread 3 are stored at cache 410 (e.g., at write block 414)…” and [0063] “At operation 260, the processing logic can copy the first memory page and each of the set of second memory pages associated with the first thread; the first memory page can refer to a memory page associated with thread 3 that is stored at write block 414” Bert teaches determine whether the first memory page is currently allocated, i.e., at write block 414);
based on a determination that the first memory page is currently allocated, select a different memory page from the plurality of memory pages for the request (Bert, Fig. 4B, [0063] “At operation 260, the first memory page can refer to a memory page associated with thread 3 that is stored at write block 414 and each of the set of second memory pages can refer to each memory page associated with thread 3 that is stored at first data compaction block 416A” Bert teaches, based on a determination that the first memory page is currently allocated at write block 414, select a different memory page for the request, i.e., a write to block 416A, Fig. 4B).
However, Bert does not explicitly teach based on a determination that the first memory page is not currently allocated, allocate the first memory page for the first thread.
Jun teaches based on a determination that the first memory page is not currently allocated, allocate the first memory page for the first thread (Jun, [0048] “When a memory request, such as "alloc," is made by a specific thread of the DDS middleware, the memory area management unit 120 allocates the foremost memory page of the memory pages that are included in the memory chunk 130 and have not been allocated to the threads of the DDS middleware, to the specific thread. In this case, the attribute information of the memory page allocated to the specific thread and thread information is registered with the page management unit 200” Jun teaches, based on a determination that the first memory page (e.g., the foremost memory page) is not currently allocated to a specific thread (e.g., a first thread) by a memory area management unit, allocate the first memory page for the first thread).
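For clarity of the mapping above, the allocation flow recited in claim 1 (select a page, check its allocation state, allocate it if free, otherwise fall back to a different page) can be sketched as follows. This sketch is illustrative only: the function and variable names, the list of used bits, and the linear fallback scan are assumptions of the examiner's illustration and are not taken from Bert or Jun.

```python
# Illustrative sketch of the claim 1 allocation flow; all names are hypothetical.

def handle_alloc_request(used, candidate, thread_id, owners):
    """Allocate `candidate` to `thread_id` if free; otherwise select a different page.

    used      -- list of booleans, one per memory page (the "used bits")
    candidate -- index of the first-selected memory page
    owners    -- dict mapping page index to the owning thread
    Returns the index of the page actually allocated, or None if memory is full.
    """
    if not used[candidate]:
        # First memory page is not currently allocated: allocate it for the thread.
        used[candidate] = True
        owners[candidate] = thread_id
        return candidate
    # First memory page is currently allocated: select a different memory page.
    for page in range(len(used)):
        if not used[page]:
            used[page] = True
            owners[page] = thread_id
            return page
    return None  # no free page available
```

The fallback here scans linearly for simplicity; the claimed "select a different memory page" does not require any particular selection order.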
Bert and Jun are combinable because they are from the same field of endeavor, systems and methods for memory management in multi-threaded processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bert with the page allocation of Jun in order to determine that a first memory page is not allocated and then allocate the first memory page to a first thread, because Jun provides that, based on a determination that the first memory page (e.g., the foremost memory page) is not currently allocated to a specific thread (e.g., a first thread) by a memory area management unit, the first memory page is allocated to the first thread (Jun, [0048]). Doing so may prevent memory contention that may occur between the threads of the DDS middleware and may allow memory to be allocated or freed more efficiently on a memory page basis (Jun, [0012]).
Regarding Claim 4, a combination of Bert and Jun discloses the system of claim 1, wherein the memory controller, to allocate the first memory page for the first thread, is configured to: update a used bit associated with the first memory page to indicate that the first memory page is currently allocated (Bert, [0057] “Each entry of the memory page data structure can further include a validity value (e.g., a validity bit) to indicate whether the data stored at a particular memory page is valid or invalid, in response to copying each memory page associated with thread 1 to host space 412. Thread caching component 113 can set a validity value in the generated entry to indicate the data stored at host space 412 is valid (e.g., set the validity bit to 1)” Bert teaches update a used bit associated with the first memory page (e.g., for thread 1, set the validity bit to 1) to indicate that the first memory page is currently allocated, i.e., data stored at host space).
Regarding Claim 6, a combination of Bert and Jun discloses the system of claim 1, wherein the memory controller is further configured to: deallocate the first memory page by updating a used bit associated with the first memory page to indicate that the first memory page is not currently allocated (Bert, [0057] “Each entry of the memory page data structure can further include a validity value (e.g., a validity bit) to indicate whether the data stored at a particular memory page is valid or invalid; in response to copying each memory page associated with thread 1 to host space 412, thread caching component 113 can generate an entry in the memory page data structure associated with an address of host space 412 that stores the copied data. Thread caching component 113 can further identify an entry corresponding to each memory page associated with thread 1 of write block 414 and modify the validity value in each identified entry to indicate the copied data is invalid in write block 414 (e.g., set the validity bit to 0)” Bert teaches deallocate the first memory page by updating a used bit associated with the first memory page (set the validity bit to 0) to indicate that the first memory page is not currently allocated).
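The used-bit bookkeeping mapped to claims 4 and 6 above reduces to setting and clearing a per-page bit. A minimal sketch follows; the function names and the list representation of the used bits are illustrative assumptions, not structures from Bert.

```python
# Hypothetical sketch of the used-bit updates recited in claims 4 and 6.

def mark_allocated(used_bits, page):
    """Claim 4: update the used bit to indicate the page is currently allocated."""
    used_bits[page] = 1

def deallocate(used_bits, page):
    """Claim 6: deallocate by updating the used bit to indicate the page
    is not currently allocated."""
    used_bits[page] = 0
```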
Regarding Claim 8, a combination of Bert and Jun discloses the system of claim 1, wherein the one or more processors are configured to execute the plurality of threads in parallel (Bert, [0051] “FIG. 2, method 200 to cache memory pages for parallel independent threads at a memory device. The method 200 can be performed by processing logic that can include hardware (e.g., processing device…)” and [0078] “Processing device 502 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit” Bert teaches a processor (CPU) executes the plurality of threads in parallel).
Regarding Claim 9, a combination of Bert and Jun discloses the system of claim 1, wherein the memory controller is further configured to: receive a request to allocate memory space for a set of threads in the plurality of threads, the set of threads including the first thread and a second thread; and collaboratively identify and allocate the first memory page for the first thread and a second memory page for the second thread (Bert, [0058] “Thread caching component 113 can determine that a memory space criterion is satisfied in response to determining a threshold number of memory pages of a block store valid or invalid data” and [0054] “At operation 230, the processing logic can write the first data item to the first memory page. In response to identifying an available memory page of write block 414, thread caching component 113 can write the data associated with thread 1 to the available memory page (indicated as ‘T1’ in FIG. 4A)” and [0055] “thread caching component 113 can allocate four memory pages of cache 410 to each thread (e.g., thread 1, thread 2, thread 3, etc.)” Bert teaches receive a request to allocate memory space for a set of threads (e.g., thread 1, thread 2, thread 3, etc.) and collaboratively identify and allocate the first memory page for the first thread and a second memory page for the second thread).
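The collaborative allocation mapped to claim 9 above can be sketched as a single scan of the used bits that hands out one free page per thread in the set. This sketch is an illustration under assumed names and a simple list of used bits; it is not taken from Bert.

```python
# Hypothetical sketch of claim 9: allocate one page per thread in a set of threads.

def alloc_for_thread_set(used, thread_ids):
    """Collaboratively identify and allocate one memory page per thread.

    used       -- list of booleans, one per memory page (the "used bits")
    thread_ids -- the set of requesting threads (first thread, second thread, ...)
    Returns a dict mapping thread id to its allocated page, or None if there
    are not enough free pages to satisfy the whole set.
    """
    free = [p for p, bit in enumerate(used) if not bit]  # one pass over the bits
    if len(free) < len(thread_ids):
        return None
    assignment = {}
    for tid, page in zip(thread_ids, free):
        used[page] = True
        assignment[tid] = page
    return assignment
```

Allocating for the whole set in one pass, rather than thread by thread, is one way to read "collaboratively identify and allocate".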
Regarding Claim 10, a combination of Bert and Jun discloses a method (Bert, [0050] “a method 200”) comprising:
executing a plurality of threads in parallel using one or more processors;
detecting a request to allocate memory space of a memory associated with the one or more processors, from a first thread of the plurality of threads;
selecting a first memory page from a plurality of memory pages in the memory;
determining whether the first memory page is currently allocated; and
based on a determination that the first memory page is not currently allocated, allocating the first memory page for the request of the first thread.
Claim 10 is substantially similar to claim 1 and is rejected based on similar analyses.
Regarding Claim 11, a combination of Bert and Jun discloses the method of claim 10, further comprising: based on a determination that the first memory page is currently allocated, selecting a different memory page from the plurality of memory pages for the request.
Claim 11 is substantially similar to claim 1 and is rejected based on similar analyses.
Regarding Claim 14, a combination of Bert and Jun discloses the method of claim 10, wherein allocating the first memory page for the first thread comprises updating a used bit associated with the first memory page to indicate that the first memory page is currently allocated.
Claim 14 is substantially similar to claim 4 and is rejected based on similar analyses.
Regarding Claim 16, a combination of Bert and Jun discloses the method of claim 10, further comprising: deallocating the first memory page by updating a used bit associated with the first memory page to indicate that the first memory page is not currently allocated.
Claim 16 is substantially similar to claim 6 and is rejected based on similar analyses.
Regarding Claim 18, a combination of Bert and Jun discloses the method of claim 10, further comprising: receiving a request to allocate memory space for a set of threads in the plurality of threads, the set of threads including the first thread and a second thread; and collaboratively identifying and allocating the first memory page for the first thread and a second memory page for the second thread.
Claim 18 is substantially similar to claim 9 and is rejected based on similar analyses.
Regarding Claim 19, a combination of Bert and Jun discloses a device (Bert, [0077] “Processing device 502”) comprising:
one or more processors coupled with a memory and configured to execute a plurality of threads; and
a memory controller configured to:
receive a request to allocate memory space for a first thread of the plurality of threads;
select a first memory page from a plurality of memory pages in the memory;
determine whether the first memory page is currently allocated;
based on a determination that the first memory page is not currently allocated, allocate the first memory page for the first thread; and
based on a determination that the first memory page is currently allocated, select a different memory page from the plurality of memory pages for the request.
Claim 19 is substantially similar to claim 1 and is rejected based on similar analyses.
Regarding Claim 20, a combination of Bert and Jun discloses the device of claim 19, wherein the memory controller is configured to select the first memory page by randomly selecting an identifier of the first memory page from a plurality of identifiers of the plurality of memory pages.
Claim 20 is substantially similar to claim 2 and is rejected based on similar analyses.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Bert et al. (U.S. 2022/0012176 A1) in view of Jun et al. (U.S. 2014/0351550 A1) and further in view of Sahita et al. (U.S. 2022/0214909 A1).
Regarding Claim 2, the system of claim 1, a combination of Bert and Jun does not explicitly teach wherein the memory controller is configured to select the first memory page by randomly selecting an identifier of the first memory page from a plurality of identifiers of the plurality of memory pages.
However, Sahita teaches select the first memory page by randomly selecting an identifier of the first memory page from a plurality of identifiers of the plurality of memory pages (Sahita, [0185] “a processor configured to be communicatively coupled to a memory and the processor is to execute instructions: select a first key identifier for a first virtual machine, to set an integrity mode for the first key identifier and assign the first key identifier to the first memory page” Sahita teaches a processor selects an identifier and assigns the first key identifier to the first memory page).
Bert, Jun and Sahita are combinable because they are from the same field of endeavor, systems and methods for memory management, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bert with selecting an identifier to assign to a first memory page (as taught by Sahita), because Sahita provides a processor that selects a key identifier and assigns it to the first memory page (Sahita, [0007]). Doing so may allow combining hypervisor-managed linear address translation (HLAT) with protection of memory confidentiality and integrity (Sahita, [0023]).
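The limitation of claim 2, randomly selecting one page identifier from the identifiers of all pages, can be sketched as below. The function name and the use of Python's standard pseudo-random generator are illustrative assumptions; none of the cited references prescribes this mechanism.

```python
import random

# Hypothetical sketch of claim 2: random selection of a page identifier.

def select_random_page(page_identifiers, rng=random):
    """Select the first memory page by randomly selecting one identifier
    from the plurality of identifiers of the plurality of memory pages."""
    return rng.choice(page_identifiers)
```

Random selection spreads allocations across the page pool, which is one plausible motivation for the claimed randomness.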
Regarding Claim 12, a combination of Bert, Jun and Sahita discloses the method of claim 10, wherein selecting the first memory page includes randomly selecting an identifier of the first memory page from a plurality of identifiers of the plurality of memory pages.
Claim 12 is substantially similar to claim 2 and is rejected based on similar analyses.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Bert et al. (U.S. 2022/0012176 A1) in view of Jun et al. (U.S. 2014/0351550 A1) and further in view of Giroux et al. (U.S. 2014/0310484 A1).
Regarding Claim 5, the system of claim 4, a combination of Bert and Jun does not explicitly teach wherein the memory controller, to allocate the first memory page to the first thread, is configured to: execute an atomic operation to prevent allocation of the first memory page during parallel execution of other threads of the plurality of threads until the updating of the used bit is completed.
Giroux teaches execute an atomic operation to prevent allocation of the first memory page during parallel execution of other threads of the plurality of threads until the updating of the used bit is completed (Giroux, [0003] “GPUs generally have a parallel architecture allowing a computing task…known as threads. The threads may then execute in parallel as a group” and [0089] “At block 802, a request to access a portion of memory is received. The request may include a memory operation which may be a load, store, or an atomic read/write/modify operation” and [0090] “At block 804, it is determined whether the address corresponds to a portion of thread local memory” and [0091] “whether the first address corresponds to a local portion of memory is based on a bit or bits of the first address; the address is within local memory allocated to a thread of a plurality of threads may be determined based on a specific value of the top three bits of the address” Giroux teaches execute an atomic operation (a read/write/modify operation) on bits of a first address in a first memory page within local memory allocated to the thread (e.g., the first thread), based on a specific value of the top three bits of the address).
Bert, Jun and Giroux are combinable because they are from the same field of endeavor, systems and methods for memory management in parallel processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bert with an atomic operation (as taught by Giroux) in order to execute an atomic operation to prevent allocation of the first memory page during parallel execution of other threads, because Giroux provides an atomic read/write/modify operation that updates bits of a first address in a first memory page within local memory allocated to the thread (e.g., the first thread) based on a specific value of the top three bits of the address (Giroux, [0003], [0089]-[0091]). Doing so may allow translation in the path of dereferencing memory, thereby allowing data for a given offset of each of a plurality of threads to be adjacent and contiguous in memory; the global memory is thereby organized (e.g., swizzled) in a manner suitable for use as thread stack memory (Giroux, [0007]).
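The effect recited in claim 5, that no other thread can claim the same page while its used bit is being tested and updated, amounts to an atomic test-and-set. The sketch below is illustrative only: Python exposes no CPU-level atomic bit instruction, so a lock stands in for the hardware atomic read-modify-write the claim recites, and all names are assumptions.

```python
import threading

class UsedBits:
    """Hypothetical used-bit array whose update is atomic with respect to
    other threads (claim 5): a lock stands in for a hardware atomic
    test-and-set, so no other thread can allocate the same page until the
    updating of the used bit is completed."""

    def __init__(self, num_pages):
        self._bits = [0] * num_pages
        self._lock = threading.Lock()

    def try_claim(self, page):
        """Atomically test and set the used bit; True if this caller won the page."""
        with self._lock:
            if self._bits[page]:
                return False  # another thread already holds the page
            self._bits[page] = 1
            return True
```

Under contention, exactly one thread's try_claim succeeds for a given page, which is the race-freedom the claim is directed to.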
Regarding Claim 15, a combination of Bert, Jun and Giroux discloses the method of claim 14, further comprising: executing an atomic operation to prevent allocation of the first memory page in other threads of the plurality of threads until the updating of the used bit is completed.
Claim 15 is substantially similar to claim 5 and is rejected based on similar analyses.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Bert et al. (U.S. 2022/0012176 A1) in view of Jun et al. (U.S. 2014/0351550 A1) and further in view of Schneider et al. (U.S. 2008/0209153 A1).
Regarding Claim 7, the system of claim 1, a combination of Bert and Jun does not explicitly teach wherein the memory controller is further configured to: store a plurality of used bits associated with the plurality of memory pages in a bitmap; and execute a single memory read operation to access multiple used bits associated with multiple memory pages from the bitmap.
However, Schneider teaches store a plurality of used bits associated with the plurality of memory pages in a bitmap; and execute a single memory read operation to access multiple used bits associated with multiple memory pages from the bitmap (Schneider, [0055] “FIG. 3B, the metadata structure 350 includes a bitmap 360. The bitmap 360 may have as many bits as there are blocks in the memory page, and each bit in the bitmap 360 may represent one of the memory blocks. If a bit has a first value (e.g., a 1), then the corresponding memory block is free (unallocated). If the bit has a second value (e.g., a 0), then the corresponding memory block has been allocated; only a single bit is required to control the allocation of, and to otherwise manage, a single memory block” Schneider teaches store used bits associated with memory pages in a bitmap, where only a single bit per block is required (a single memory read operation) to access the allocation state of memory pages, e.g., a bit value of 1 indicates unallocated and a bit value of 0 indicates allocated).
Bert, Jun and Schneider are combinable because they are from the same field of endeavor, systems and methods for memory management, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Bert so that used bits associated with memory pages are stored in a bitmap (as taught by Schneider), because Schneider provides storing used bits associated with memory pages in a bitmap where only a single bit is required to control the allocation of, and otherwise manage, each memory block (e.g., a bit value of 1 indicates unallocated and a bit value of 0 indicates allocated) (Schneider, [0055]). Doing so may ensure that memory blocks and data structures do not have portions that have been swapped out to secondary memory and portions that remain in main memory; as a result, the frequency with which memory swaps occur may be reduced and system performance may be improved (Schneider, [0041]).
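The bitmap limitation of claim 7 rests on a simple property: one machine word of the bitmap, fetched with a single memory read, carries the used bits of many pages, which are then extracted by shifting and masking. A minimal sketch under assumed names (not from Schneider):

```python
# Hypothetical sketch of claim 7: one read of a bitmap word yields many used bits.

def read_used_bits(bitmap_word, first_page, count):
    """Access multiple used bits with a single memory read operation.

    bitmap_word -- an integer holding one word of the bitmap (the single read)
    first_page  -- index of the first page whose bit to extract
    count       -- how many consecutive used bits to extract
    Returns the list of used bits, least-significant bit first.
    """
    word = bitmap_word  # the single memory read; everything else is bit arithmetic
    return [(word >> (first_page + i)) & 1 for i in range(count)]
```

For example, a 64-bit bitmap word exposes the allocation state of 64 pages per read, which is the access-efficiency rationale behind the claimed single read operation.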
Regarding Claim 17, a combination of Bert, Jun and Schneider discloses the method of claim 10, further comprising: storing a plurality of used bits associated with the plurality of memory pages in a bitmap; and executing a single memory read operation to access multiple used bits associated with multiple memory pages from the bitmap.
Claim 17 is substantially similar to claim 7 and is rejected based on similar analyses.
Allowable Subject Matter
Dependent claims 3 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding independent claims 1, 10 and 19, the closest prior art references the examiner found are Sano (U.S. 2003/0219162 A1) in view of Richter et al. (U.S. 2023/0230283 A1), and Bert et al. (U.S. 2022/0012176 A1) in view of Jun et al. (U.S. 2014/0351550 A1), which have been made of record as teaching: receive a request to allocate memory space for a first thread of the plurality of threads (Bert, [0051], [0058]); select a first memory page from a plurality of memory pages in the memory (Bert, [0062], [0063]); determine whether the first memory page is currently allocated (Bert, Fig. 4A, [0062], [0063]); based on a determination that the first memory page is currently allocated, select a different memory page from the plurality of memory pages for the request (Bert, Fig. 4B, [0063]); and based on a determination that the first memory page is not currently allocated, allocate the first memory page for the first thread (Jun, [0048]), as recited in claims 1, 10 and 19.
However, the art of record does not teach or suggest the claims taken as a whole, and in particular the limitations pertaining to:
wherein the memory controller is further configured to: identify a last memory page allocated for the first thread prior to the request; and
select a cluster corresponding to a subset of the plurality of memory pages based on the last memory page, wherein randomly selecting the identifier mapped to the first memory page is from the selected cluster, as recited in claims 3 and 13.
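For context, the allowable limitation combines locality and randomness: the cluster containing the thread's last-allocated page is identified, and the random selection is confined to that cluster. The sketch below is purely illustrative of the recited steps; the fixed-size clustering, the names, and the pseudo-random generator are all examiner assumptions, not features of any reference of record.

```python
import random

# Hypothetical sketch of claims 3/13: random selection confined to the cluster
# containing the last memory page allocated for the thread.

def select_from_cluster(num_pages, cluster_size, last_page, rng=random):
    """Identify the cluster holding `last_page`, then randomly select an
    identifier of a memory page from that cluster only."""
    cluster_index = last_page // cluster_size
    start = cluster_index * cluster_size
    cluster = list(range(start, min(start + cluster_size, num_pages)))
    return rng.choice(cluster)
```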
Any comments considered necessary by applicant must be submitted no later than the payment of the issue fee and, to avoid processing delays, should preferably accompany the issue fee. Such submissions should be clearly labeled “Comments on Statement of Reasons for Allowance”.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Wolford et al. (U.S. 2011/0055495 A1) and Goodman et al. (U.S. 2022/0050790 A1).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU whose telephone number is (571)272-5994. The examiner can normally be reached 8:00- 4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611
/KHOA VU/Examiner, Art Unit 2611