DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
1. REJECTIONS NOT BASED ON PRIOR ART
a. DEFICIENCIES IN THE CLAIMED SUBJECT MATTER
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention.
Claim 1 recites the limitations of “a first memory configured to communicate with a first external through a first interface” and “a second memory configured to communicate with the first external through a second interface” and “a memory controller configured to: update at least one of metadata related to a space-locality and a time-locality based on a result of comparing a first address and a second address sequentially input from a second external”. (emphasis added) It is unclear what a first and second “external” are, and further what they are “external” to. It appears that the first and second “external” could be referring to external devices, but this is not clear from the context of the claims. Proper correction and/or clarification is required. The dependent claims have a similar deficiency.
2. REJECTIONS BASED ON PRIOR ART
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-11 and 13-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Pierson (US 20120072674).
With respect to claim 1, the Pierson reference teaches a memory system comprising:
a first memory configured to communicate with a first external through a first interface; (e.g. fig. 2, L2/SRAM cache 220 connected via a bus to DSP core 210)
a second memory configured to communicate with the first external through a second interface; (e.g. fig. 2, shared memory 230 connected via a bus through cache 220 which connects to DSP core 210) and
a memory controller (e.g. fig. 4, memory controller 400) configured to:
update at least one of metadata related to a space-locality and a time-locality based on a result of comparing a first address and a second address sequentially input from a second external, (paragraph 37, where in the case of a typical prefetch hit (determined by prefetch address comparators 552, for example) that occurs in response to a memory request, the double-buffered prefetch unit 460 queues the prefetch program data for return to the requesting processor or cache. If the double-buffered prefetch unit 460 has no other return data queued, the double-buffered prefetch unit 460 can begin returning data in response to the memory request; and paragraph 36, where metadata returned with a fetch (such as returned memory access permissions) can be stored in additional or otherwise unused bits of the first and/or second portions of the double buffer [i.e. updated during a fetch/prefetch]; and paragraph 25, where the metadata provides information such as memory segmentation endpoints, physical addresses within sections of segmented memory, cacheability of requests, deferred privilege checking, access type (data, instruction or prefetch), [analogous to ‘one of metadata related to a space-locality and a time-locality’ as shown in the example of paragraph 39 below] and request priority and elevated priority) and
prefetch, to the first memory, data determined based on the metadata, and wherein the prefetched data is some of data stored in the second memory. (paragraph 39, where the data for storing in the allocated cache line are sent such that the requested portion (e.g., the data that is addressed by the demand memory request) of the line returns first (the "critical" sub-line), which is then followed by the subsequent ("non-critical") sub-lines. A CPU (for example) that generated the demand request then "un-stalls" and resumes execution when the critical sub-line is fetched from the cache)
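Purely for illustration of the claimed controller behavior mapped above, the metadata update on comparing two sequentially input addresses, followed by a metadata-driven prefetch from the second memory to the first memory, can be sketched as follows. This sketch forms no part of the record; all names, the dictionary-based memories, and the page-size constant are hypothetical.

```python
PAGE_SIZE = 4096  # hypothetical page granularity for illustration only

class SketchController:
    """Illustrative sketch of the claimed metadata update and prefetch."""

    def __init__(self, first_memory, second_memory):
        self.first_memory = first_memory    # faster memory (e.g. a cache), modeled as a dict
        self.second_memory = second_memory  # larger backing memory, modeled as a dict
        self.metadata = {"space_locality": {}, "time_locality": {}}

    def on_addresses(self, first_addr, second_addr):
        # Compare the first and second sequentially input addresses.
        if first_addr // PAGE_SIZE == second_addr // PAGE_SIZE:
            # Same page: record the offset variation (space locality).
            delta = second_addr - first_addr
            self.metadata["space_locality"][second_addr // PAGE_SIZE] = delta
        else:
            # Different pages: record the page transition (time locality).
            self.metadata["time_locality"][first_addr // PAGE_SIZE] = (
                second_addr // PAGE_SIZE
            )
        self.prefetch(second_addr)

    def prefetch(self, addr):
        # Prefetch, to the first memory, data determined based on the
        # metadata; the prefetched data is some of the data stored in
        # the second memory.
        page = addr // PAGE_SIZE
        delta = self.metadata["space_locality"].get(page)
        if delta is not None:
            target = addr + delta
            if target in self.second_memory:
                self.first_memory[target] = self.second_memory[target]
```

For example, after observing same-page addresses 0 and 8, the sketch records an offset variation of 8 and prefetches the data at address 16 from the second memory into the first memory.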
With respect to claim 2, the Pierson reference teaches the memory system according to claim 1, wherein the second memory has a lower priority than the first memory when accessed by the first external. (see fig. 2; and where shared memory 230 is lower priority than the L2 SRAM/cache; and paragraph 20)
With respect to claim 3, the Pierson reference teaches the memory system according to claim 2, wherein the memory controller comprises a prefetch information storage configured to store the metadata. (paragraph 25, where shared memory interface 420 also provides additional metadata to shared memory and/or external slaves. The metadata provides information such as memory segmentation endpoints, physical addresses within sections of segmented memory, cacheability of requests, deferred privilege checking, access type (data, instruction or prefetch), and request priority and elevated priority)
With respect to claim 4, the Pierson reference teaches the memory system according to claim 3, wherein the memory controller further comprises a page analyzer configured to update the metadata when the numbers of pages respectively corresponding to the first address and the second address are equal to each other. (paragraph 37, where in the case of a typical prefetch hit (determined by prefetch address comparators 552, for example) that occurs in response to a memory request, the double-buffered prefetch unit 460 queues the prefetch program data for return to the requesting processor or cache; and paragraph 36, where metadata returned with a fetch (such as returned memory access permissions) can be stored in additional or otherwise unused bits of the first and/or second portions of the double buffer [i.e. metadata is updated during a fetch/prefetch]; and paragraph 38, where the program fetch hits the program prefetch buffer, addresses are generated for the next 32, 64, 96, or 128 bytes, depending on whether the fetch hit the oldest, second oldest, second youngest or youngest slot (respectively) in the buffer [i.e. a hit corresponds to checking the number of pages])
With respect to claim 5, the Pierson reference teaches the memory system according to claim 4, wherein the metadata includes page history information, which is information about a variation between offset values for addresses corresponding to each of a plurality of pages, and information about a prefetch address determined based on a pattern of the variation for each of the plurality of pages. (paragraph 25, where shared memory interface 420 provides an interface and protocol system to handle memory requests for a shared memory such as shared memory 230. The shared memory interface 420 also provides additional metadata to shared memory and/or external slaves. The metadata provides information such as memory segmentation endpoints, physical addresses within sections of segmented memory, cacheability of requests, deferred privilege checking, access type (data, instruction or prefetch), and request priority and elevated priority; and paragraph 38, where the program fetch hits the program prefetch buffer, addresses are generated for the next 32, 64, 96, or 128 bytes, depending on whether the fetch hit the oldest, second oldest, second youngest or youngest slot (respectively) in the buffer [i.e. a hit corresponds to checking the history])
With respect to claim 6, the Pierson reference teaches the memory system according to claim 5, wherein the metadata further includes information about mapping of addresses sequentially input from the second external to associated addresses or information about mapping of pages respectively corresponding to the addresses sequentially input from the second external to associated pages. (paragraph 25, where shared memory interface 420 provides an interface and protocol system to handle memory requests for a shared memory such as shared memory 230. The shared memory interface 420 also provides additional metadata to shared memory and/or external slaves. The metadata provides information such as memory segmentation endpoints, physical addresses within sections of segmented memory, cacheability of requests, deferred privilege checking, access type (data, instruction or prefetch), and request priority and elevated priority; and paragraph 26, where unit for memory protection/address extension 430 performs address range lookups, memory protection checks, and address extensions by combining memory protection and address extension into a single, unified process. The memory protection checks determine what types of accesses are permitted on various address ranges within the memory controller 400's 32-bit logical address map)
With respect to claim 7, the Pierson reference teaches the memory system according to claim 6, wherein the memory controller further comprises: a space-locality prefetcher configured to prefetch, to the first memory, the data corresponding to a first prefetch address determined based on the pattern of the variation for a page corresponding to the second address. (paragraph 38, where program prefetch generator 554 generates program prefetch addresses in response to received addresses that are associated with memory requests. When a candidate program fetch misses the program prefetch buffer, addresses are generated for fetching the next 128 bytes following the last demand fetch address that missed the buffer. When a program fetch hits the program prefetch buffer, addresses are generated for the next 32, 64, 96, or 128 bytes, depending on whether the fetch hit the oldest, second oldest, second youngest or youngest slot (respectively) in the buffer)
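The slot-age-dependent prefetch sizing of Pierson's paragraph 38 relied on above can be restated, purely for illustration and forming no part of the record, as the following sketch; the function name and slot numbering (slot 0 oldest, slot 3 youngest) are hypothetical.

```python
def prefetch_amount(hit_slot):
    """Sketch of Pierson paragraph 38: a fetch that misses the program
    prefetch buffer triggers prefetch of the next 128 bytes; a fetch
    that hits the buffer triggers prefetch of 32, 64, 96, or 128 bytes
    depending on whether it hit the oldest, second oldest, second
    youngest, or youngest slot, respectively."""
    if hit_slot is None:           # miss: fetch the next 128 bytes
        return 128
    return 32 * (hit_slot + 1)     # oldest (0) -> 32, youngest (3) -> 128
```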
With respect to claim 8, the Pierson reference teaches the memory system according to claim 7, wherein the memory controller further comprises a time-locality prefetcher configured to prefetch, to the first memory, the data corresponding to a third address associated with the second address based on the metadata. (paragraph 43, where program prefetch unit 460 heuristically determines the predicted next prefetch (PNP) by anticipating that the next prefetch hit will be for the slot "after" the slot for the current hit in the prefetch buffer. The slot "after" the currently hit slot is the next slot that follows the currently hit slot in accordance with the direction of the stream that is associated with the currently hit slot. The probabilities for correctly predicting the next prefetch are increased (over random estimates, for example) because (as disclosed herein) prefetch slots are allocated in a FIFO allocation order, and thus prefetch hits are more likely to occur in the order used for FIFO allocation (e.g., the FIFO allocation order). The program prefetch unit 460 uses FIFO counter 538 to point to the predicted next prefetch)
With respect to claim 9, the Pierson reference teaches the memory system according to claim 8, wherein the page analyzer is further configured to update the metadata when the numbers of pages respectively corresponding to the first address and the second address are different from each other. (paragraph 38, where program prefetch generator 554 generates program prefetch addresses in response to received addresses that are associated with memory requests. When a candidate program fetch misses the program prefetch buffer, addresses are generated for fetching the next 128 bytes following the last demand fetch address that missed the buffer. When a program fetch hits the program prefetch buffer, addresses are generated for the next 32, 64, 96, or 128 bytes, depending on whether the fetch hit the oldest, second oldest, second youngest or youngest slot (respectively) in the buffer [i.e. a miss corresponds to checking the number of pages])
With respect to claim 10, the Pierson reference teaches the memory system according to claim 8, wherein the page analyzer is further configured to update the metadata when the numbers of pages respectively corresponding to the first address and the second address are different from each other. (paragraph 38, where program prefetch generator 554 generates program prefetch addresses in response to received addresses that are associated with memory requests. When a candidate program fetch misses the program prefetch buffer, addresses are generated for fetching the next 128 bytes following the last demand fetch address that missed the buffer. When a program fetch hits the program prefetch buffer, addresses are generated for the next 32, 64, 96, or 128 bytes, depending on whether the fetch hit the oldest, second oldest, second youngest or youngest slot (respectively) in the buffer [i.e. a miss corresponds to checking the number of pages])
With respect to claim 11, the Pierson reference teaches the memory system according to claim 6, wherein the memory controller further comprises a space-locality prefetcher configured to prefetch, to the first memory, the data corresponding to a first prefetch address determined based on the pattern of the variation for a page corresponding to the second address and the data corresponding to a second prefetch address determined based on the pattern of the variation for a page associated with the page corresponding to the second address. (paragraph 38, where program prefetch generator 554 generates program prefetch addresses in response to received addresses that are associated with memory requests. When a candidate program fetch misses the program prefetch buffer, addresses are generated for fetching the next 128 bytes following the last demand fetch address that missed the buffer. When a program fetch hits the program prefetch buffer, addresses are generated for the next 32, 64, 96, or 128 bytes, depending on whether the fetch hit the oldest, second oldest, second youngest or youngest slot (respectively) in the buffer)
Claims 13-20 are the host device implementation of claims 1-11 noted above, and are rejected under a similar rationale. The Examiner notes that the limitations of “a main memory including a plurality of pages; a cache memory configured to cache a part of data stored in the main memory; and a processor configured to:” perform the steps above are discussed by fig. 2, shared memory 230, L2 cache 220, and DSP core 210, and further detailed in paragraphs 19-21.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Pierson (US 20120072674) in view of Elzur (US 20220335563).
With respect to claim 12, the Pierson reference does not explicitly teach
the memory system according to claim 1, wherein: the first interface includes a dual inline memory module (DIMM) interface, and the second interface includes a compute express link (CXL) interface.
The Elzur reference teaches that it is conventional for the first interface to include a dual inline memory module (DIMM) interface and for the second interface to include a compute express link (CXL) interface. (see fig. 4; and paragraph 27, where switch 405 can provide communication among GPU compute 102, memory 103, and a memory device or memory pool of external memory (e.g., dual inline memory modules (DIMMs) 412) via interface 410. Interface 402, interface 406, and/or interface 410 can utilize one or more of the following standards and technology: Peripheral Component Interconnect express (PCIe), Compute Express Link (CXL), serializer de-serializer (SerDes), silicon photonics or optical interface, or other protocol or connectivity technology)
It would have been obvious to a person of ordinary skill in the art before the claimed invention was effectively filed to modify the Pierson reference such that the first interface includes a dual inline memory module (DIMM) interface and the second interface includes a compute express link (CXL) interface, as taught by the Elzur reference.
The suggestion/motivation for doing so would have been to provide a variety of memory types and different connectivity options to achieve a high-speed interface. (Elzur, paragraphs 27-28)
Therefore it would have been obvious to combine the Pierson and Elzur references for the benefits shown above to obtain the invention as specified in the claim.
3. RELEVANT ART CITED BY THE EXAMINER
The following prior art, made of record and not relied upon, is cited to establish the level of skill in the applicant's art and those arts considered reasonably pertinent to applicant's disclosure, and is considered pertinent to applicant's disclosure. See MPEP 707.05(c). These references include:
Heddes (US 20150339237), which teaches memory controllers employing memory capacity and/or bandwidth compression with next read address prefetching, and related processor-based systems and methods are disclosed. In certain aspects, memory controllers are employed that can provide memory capacity compression. In certain aspects disclosed herein, a next read address prefetching scheme can be used by a memory controller to speculatively prefetch data from system memory at another address beyond the currently accessed address. Thus, when memory data is addressed in the compressed memory, if the next read address is stored in metadata associated with the memory block at the accessed address, the memory data at the next read address can be prefetched by the memory controller to be available in case a subsequent read operation issued by a central processing unit (CPU) has been prefetched by the memory controller.
4. CLOSING COMMENTS
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PRASITH THAMMAVONG whose telephone number is (571) 270-1040. The examiner can normally be reached Monday - Friday 12-8 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Arpan Savla, can be reached at (571) 272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PRASITH THAMMAVONG/
Primary Examiner, Art Unit 2137