DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-11 and 13-21 are rejected under 35 U.S.C. 103 as being unpatentable over Yeung et al. (US 11,126,550), hereinafter Yeung, in view of Sankaran et al. (US 10,257,264), hereinafter Sankaran.
Per claim 1, Yeung discloses: a memory controller; a circuit board having memory mounted to the circuit board; and a processor core configured to: (fig. 9; fig. 12; col. 19 lines 50-65; fig. 1a; a single independent IC chip 902 having multiple processor cores 910-912, each with access to cache and a cache controller 914-916, respectively) generate a memory request for data stored in the mounted memory; (fig. 13, 1316; col. 24 lines 30-35; At 1316, method 1300 generates a system memory access request (e.g., a read) having less than 128-bytes of data. The multicore chip facilitates fetch sizes smaller than a standard DRAM page, which is 128-bytes. Accordingly, the access request can be a single cache block (e.g., 64-bytes), half a cache block (e.g. 32 bytes), or even smaller numbers of data: e.g., 16 bytes, 8 bytes, 4 bytes, 1 byte, etc; col. 24, lines 15-20; At 1304, method 1300 can access cache memory to satisfy a memory requirement of the process thread. At 1306, a determination is made as to whether the cache access results in a cache hit. If a cache hit occurs, method 1300 can proceed to 1308. Otherwise, method 1300 proceeds to 1316) and in response to transmission of the memory request, cause a subset of a cache block of data, having bit values that correspond to the selective transfer criteria, to be returned to the processor core (fig. 13, 1316-1318; col. 24 lines 48-51; At 1324, method 1300 can comprise obtaining less than 128 bytes from system memory in response to the memory request of reference number 1318; fig. 9, col. 19 lines 50-65; Upon a cache miss, a request to main memory by way of an on-chip networking architecture 920 is issued. The request can be for a full cache block, in some embodiments, portions of a cache block in other embodiments (e.g., a minimum fetch size, such as 8 bytes or other suitable minimum fetch size), or multiple cache blocks.
Memory controller associated with resistive main memory 930 can return data associated with the memory requests).
Yeung discloses a read request for a partial block but does not specifically disclose a request with a transfer criterion: causing the circuit board to compare respective values of data bits in a cache block of data to selective transfer criteria by transmitting the memory request to the circuit board with selective transfer criteria.
However, Sankaran discloses: cause the circuit board to compare respective values of data bits in a cache block of data to selective transfer criteria by transmitting the memory request to the circuit board with selective transfer criteria (Abstract; the webserver receives a request sent by a requestor having requestor parameters including at least a requestor ID and a user ID, identifies a predictive cache block set;).
It would have been obvious to one having ordinary skill in the art at the effective filing date of the invention to combine the teachings of Yeung and Sankaran to limit the data request. Sankaran improves data latency (col. 1 lines 40-57; system for reducing data center latency includes a webserver having a processor, a memory system controller coupled to the processor, external memory coupled to the memory system controller, cache memory coupled to the memory system controller, the cache memory including a plurality of cache blocks, where each cache block includes provider parameters and at least one user identifier (ID) and program memory coupled to the memory system controller including code segments executable by the processor).
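For illustration only, the mechanism arrived at by the combination, a memory request carrying selective transfer criteria, where memory-side logic returns only the portion of a cache block whose data values satisfy the criteria, can be sketched as follows (the block contents, predicate, and function names are hypothetical and not taken from either reference):

```python
# Hypothetical 64-byte cache block resident in the mounted memory.
CACHE_BLOCK = bytes(range(64))

def selective_fetch(block, criteria):
    """Return only the bytes of the block whose values satisfy the
    selective transfer criteria (a predicate carried with the request)."""
    return bytes(b for b in block if criteria(b))

# A request whose criteria select only byte values >= 32: half the
# block is returned to the requestor instead of the full block.
subset = selective_fetch(CACHE_BLOCK, lambda b: b >= 32)
```

The sketch shows the combined result only: Yeung supplies the partial-block return path, while the criteria-driven comparison is the feature attributed to Sankaran.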
Per claim 2, Yeung discloses: wherein the data stored in the mounted memory is stored as a plurality of cache blocks that includes the cache block of data, wherein each of the plurality of cache blocks is configured as storing a first amount of data, and wherein the subset of the cache block of data comprises a second amount of data that is smaller than the first amount of data (col. 24 lines 30-35; At 1316, method 1300 generates a system memory access request (e.g., a read) having less than 128-bytes of data. The multicore chip facilitates fetch sizes smaller than a standard DRAM page, which is 128-bytes. Accordingly, the access request can be a single cache block (e.g., 64-bytes), half a cache block (e.g. 32 bytes), or even smaller numbers of data: e.g., 16 bytes, 8 bytes, 4 bytes, 1 byte, etc; col. 24, lines 15-20;).
Per claim 3, Yeung discloses: wherein in response to the processor core transmitting the memory request to the circuit board, the circuit board transmits the cache block of the data to the memory controller and the memory controller transmits the subset of the cache block of data to the processor core (fig. 9, col. 19 line 50-65; Upon a cache miss, a request to main memory by way of an on-chip networking architecture 920 is issued. The request can be for a full cache block, in some embodiments, portions of a cache block in other embodiments (e.g., a minimum fetch size, such as 8 bytes or other suitable minimum fetch size), or multiple cache blocks. Memory controller associated with resistive main memory 930 can return data associated with the memory requests).
Per claim 4, Yeung discloses: further comprising a cache system, wherein the memory controller is configured to write the subset of the cache block of data to a cache of the cache system (fig. 9 comp. 914-916) for subsequent access by the processor core (fig. 9, col. 19 line 50-65; Upon a cache miss, a request to main memory by way of an on-chip networking architecture 920 is issued. The request can be for a full cache block, in some embodiments, portions of a cache block in other embodiments (e.g., a minimum fetch size, such as 8 bytes or other suitable minimum fetch size), or multiple cache blocks. Memory controller associated with resistive main memory 930 can return data associated with the memory requests).
Per claim 5, Yeung discloses: wherein in response to the processor core transmitting the memory request to the circuit board, the circuit board transmits the subset of the cache block of data to the memory controller for transmission to the processor core (fig. 9, col. 19 line 50-65; Upon a cache miss, a request to main memory by way of an on-chip networking architecture 920 is issued. The request can be for a full cache block, in some embodiments, portions of a cache block in other embodiments (e.g., a minimum fetch size, such as 8 bytes or other suitable minimum fetch size), or multiple cache blocks. Memory controller associated with resistive main memory 930 can return data associated with the memory requests).
Per claim 6, Yeung discloses: wherein the processor core is further configured to allocate the mounted memory for a computational task that involves accessing the subset of the cache block of data by defining a range of memory addresses as corresponding to a selective access response and the circuit board is caused to return the subset of the cache block of data to the processor core in response to the memory request specifying a memory address included in the range of memory addresses (fig. 9, col. 19 line 50-65; Upon a cache miss, a request to main memory by way of an on-chip networking architecture 920 is issued. The request can be for a full cache block, in some embodiments, portions of a cache block in other embodiments (e.g., a minimum fetch size, such as 8 bytes or other suitable minimum fetch size), or multiple cache blocks. Memory controller associated with resistive main memory 930 can return data associated with the memory requests; (the examiner equates the range of addresses as a subset of the cache. It is obvious to one of ordinary skill in the art that the pages within the block have corresponding addresses)).
Per claim 7, Yeung discloses: wherein the generation of the memory request further includes embedding a hint in the memory request that defines the subset of the cache block of data to be returned in response to the memory request instead of an entirety of the cache block of data (col. 24 lines 30-35; At 1316, method 1300 generates a system memory access request (e.g., a read) having less than 128-bytes of data. The multicore chip facilitates fetch sizes smaller than a standard DRAM page, which is 128-bytes. Accordingly, the access request can be a single cache block (e.g., 64-bytes), half a cache block (e.g. 32 bytes), or even smaller numbers of data: e.g., 16 bytes, 8 bytes, 4 bytes, 1 byte, etc; col. 24, lines 15-20; the examiner notes that the hint is merely a function that specifies a subset of a cache block. See applicant's specification at ¶0012).
Per claim 8, the combined teaching of Yeung and Sankaran discloses: wherein the processor core is configured to generate the memory request by embedding selective transfer criteria in the memory request, wherein the selective transfer criteria causes the circuit board to transmit the cache block of data to the memory controller and causes the memory controller to: analyze the cache block of data; identify a subset of data bits that satisfy the selective transfer criteria by comparing respective values of data bits in the cache block of data to the selective transfer criteria; and output the subset of data bits that satisfy the selective transfer criteria as the subset of the cache block of data (Yeung; col. 24 lines 30-35; At 1316, method 1300 generates a system memory access request (e.g., a read) having less than 128-bytes of data. The multicore chip facilitates fetch sizes smaller than a standard DRAM page, which is 128-bytes. Accordingly, the access request can be a single cache block (e.g., 64-bytes), half a cache block (e.g. 32 bytes), or even smaller numbers of data: e.g., 16 bytes, 8 bytes, 4 bytes, 1 byte, etc; col. 24, lines 15-20; the examiner notes that the embedded selective criteria is interpreted as causing a selection of bits as disclosed by Sankaran: Abstract; the webserver receives a request sent by a requestor having requestor parameters including at least a requestor ID and a user ID, identifies a predictive cache block set).
Per claim 9, Yeung discloses: wherein the memory controller is configured to output the subset of data bits that satisfy the selective transfer criteria as the subset of the cache block of data by communicating the subset of the cache block of data to the processor core (fig. 9, col. 19 line 50-65; The request can be for a full cache block, in some embodiments, portions of a cache block in other embodiments (e.g., a minimum fetch size, such as 8 bytes or other suitable minimum fetch size), or multiple cache blocks. Memory controller associated with resistive main memory 930 can return data associated with the memory requests).
Per claim 10, Yeung discloses: further comprising a cache system, wherein the memory controller is configured to output the subset of data bits that satisfy the selective transfer criteria as the subset of the cache block of data by writing the subset of the cache block of data to a cache of the cache system (fig. 9, col. 19 line 50-65; Upon a cache miss, a request to main memory by way of an on-chip networking architecture 920 is issued. The request can be for a full cache block, in some embodiments, portions of a cache block in other embodiments (e.g., a minimum fetch size, such as 8 bytes or other suitable minimum fetch size), or multiple cache blocks. Memory controller associated with resistive main memory 930 can return data associated with the memory requests).
Per claim 11, Yeung discloses: wherein the selective transfer criteria specifies an amount of bits in the cache block that are to be included in the subset of the cache block of data (col. 24 lines 30-35; At 1316, method 1300 generates a system memory access request (e.g., a read) having less than 128-bytes of data. The multicore chip facilitates fetch sizes smaller than a standard DRAM page, which is 128-bytes. Accordingly, the access request can be a single cache block (e.g., 64-bytes), half a cache block (e.g. 32 bytes), or even smaller numbers of data: e.g., 16 bytes, 8 bytes, 4 bytes, 1 byte, etc).
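For illustration only, Yeung's cited sub-block fetch sizes (a request specifying an amount of data smaller than the 128-byte DRAM page, per col. 24) can be sketched as follows (the function and variable names are hypothetical):

```python
# Per the cited passage of Yeung: fetches smaller than the standard
# 128-byte DRAM page, e.g., a cache block, half a block, or less.
PAGE_BYTES = 128
SUB_BLOCK_FETCH_SIZES = [64, 32, 16, 8, 4, 1]  # bytes, per Yeung col. 24

def fetch(page, size):
    """Return only the requested amount of data from a memory page,
    provided the size is a permitted sub-block fetch size."""
    if size not in SUB_BLOCK_FETCH_SIZES or size >= PAGE_BYTES:
        raise ValueError("not a permitted sub-block fetch size")
    return page[:size]

page = bytes(PAGE_BYTES)
half_block = fetch(page, 32)  # half a 64-byte cache block
```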
Per claim 13, Yeung discloses: further comprising a cache system, wherein the generation of the memory request further includes embedding selective transfer criteria in the memory request that causes the circuit board to transmit the cache block of data to the cache system and causes the cache system to: analyze the cache block of data; identify a subset of data bits that satisfy the selective transfer criteria; (col. 24 lines 30-35; At 1316, method 1300 generates a system memory access request (e.g., a read) having less than 128-bytes of data. The multicore chip facilitates fetch sizes smaller than a standard DRAM page, which is 128-bytes. Accordingly, the access request can be a single cache block (e.g., 64-bytes), half a cache block (e.g. 32 bytes), or even smaller numbers of data: e.g., 16 bytes, 8 bytes, 4 bytes, 1 byte, etc) and write the subset of data bits that satisfy the selective transfer criteria to a cache of the cache system as the subset of the cache block of data for subsequent access by the processor core (col. 24 lines 30-35; At 1316, method 1300 generates a system memory access request (e.g., a read) having less than 128-bytes of data. The multicore chip facilitates fetch sizes smaller than a standard DRAM page, which is 128-bytes. Accordingly, the access request can be a single cache block (e.g., 64-bytes), half a cache block (e.g. 32 bytes), or even smaller numbers of data: e.g., 16 bytes, 8 bytes, 4 bytes, 1 byte, etc).
Per claim 14, Yeung discloses: wherein the cache system is further caused to invalidate other bits in the cache block of data that fail to satisfy the selective transfer criteria (col. 24 lines 30-35; At 1316, method 1300 generates a system memory access request (e.g., a read) having less than 128-bytes of data. The multicore chip facilitates fetch sizes smaller than a standard DRAM page, which is 128-bytes. Accordingly, the access request can be a single cache block (e.g., 64-bytes), half a cache block (e.g. 32 bytes), or even smaller numbers of data: e.g., 16 bytes, 8 bytes, 4 bytes, 1 byte, etc; the examiner notes that neither the claim nor the specification defines how the bits in the cache block are invalidated. The examiner interprets the invalidated bits as any bit outside of the selected bits.).
Claims 15-19 are the device claims corresponding to system claims 1-14 and are rejected for the same reasons set forth in connection with the rejection of claims 1-14.
Claim 20 is the device claim corresponding to system claim 1 and is rejected for the same reasons set forth in connection with the rejection of claim 1. The examiner notes that the limitation “without returning a portion of the cache block of data not included in the subset of the cache block data” is merely a result/function of the selection of the subset of the cache block as disclosed in the rejection of claim 1 supra.
Per claim 21, Yeung discloses: wherein the processor core is further configured to allocate a region of memory addresses for which the selective transfer criteria are satisfied, and wherein the circuit board is configured to apply the selective transfer criteria to memory requests for addresses within the allocated region (col. 24 lines 30-35; At 1316, method 1300 generates a system memory access request (e.g., a read) having less than 128-bytes of data. The multicore chip facilitates fetch sizes smaller than a standard DRAM page, which is 128-bytes. Accordingly, the access request can be a single cache block (e.g., 64-bytes), half a cache block (e.g. 32 bytes), or even smaller numbers of data: e.g., 16 bytes, 8 bytes, 4 bytes, 1 byte, etc).
Response to Arguments
Applicant's arguments filed 11/26/25 have been fully considered but they are not persuasive.
The applicant argues: When an incoming request is received, Sankaran identifies a provider identifier and a user identifier associated with the request (See, e.g., Sankaran col. 4, lines 20-32). A set of cache blocks corresponding to the provider identifier is first identified (Id.). A different set of cache blocks corresponding to the user identifier is also identified (Id.). Finally, an intersection set of the provider set 54 and the user set 56 is identified, and entire cache blocks within the intersection set are returned in response to the request (Id.).
Thus, while Sankaran discusses comparing identifiers associated with entire cache blocks to identifiers included with a request for data, Sankaran does not suggest "cause the circuit board to compare respective values of data bits in a cache block of data to selective transfer criteria by transmitting the memory request to the circuit board with the selective transfer criteria; and in response to transmission of the memory request, cause a subset of the cache block of data, having bit values that satisfy the selective transfer criteria, to be returned to the processor core," as recited in amended claim 1 (emphasis added).
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
The examiner respectfully disagrees and asserts that the combined teachings of Yeung and Sankaran disclose: cause the circuit board to compare respective values of data bits in a cache block of data to selective transfer criteria by transmitting the memory request to the circuit board with the selective transfer criteria; and in response to transmission of the memory request, cause a subset of the cache block of data, having bit values that satisfy the selective transfer criteria, to be returned to the processor core. Firstly, the examiner notes that the applicant argues the references individually, whereas the rejection relies on the combination of Yeung’s teachings as modified by Sankaran’s teachings. Yeung is relied upon to teach a read request for data and transmitting the partial block of data. Sankaran is relied upon to teach a request with parameters which, through a comparison step, identifies a specific cache block set to transmit back to the requestor. Yeung’s request, modified with Sankaran’s parameters used to compare and select a specific cache block set, teaches causing the circuit board to compare respective values of data bits in a cache block of data to selective transfer criteria by transmitting the memory request to the circuit board with the selective transfer criteria; and in response to transmission of the memory request, causing a subset of the cache block of data, having bit values that satisfy the selective transfer criteria, to be returned to the processor core.
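For illustration only, the Sankaran-style selection relied upon in the combination, where request parameters (provider and user identifiers) are compared against stored identifiers to identify an intersection set of cache blocks to return, can be sketched as follows (the identifier values and data structures are hypothetical):

```python
# Hypothetical mappings from identifiers to sets of cache-block indices,
# modeling Sankaran's provider set and user set (cited at col. 4).
provider_blocks = {"provider_1": {0, 1, 2, 3}, "provider_2": {4, 5}}
user_blocks = {"user_A": {1, 3, 5}, "user_B": {0, 4}}

def predictive_cache_set(provider_id, user_id):
    """Compare the request's identifiers against stored identifiers and
    return the intersection set of matching cache blocks."""
    return provider_blocks[provider_id] & user_blocks[user_id]

# A request carrying ("provider_1", "user_A") selects blocks {1, 3}.
selected = predictive_cache_set("provider_1", "user_A")
```

In the combination, this parameter-driven comparison supplies the selection step, while Yeung's sub-block fetch supplies the partial-block return.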
Remarks
Examiner respectfully requests that, in response to this Office action, support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to page(s) and line number(s) in the specification and/or drawing figure(s). This will assist Examiner in prosecuting the application.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BABOUCARR FAAL whose telephone number is (571)270-5073. The examiner can normally be reached M-F 8:30-5:30 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tim Vo, can be reached at 571-272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
BABOUCARR FAAL
Primary Examiner
Art Unit 2138
/BABOUCARR FAAL/Primary Examiner, Art Unit 2138