Prosecution Insights
Last updated: April 18, 2026
Application No. 18/981,038

REDUCE DATA TRAFFIC BETWEEN CACHE AND MEMORY VIA DATA ACCESS OF VARIABLE SIZES

Non-Final OA: §102, §103, §112, Double Patenting
Filed: Dec 13, 2024
Examiner: RIGOL, YAIMA
Art Unit: 2135
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Micron Technology, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 2m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 75%, above average (464 granted / 619 resolved; +20.0% vs Tech Center average)
Interview Lift: strong, +17.5% among resolved cases with interview
Typical Timeline: 3y 2m average prosecution; 18 applications currently pending
Career History: 637 total applications across all art units

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§102: 9.2% (-30.8% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§112: 17.5% (-22.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 619 resolved cases.

Office Action

Rejections: §102, §103, §112, Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

The instant application, Application No. 18/981,038, has a total of 20 claims pending: 3 independent claims and 17 dependent claims, all of which are ready for examination by the examiner.

The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. The specification should be amended to reflect the status of all related applications, whether patented or abandoned. Applications noted by serial number and/or attorney docket number should therefore be updated with the correct serial number, and with the patent number if patented.

In the response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or the drawing figure(s). This will assist the Examiner in prosecuting this application.

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of each passage as taught by the prior art or disclosed by the examiner.

INFORMATION CONCERNING DRAWINGS

The applicant's drawings as submitted are acceptable for examination purposes.

STATUS OF CLAIM FOR PRIORITY IN THE APPLICATION

The instant application no.
18981038, filed 12/13/2024, is a Continuation of Application No. 17/563,985, filed 12/28/2021, now U.S. Patent No. 12,169,454. Application No. 17/563,985 is a Continuation of Application No. 16/183,661, filed 11/07/2018, now U.S. Patent No. 11,237,970.

ACKNOWLEDGEMENT OF REFERENCES CITED BY APPLICANT

As required by M.P.E.P. 609(C), the applicant's submission of the Information Disclosure Statement dated 12/18/2024 is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609(C)(2), a copy of the PTOL-1449 initialed and dated by the examiner is attached to the instant Office action.

REJECTIONS NOT BASED ON PRIOR ART

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

As per claim 2, the limitation "wherein the item selection vector comprises a variable number of items" renders the claim indefinite, since it is not clear how the number of items in the item selection vector of claim 1 is variable when each item selection vector appears to be of a certain size. For example, figures 4 and 5 of the specification provide "examples of an item selection vector" (par.
0009 of US 20250110881, corresponding to the instant application), and the Specification further explains: "[0038] The item selection vector (109) of FIG. 4 can have a variable size that corresponds to the number of items (169) identified in the item selection vector (109)… [0042] The item selection vector (109) of FIG. 5 can have a variable size that corresponds to the number of index pairs (179) identified in the item selection vector (109)." This suggests that different item selection vectors have different sizes, or are of variable size with respect to one another, but that the size of any particular item selection vector is fixed by the number of items it identifies. It is suggested that claim 2 be clarified in accordance with Applicant's Specification. Appropriate correction/clarification is required.

REJECTIONS BASED ON PRIOR ART

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
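For context on the mechanism discussed in the §112 rejection above, the cited paragraphs [0038] and [0042] describe two variable-size encodings of an item selection vector: a list of individual items (FIG. 4) and a list of index pairs identifying ranges (FIG. 5). A minimal illustrative sketch of those two encodings follows; all names are hypothetical, and this is an editor's illustration, not the applicant's implementation:

```python
# Illustrative sketch only (hypothetical names; not the applicant's code).
# Two encodings of an "item selection vector" over a block of data stored at
# contiguous addresses, mirroring the spec's FIG. 4 (list of items) and
# FIG. 5 (list of index pairs). Each vector's size varies with how many
# items or ranges it identifies, even though any one vector has a fixed size.

def select_by_items(block, items):
    """FIG. 4 style: the vector lists individual item indices to access."""
    return [block[i] for i in items]

def select_by_index_pairs(block, index_pairs):
    """FIG. 5 style: the vector lists (start, end) index pairs, each
    identifying a range of the block; only those ranges are accessed."""
    selected = []
    for start, end in index_pairs:
        selected.extend(block[start:end + 1])
    return selected

block = list(range(100, 108))   # an 8-item block at contiguous addresses

vec_a = [0, 3, 5]               # identifies 3 items -> smaller vector
vec_b = [(1, 2), (4, 6)]        # identifies 2 ranges covering 5 items

print(select_by_items(block, vec_a))        # [100, 103, 105]
print(select_by_index_pairs(block, vec_b))  # [101, 102, 104, 105, 106]
```

The sketch shows the point at issue: vec_a and vec_b have different lengths (the "variable" aspect across vectors), yet each one's length is determined by the number of items or pairs it contains.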
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

Note that MPEP 804, subsection I.B.1, states:

A complete response to a nonstatutory double patenting (NDP) rejection is either a reply by applicant showing that the claims subject to the rejection are patentably distinct from the reference claims or the filing of a terminal disclaimer in accordance with 37 CFR 1.321 in the pending application(s) with a reply to the Office action (see MPEP § 1490 for a discussion of terminal disclaimers). Such a response is required even when the nonstatutory double patenting rejection is provisional.

As filing a terminal disclaimer, or filing a showing that the claims subject to the rejection are patentably distinct from the reference application's claims, is necessary for further consideration of the rejection of the claims, such a filing should not be held in abeyance. Only objections or requirements as to form not necessary for further consideration of the claims may be held in abeyance until allowable subject matter is indicated. Therefore, an application must not be allowed unless the required compliant terminal disclaimer(s) is/are filed and/or the withdrawal of the nonstatutory double patenting rejection(s) is made of record by the examiner. See MPEP § 804.02, subsection VI, for filing terminal disclaimers required to overcome nonstatutory double patenting rejections in applications filed on or after June 8, 1995.
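Several claims of the reference patents compared in the charts below recite a bit-sequence form of the item selection vector: a sequence of bits corresponding to the addresses of the block, where the count of bits corresponds to the count of addresses, and only the marked portion is communicated to the cache. A minimal sketch of that encoding, with hypothetical names and offered only as an editor's illustration:

```python
# Hedged sketch (hypothetical names) of the bit-sequence encoding recited in
# the reference patents' claims: one bit per address in the block, with set
# bits marking the first portion to be communicated to the second memory.
def cache_portion(block, selection_bits):
    # The count of bits must equal the count of addresses for the block.
    assert len(selection_bits) == len(block)
    # Communicate only the words whose corresponding bit is set; the
    # unselected second portion is never accessed.
    return [word for word, bit in zip(block, selection_bits) if bit]

block = [10, 11, 12, 13]                   # block at contiguous addresses
print(cache_portion(block, [1, 0, 1, 1]))  # [10, 12, 13]
```

The cached first portion ([10, 12, 13]) is smaller than the block, which is the data-traffic reduction the application's title describes.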
Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-22 of US 12,169,454 (Corresponding to Application No. 17/563,985). Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims in the co-pending application disclose/obviate the subject matter of the claims in the instant application. Claims of the instant application are compared to claims of the patent in the following table:

Instant Application:

1. A device, comprising: a processor configured to access data using memory addresses in an address space; a first memory configured to store a block of data at a block of contiguous addresses in the address space; and a second memory configured to store an item selection vector that identifies a first portion of the block of data, wherein the processor is configured to access the first portion of the block of data based on the item selection vector.

2. The device of claim 1, wherein the item selection vector comprises a variable number of items.

10. The device of claim 1, wherein the processor is further configured to retrieve the first portion of the block of data from the first memory according to the item selection vector.

11. The device of claim 10, wherein the processor is further configured to retrieve the first portion of the block of data from the first memory without accessing a second portion of the block of data in the first memory.

7. The device of claim 1, wherein the item selection vector identifies each of a plurality of addresses in which the first portion of the block of data is stored.

8. The device of claim 1, wherein the item selection vector identifies a range of addresses in which the first portion of the block of data is stored.

9.
The device of claim 1, wherein the item selection vector identifies a range of addresses in which the first portion of the block of data is not stored.

12. The device of claim 1, wherein the processor is further configured to cache, in the second memory, the first portion of the block of data identified by the item selection vector.

13. A method, comprising: storing, in a first memory of a computing system, a block of data at a block of contiguous memory addresses in an address space; and in response to a request to cache the block of data stored in the first memory: accessing, by a processor of the computing device, a first portion of the block of data from the first memory according to an item selection vector stored in a second memory of the computing system.

14. The method of claim 13, wherein the item selection vector comprises a sequence of bits corresponding to a plurality of addresses for the block of data.

15. The method of claim 14, wherein the item selection vector further comprises a count of bits that corresponds to a count of the plurality of addresses for the block of data.

16. The method of claim 13, further comprising caching, in the second memory of the computing system, the first portion of the block of data identified by the item selection vector.

17. The method of claim 13, further comprising retrieving the first portion of the block of data from the first memory according to the item selection vector.

18. The method of claim 17, retrieving the first portion of the block of data from the first memory without accessing a second portion of the block of data in the first memory.

19.
A device, comprising: a processor configured to access data using memory addresses in an address space; and a memory configured to store a block of data in the address space; and wherein the processor is configured to access a portion of the block of data based on an item selection vector that identifies multiple addresses representative of where the portion of the block of data is stored in the memory.

20. The device of claim 19, wherein the item selection vector identifies a range of addresses in which the portion of the block of data is stored.

1. A device, comprising: a processor configured to access data using memory addresses in an address space; a first memory configured to store a block of data at a block of contiguous addresses in the address space; and a second memory configured to store an item selection vector that identifies a first portion of the block of data, wherein the processor is configured to access the first portion of the block of data based on the item selection vector.

3. The device of claim 1, wherein the item selection vector is a first item selection vector, and wherein the second memory is configured to store a second item selection vector that identifies a second portion of the block of data.

4. The device of claim 3, wherein the processor is further configured to access the second portion of the block of data based on the second item selection vector.

5. The device of claim 3, wherein the first portion of the block of data occupies a different number of contiguous addresses in the address space than the second portion of the block of data.

6. The device of claim 3, wherein the first item selection vector identifies a different size portion of the block of data than the second item selection vector.

US 12,169,454 (Corresponding to Application No. 17/563,985):

1.
A computing device, comprising: a processor configured to access data using memory addresses in an address space; a first memory configured to store a block of data at a block of contiguous addresses in the address space; and a second memory configured to cache, in response to the processor accessing a memory address in the address space and a determination that data stored in the first memory at the memory address is not already cached in the second memory, a first portion of the block of data identified by an item selection vector, wherein the item selection vector comprises a variable number of items.

2. The computing device of claim 1, wherein the item selection vector has a sequence of bits corresponding to a plurality of addresses for the block of data, and a count of bits in the item selection vector corresponds to a count of the plurality of addresses for the block of data; wherein the computing device is configured to communicate the first portion of the block of data from the first memory to the second memory according to the item selection vector; and wherein the first portion of the block of data is smaller than the block of data.

3. The computing device of claim 2, wherein the computing device is configured to communicate the first portion from the first memory to the second memory without communicating a second portion of the block of data in response to the determination.

4. The computing device of claim 3, wherein the second memory is configured to store tag information identifying the block of contiguous addresses among a plurality of blocks of contiguous addresses; and the plurality of blocks of contiguous addresses have a same size; and different blocks in the plurality of blocks are cached in different cache blocks in the second memory.

5. The computing device of claim 4, wherein the different cache blocks in the second memory have different sizes.

6.
The computing device of claim 4, wherein the different cache blocks in the second memory have a same size but have different sizes of cached portions of data from the different blocks in the first memory.

7. The computing device of claim 4, wherein each of the different cache blocks stores a separate item selection vector.

8. The computing device of claim 7, wherein item selection vectors of the different cache blocks have different sizes.

9. The computing device of claim 1, wherein the item selection vector has a list of indices identifying the first portion of the block of data.

10. The computing device of claim 1, wherein the variable number of items is identified in the item selection vector.

11. The computing device of claim 1, wherein the item selection vector has a list of index pairs, each identifying a range of the block of contiguous addresses in the address space.

See claim 2 above, other addresses other than first portion are identified in the item selection vector. See claim 1 above.

12.
A method, comprising: storing, in a first memory of a computing system, a block of data at a block of contiguous memory addresses in an address space; accessing, by a processor of the computing system, data using memory addresses in the address space; and in response to the processor accessing a memory address in the address space and a determination that data stored in the first memory at the memory address is not already cached in a second memory, communicating a first portion of the block of data from the first memory to the second memory of the computing system according to an item selection vector, wherein the item selection vector has a sequence of bits corresponding to a plurality of addresses for the block of data, and a count of bits in the item selection vector corresponds to a count of the plurality of addresses for the block of data, and wherein the item selection vector comprises a variable number of items; and caching, in the second memory of the computing system, the first portion of the block of data identified by the item selection vector, wherein the first portion of the block of data is smaller than the block of data.

See claim 12 above.

13. The method of claim 12, wherein in response to the determination, the communicating of the first portion is performed according to the item selection vector without accessing a second portion of the block of data; and the second portion is communicated from the first memory to the second memory when an alternative item selection vector is used to cache the block of data.

14. The method of claim 13, wherein a plurality of blocks of contiguous memory addresses have a same size; and the method further comprises: storing in the second memory tag information identifying the block of contiguous memory addresses among the plurality of blocks of contiguous memory addresses; and caching different blocks in the plurality of blocks of contiguous memory addresses in different cache blocks in the second memory.

15.
The method of claim 14, wherein the different cache blocks in the second memory have different data storage capacities.

16. The method of claim 14, wherein the different cache blocks in the second memory have a same size but have different sizes of cached portions of data from the different blocks in the first memory.

17. The method of claim 14, further comprising: storing a separate item selection vector for each of the different cache blocks.

18. The method of claim 12, wherein the communicating of the first portion of the block of data from the first memory to the second memory comprises: transmitting the item selection vector to a controller of the first memory; retrieving the first portion of the block of data from the first memory according to the item selection vector; and transmitting the first portion of the block of data in a batch to the second memory.

19. A non-transitory computer storage medium storing instructions which when executed in a computing system, cause the computing system to perform a method, the method comprising: storing, in a first memory of the computing system, a block of data at a block of contiguous memory addresses in an address space; accessing, by a processor of the computing system, data using memory addresses in the address space; and in response to the processor accessing a memory address in the address space and a determination that data stored in the first memory at the memory address is not already cached in a second memory, communicating a first portion of the block of data from the first memory to the second memory of the computing system according to an item selection vector without accessing a second portion of the block of data, wherein the item selection vector has a sequence of bits corresponding to a plurality of addresses for the block of data, and a count of bits in the item selection vector corresponds to a count of the plurality of addresses for the block of data, wherein the item selection vector comprises a
variable number of items; and caching, in the second memory of the computing system, the first portion of the block of data identified by the item selection vector, wherein the first portion of the block of data is smaller than the block of data.

20. The non-transitory computer storage medium of claim 19, wherein the method further comprises: caching data from different blocks of the first memory of a same size in different cache blocks of different sizes in the second memory; storing tag information for the different cache blocks to identify the different blocks in the first memory respectively; and storing different item selection vectors for the different cache blocks respectively.

21. A computing device, comprising: a processor configured to access data using memory addresses in an address space; a first memory configured to store a first block of data at a first block of contiguous addresses in the address space; and a second memory configured to cache a first portion of the first block of data identified by a first item selection vector, wherein the first item selection vector has a first sequence of bits corresponding to a plurality of addresses for the first block of data, and a first count of bits in the first item selection vector corresponds to a first count of the plurality of addresses for the first block of data, wherein: the computing device is configured to communicate the first portion of the first block of data from the first memory to the second memory according to the first item selection vector, in response to a first request to cache the first block of data stored in the first memory, the first memory is further configured to store a second block of data at a second block of continuous addresses in the address space, the second memory is further configured to cache a first portion of the second block of data identified by a second item selection vector, wherein the second item selection vector has a second sequence of bits corresponding to a
second plurality of addresses for the second block of data, the computing device is further configured to communicate the first portion of the second block of data from the first memory to the second memory according to the second item selection vector, in response to a second request to cache the second block of data stored in the first memory, and the first item selection vector and the second item selection vector are different sizes, wherein the first item selection vector comprises a different number of items than the second item selection vector.

See contiguous addresses in claim 21 above. See claim 11 above.

22. A computing device, comprising: a processor configured to access data using memory addresses in an address space; a first memory configured to store a first block of data at a first block of contiguous addresses in the address space; and a second memory configured to cache a first portion of the first block of data identified by a first item selection vector, wherein the first item selection vector has a first sequence of bits corresponding to a plurality of addresses for the first block of data, and a first count of bits in the first item selection vector corresponds to a first count of the plurality of addresses for the first block of data, wherein: the computing device is configured to communicate the first portion of the first block of data from the first memory to the second memory according to the first item selection vector, in response to a first request to cache the first block of data stored in the first memory, the first memory is further configured to store a second block of data at a second block of continuous addresses in the address space, the second memory is further configured to cache a first portion of the second block of data identified by a second item selection vector, wherein the second item selection vector has a second sequence of bits corresponding to a second plurality of addresses for the second block of data, the computing
device is further configured to communicate the first portion of the second block of data from the first memory to the second memory according to the second item selection vector, in response to a second request to cache the second block of data stored in the first memory, and (see contiguous addresses above) the first portion of the first block of data and the first portion of the second block of data are different sizes, wherein the first item selection vector comprises a different number of items than the second item selection vector.

Claims 1 and 3-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims of US 11,237,970 (Corresponding to Application No. 16/183,661). Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims in the co-pending application disclose/obviate the subject matter of the claims in the instant application. Claims of the instant application are compared to claims of the patent in the following table:

Instant Application:

1. A device, comprising: a processor configured to access data using memory addresses in an address space; a first memory configured to store a block of data at a block of contiguous addresses in the address space; and a second memory configured to store an item selection vector that identifies a first portion of the block of data, wherein the processor is configured to access the first portion of the block of data based on the item selection vector.

3. The device of claim 1, wherein the item selection vector is a first item selection vector, and wherein the second memory is configured to store a second item selection vector that identifies a second portion of the block of data.

4. The device of claim 3, wherein the processor is further configured to access the second portion of the block of data based on the second item selection vector.

5.
The device of claim 3, wherein the first portion of the block of data occupies a different number of contiguous addresses in the address space than the second portion of the block of data.

6. The device of claim 3, wherein the first item selection vector identifies a different size portion of the block of data than the second item selection vector.

7. The device of claim 1, wherein the item selection vector identifies each of a plurality of addresses in which the first portion of the block of data is stored.

8. The device of claim 1, wherein the item selection vector identifies a range of addresses in which the first portion of the block of data is stored.

9. The device of claim 1, wherein the item selection vector identifies a range of addresses in which the first portion of the block of data is not stored.

10. The device of claim 1, wherein the processor is further configured to retrieve the first portion of the block of data from the first memory according to the item selection vector.

11. The device of claim 10, wherein the processor is further configured to retrieve the first portion of the block of data from the first memory without accessing a second portion of the block of data in the first memory.

12. The device of claim 1, wherein the processor is further configured to cache, in the second memory, the first portion of the block of data identified by the item selection vector.

13. A method, comprising: storing, in a first memory of a computing system, a block of data at a block of contiguous memory addresses in an address space; and in response to a request to cache the block of data stored in the first memory: accessing, by a processor of the computing device, a first portion of the block of data from the first memory according to an item selection vector stored in a second memory of the computing system.

14.
The method of claim 13, wherein the item selection vector comprises a sequence of bits corresponding to a plurality of addresses for the block of data.

15. The method of claim 14, wherein the item selection vector further comprises a count of bits that corresponds to a count of the plurality of addresses for the block of data.

16. The method of claim 13, further comprising caching, in the second memory of the computing system, the first portion of the block of data identified by the item selection vector.

17. The method of claim 13, further comprising retrieving the first portion of the block of data from the first memory according to the item selection vector.

18. The method of claim 17, retrieving the first portion of the block of data from the first memory without accessing a second portion of the block of data in the first memory.

19. A device, comprising: a processor configured to access data using memory addresses in an address space; and a memory configured to store a block of data in the address space; and wherein the processor is configured to access a portion of the block of data based on an item selection vector that identifies multiple addresses representative of where the portion of the block of data is stored in the memory.

20. The device of claim 19, wherein the item selection vector identifies a range of addresses in which the portion of the block of data is stored.

US 11,237,970 (Corresponding to Application No. 16/183,661):

1.
A computing device, comprising: a processor configured to access data using memory addresses in an address space; a first memory configured to store a block of data at a block of contiguous addresses in the space of memory address; and a second memory configured to cache a first portion of the block of data identified by an item selection vector, wherein the item selection vector has a sequence of bits corresponding to a plurality of contiguous addresses for the block of data, and a count of bits in the item selection vector corresponds to a count of the plurality of contiguous addresses for the block of data; wherein the computing device is configured to communicate the first portion of the block of data from the first memory to the second memory according to the item selection vector, in response to a request to cache the block of data stored in the first memory; wherein the computing device communicates the first portion from the first memory to the second memory without communicating a second portion of the block of data in response to the request; wherein the second memory is configured to store tag information identifying the block of contiguous addresses among a plurality of blocks of contiguous addresses.

See claim 1 above.

2. The computing device of claim 1, wherein the different cache blocks in the second memory have different sizes.

3. The computing device of claim 1, wherein the different cache blocks in the second memory have a same size but have different sizes of cached portions of data from the different blocks in the first memory.

4. The computing device of claim 1, wherein each of the different cache blocks stores a separate item selection vector.

5. The computing device of claim 4, wherein item selection vectors of the different cache blocks have different sizes.

6. The computing device of claim 1, wherein the item selection vector has a list of indices identifying the portion of the first portion of the block of data.

7.
The computing device of claim 1, wherein the item selection vector has a list of index pairs, each identifying a range of the block of contiguous addresses in the space of memory address. See claim 1 above See claim 1 above See claim 1 above 8. A method, comprising: storing, in a first memory of a computing system, a block of data at a block of contiguous memory addresses in an address space; accessing, by a processor of the computing system, data using memory addresses in the address space; and in response to a request to cache the block of data stored in the first memory, communicating a first portion of the block of data from the first memory to a second memory of the computing system according to an item selection vector, wherein the item selection vector has a sequence of bits corresponding to a plurality of contiguous addresses for the block of data, and a count of bits in the item selection vector corresponds to a count of the plurality of contiguous addresses for the block of data; and caching, in the second memory of the computing system, the first portion of the block of data identified by the item selection vector; wherein in response to the request, the communicating of the first portion is performed according to the item selection vector without accessing a second portion of the block of data. See claim 8 9. The method of claim 8, wherein the plurality of blocks of contiguous memory addresses have a same size; and the method further comprises: storing in the second memory tag information identifying the block of contiguous memory addresses among a plurality of blocks of contiguous memory addresses; and caching different blocks in the plurality of blocks in different cache blocks in the second memory. 10. The method of claim 9, wherein the different cache blocks in the second memory have different data storage capacities. 11. 
The method of claim 9, wherein the different cache blocks in the second memory have a same size but have different sizes of cached portions of data from the different blocks in the first memory. 12. The method of claim 9, further comprising: storing a separate item selection vector for each of the different cache blocks. 13. The method of claim 12, wherein item selection vectors of the different cache blocks have different sizes. 14. The method of claim 8, wherein the communicating of the first portion of the block of data from the first memory to the second memory comprises: transmitting the item selection vector to a controller of the first memory; retrieving the first portion of the block of data from the first memory according to the item selection vector; and transmitting the first portion of the block of data in a batch to the second memory. 15. A non-transitory computer storage medium storing instructions which when executed in a computing system, cause the computing system to perform a method, the method comprising: storing, in a first memory of the computing system, a block of data at a block of contiguous memory addresses in an address space; accessing, by a processor of the computing system, data using memory addresses in the address space; and in response to a request to cache the block of data stored in the first memory, communicating a first portion of the block of data from the first memory to a second memory of the computing system according to an item selection vector without accessing a second portion of the block of data, wherein the item selection vector has a sequence of bits corresponding to a plurality of contiguous addresses for the block of data, and a count of bits in the item selection vector corresponds to a count of the plurality of contiguous addresses for the block of data; and caching, in the second memory of the computing system, the first portion of the block of data identified by the item selection vector; wherein in response to 
the request, the communicating of the first portion is performed according to the item selection vector without accessing the second portion of the block of data; and the second portion is communicated from the first memory to the second memory when an alternative item selection vector is used to cache the block of data. 16. The non-transitory computer storage medium of claim 15, wherein the method further comprises: caching data from different blocks of the first memory of a same size in different cache blocks of different sizes in the second memory; storing tag information for the different cache blocks to identify the different blocks in the first memory respectively; and storing different item selection vectors for the different cache blocks respectively. See contiguous addresses in claim 15 and claim 7 Claim Rejections - 35 USC § 102 The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 1-2, 7-11 and 19-20 is/are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Valentine et al. (US 20140281425). 1. A device, comprising: a processor configured to access data using memory addresses in an address space; [Valentine teaches processor 100 and external memory 110 (fig. 
1 and related text)] a first memory configured to store a block of data at a block of contiguous addresses in the address space; and [Valentine teaches limited range 120 (fig. 1 and related text) where “the limited range 120 may represent only a portion or subset (e.g., a contiguously indexable portion or subset) of the external memory 110.” (par. 0042)] a second memory configured to store an item selection vector that identifies a first portion of the block of data, [Valentine teaches packed data registers 107 (see figs. 1-2 and 4-5 and related text) which store memory indices for a memory access request according to “The limited range vector gather operation” (par. 0064), where “Advantageously, this may help to increase the number of such memory indices that may be stored in a single packed data register as source packed memory indices” (par. 0054) and packed data operation mask registers 108, 208 (figs. 1-2 and related text) “[0092] FIG. 10 is a block diagram of an example embodiment of a suitable set of packed data operation mask registers 1008. Each of the packed data operation mask registers may be used to store a packed data operation mask.” where (figs. 4-5 and related text) “A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0)” (par. 0070)] wherein the processor is configured to access the first portion of the block of data based on the item selection vector [Valentine teaches “The limited range vector gather operation may load or gather data elements from a limited range 420 of a memory 410. 
As discussed previously, the limited range may represent only a small subset (e.g., a contiguous subset capable of being indexed by the 8-bit or 16-bit memory indices) of the overall generally much larger memory (e.g., which may be indexed by other instructions using 32-bit or 64-bit memory indices)... each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” (par. 0064; figs. 4-5 and related text) “ A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0). As shown, in some embodiments, the packed data result may be 512-bits wide and may include sixty-four 8-bit byte data elements. Alternatively, 16-bit word or 32-bit doubleword data elements may be gathered and may be stored in either wider or narrower result packed data. “ (par. 0070) thus, accessing portions of limited range according to the mask bits 516]. 2. 
The device of claim 1, wherein the item selection vector comprises a variable number of items [Valentine teaches “[0052] In some embodiments, the limited range vector memory access instruction 203 may be used to access only the limited range 220 of the memory 210. In some embodiments, the instructions indicate only 8-bit byte or 16-bit word memory indices. Conventional vector gather instructions typically allow the data elements to be gathered from anywhere in memory. As a result, typically either 32-bit or 64-bit memory indices are used. These 32-bit or 64-bit memory indices have enough bits to allow data elements to be potentially gathered from substantially anywhere in memory, or at least from a relatively large amount of memory (e.g., that capable of being addressed by either 32-bits or 64-bits).” Where a variable number of items may be selected (see fig. 5 and related text) “[0092] FIG. 10 is a block diagram of an example embodiment of a suitable set of packed data operation mask registers 1008. Each of the packed data operation mask registers may be used to store a packed data operation mask. In the illustrated embodiment, the set includes eight packed data operation mask registers labeled k0 through k7. Alternate embodiments may include either fewer than eight (e.g., two, four, six, etc.) or more than eight (e.g., sixteen, twenty, thirty-two, etc.) packed data operation mask registers. In the illustrated embodiment, each of the packed data operation mask registers is 64-bits. In alternate embodiments, the widths of the packed data operation mask registers may be either wider than 64-bits (e.g., 80-bits, 128-bits, etc.) or narrower than 64-bits (e.g., 8-bits, 16-bits, 32-bits, etc). By way of example, a masked limited range vector memory access instruction may use three bits (e.g., a 3-bit field) to encode or specify any one of the eight packed data operation mask registers k0 through k7. 
In alternate embodiments, either fewer or more bits may be used when there are fewer or more packed data operation mask registers, respectively.”]. 7. The device of claim 1, wherein the item selection vector identifies each of a plurality of addresses in which the first portion of the block of data is stored [Valentine teaches “The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” (par. 0064). 8. The device of claim 1, wherein the item selection vector identifies a range of addresses in which the first portion of the block of data is stored [Valentine teaches “[0061] Memory locations, in only a limited range of a memory, may be accessed in response to the limited range vector memory access instruction, at block 332. In some embodiments, the limited range may be accessed with one or more memory addresses of 32-bits or 64-bits each. In some embodiments, the limited range may include only 256 bytes. In some embodiments, as will be explained further below, the access may be performed through multiple data element loads that may load multiple data elements each, including both needed and un-needed data elements. Such multi-element loads may help to improve speed or efficiency in some embodiments. In some embodiments, the entire limited range may be loaded from the memory to storage locations of the processor (e.g., on-die registers).” “[0064]… The extent or size of the limited range may be based on the width in bits of the memory indices. 
For example, each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.”]. 9. The device of claim 1, wherein the item selection vector identifies a range of addresses in which the first portion of the block of data is not stored [Valentine teaches “[0061] Memory locations, in only a limited range of a memory, may be accessed in response to the limited range vector memory access instruction, at block 332. In some embodiments, the limited range may be accessed with one or more memory addresses of 32-bits or 64-bits each. In some embodiments, the limited range may include only 256 bytes. In some embodiments, as will be explained further below, the access may be performed through multiple data element loads that may load multiple data elements each, including both needed and un-needed data elements. Such multi-element loads may help to improve speed or efficiency in some embodiments. In some embodiments, the entire limited range may be loaded from the memory to storage locations of the processor (e.g., on-die registers).” “[0064]… The extent or size of the limited range may be based on the width in bits of the memory indices. 
For example, each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” According to fig. 5 and related text, “A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0). As shown, in some embodiments, the packed data result may be 512-bits wide and may include sixty-four 8-bit byte data elements. Alternatively, 16-bit word or 32-bit doubleword data elements may be gathered and may be stored in either wider or narrower result packed data.” (par. 0070) thus, accessing portions of the limited range according to the mask bits 516, which indicate addresses where the data blocks B1-B64 to be accessed are not stored or contain an asterisk]. 10. 
The device of claim 1, wherein the processor is further configured to retrieve the first portion of the block of data from the first memory according to the item selection vector [Valentine teaches “ The limited range vector gather operation may load or gather data elements from a limited range 420 of a memory 410. As discussed previously, the limited range may represent only a small subset (e.g., a contiguous subset capable of being indexed by the 8-bit or 16-bit memory indices) of the overall generally much larger memory (e.g., which may be indexed by other instructions using 32-bit or 64-bit memory indices)... each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” (par. 0064; figs. 4-5 and related text) “ A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0). As shown, in some embodiments, the packed data result may be 512-bits wide and may include sixty-four 8-bit byte data elements. 
Alternatively, 16-bit word or 32-bit doubleword data elements may be gathered and may be stored in either wider or narrower result packed data. “ (par. 0070) thus, accessing portions of limited range according to the mask bits 516]. 11. The device of claim 10, wherein the processor is further configured to retrieve the first portion of the block of data from the first memory without accessing a second portion of the block of data in the first memory [Valentine teaches “ The limited range vector gather operation may load or gather data elements from a limited range 420 of a memory 410. As discussed previously, the limited range may represent only a small subset (e.g., a contiguous subset capable of being indexed by the 8-bit or 16-bit memory indices) of the overall generally much larger memory (e.g., which may be indexed by other instructions using 32-bit or 64-bit memory indices)... each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” (par. 0064; figs. 4-5 and related text) “ A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. 
Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0). As shown, in some embodiments, the packed data result may be 512-bits wide and may include sixty-four 8-bit byte data elements. Alternatively, 16-bit word or 32-bit doubleword data elements may be gathered and may be stored in either wider or narrower result packed data.” (par. 0070) thus, accessing portions of the limited range according to the mask bits 516, where portions B1-B64 indicated with an asterisk are not accessed]. 19. A device, comprising: a processor configured to access data using memory addresses in an address space; and [Valentine teaches processor 100 and external memory 110 (fig. 1 and related text)] a memory configured to store a block of data in the address space; and [Valentine teaches limited range 120 (fig. 1 and related text) where “the limited range 120 may represent only a portion or subset (e.g., a contiguously indexable portion or subset) of the external memory 110.” (par. 0042)] wherein the processor is configured to access a portion of the block of data based on an item selection vector that identifies multiple addresses representative of where the portion of the block of data is stored in the memory [Valentine teaches “The limited range vector gather operation may load or gather data elements from a limited range 420 of a memory 410. As discussed previously, the limited range may represent only a small subset (e.g., a contiguous subset capable of being indexed by the 8-bit or 16-bit memory indices) of the overall generally much larger memory (e.g., which may be indexed by other instructions using 32-bit or 64-bit memory indices)... 
each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein (thus, identifying multiple addresses/locations). For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” (par. 0064; figs. 4-5 and related text) “ A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0). As shown, in some embodiments, the packed data result may be 512-bits wide and may include sixty-four 8-bit byte data elements. Alternatively, 16-bit word or 32-bit doubleword data elements may be gathered and may be stored in either wider or narrower result packed data. “ (par. 0070) thus, accessing portions of limited range according to the mask bits 516]. 20. The device of claim 19, wherein the item selection vector identifies a range of addresses in which the portion of the block of data is stored [Valentine teaches “[0061] Memory locations, in only a limited range of a memory, may be accessed in response to the limited range vector memory access instruction, at block 332. 
In some embodiments, the limited range may be accessed with one or more memory addresses of 32-bits or 64-bits each. In some embodiments, the limited range may include only 256 bytes. In some embodiments, as will be explained further below, the access may be performed through multiple data element loads that may load multiple data elements each, including both needed and un-needed data elements. Such multi-element loads may help to improve speed or efficiency in some embodiments. In some embodiments, the entire limited range may be loaded from the memory to storage locations of the processor (e.g., on-die registers).” “[0064]… The extent or size of the limited range may be based on the width in bits of the memory indices. For example, each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.”]. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 3-6 are rejected under 35 U.S.C. 103 as being unpatentable over Valentine et al. (US 20140281425). 3. The device of claim 1, wherein the item selection vector is a first item selection vector, and wherein the second memory is configured to store a second item selection vector that identifies a second portion of the block of data [Valentine teaches each of the access vectors including a list of indices for an access operation (figs. 3-5 and related text) “Advantageously, this may help to increase the number of such memory indices that may be stored in a single packed data register as source packed memory indices” (par. 0054) “[0069] As shown, in the case of the source packed memory indices 513 being 512-bits wide, and having sixty four 8-bit memory indices, the source packed data operation mask 516 may be 64-bits wide with each bit representing a predicate or mask bit. Alternatively, the source packed data operation mask may have other widths, for example, a width in bits equal to the number of memory indices in the source packed memory indices 513 (e.g., eight, sixteen, thirty two, etc.). In the illustrated example, the mask bits, from least significant (on the left) to most significant (on the right), are 1, 1, 0, 1, 1, 1, 0, . . . 1. This is just one example. 
According to the illustrated convention, a mask bit value of binary 0 represents a masked out element, whereas a mask bit value of binary 1 indicates an unmasked element. For each unmasked element, the associated gather operation is to be performed and the gathered data element is to be stored in the corresponding data element of the packed data result 515. Each mask bit corresponds to a memory index and result data element in a corresponding position. For example, in the illustration the corresponding positions are in vertically alignment one above the other.”], where even though Valentine does not expressly refer to a second item selection vector it would have been obvious to one of ordinary skill in the art to have a different vector indicating access information for different memory locations within limited range (see figs. 3-5 and related text) since doing so would facilitate handling of different memory access operations. 4. The device of claim 3, wherein the processor is further configured to access the second portion of the block of data based on the second item selection vector [Valentine teaches accessing locations in limited range in according to access instructions (figs. 3-5 and related text) where it would have been obvious to one of ordinary skill in the art to have a different vector indicating access information for different memory locations within limited range (see figs. 3-5 and related text) since doing so would facilitate handling of different memory access operations]. 5. The device of claim 3, wherein the first portion of the block of data occupies a different number of contiguous addresses in the address space than the second portion of the block of data [Valentine teaches “[0061] Memory locations, in only a limited range of a memory, may be accessed in response to the limited range vector memory access instruction, at block 332. 
In some embodiments, the limited range may be accessed with one or more memory addresses of 32-bits or 64-bits each. In some embodiments, the limited range may include only 256 bytes. In some embodiments, as will be explained further below, the access may be performed through multiple data element loads that may load multiple data elements each, including both needed and un-needed data elements. Such multi-element loads may help to improve speed or efficiency in some embodiments. In some embodiments, the entire limited range may be loaded from the memory to storage locations of the processor (e.g., on-die registers).” “[0064]… The extent or size of the limited range may be based on the width in bits of the memory indices. For example, each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” Where different number of contiguous addresses may be accessed by each memory access operation as shown in fig. 5, where it would have been obvious to one of ordinary skill in the art to have a different vector indicating access information for different memory locations within limited range (see figs. 3-5 and related text) since doing so would facilitate handling of different memory access operations]. 6. 
The device of claim 3, wherein the first item selection vector identifies a different size portion of the block of data than the second item selection vector [Valentine teaches “[0061] Memory locations, in only a limited range of a memory, may be accessed in response to the limited range vector memory access instruction, at block 332. In some embodiments, the limited range may be accessed with one or more memory addresses of 32-bits or 64-bits each. In some embodiments, the limited range may include only 256 bytes. In some embodiments, as will be explained further below, the access may be performed through multiple data element loads that may load multiple data elements each, including both needed and un-needed data elements. Such multi-element loads may help to improve speed or efficiency in some embodiments. In some embodiments, the entire limited range may be loaded from the memory to storage locations of the processor (e.g., on-die registers).” “[0064]… The extent or size of the limited range may be based on the width in bits of the memory indices. For example, each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein.” Where each operation may access a different portion of the limited range as shown in fig. 5 and related text, for example by accessing different locations of B1-B64. Note it would have been obvious to one of ordinary skill in the art to have a different vector indicating access information for different memory locations within a limited range (see figs. 
3-5 and related text) since doing so would facilitate handling of different memory access operations]. Claims 12-18 are rejected under 35 U.S.C. 103 as being unpatentable over Valentine et al. (US 20140281425) in view of Carter’s “Impulse: Building a Smarter Memory Controller.” 12. Valentine teaches The device of claim 1, but does not expressly disclose wherein the processor is further configured to cache, in the second memory, the first portion of the block of data identified by the item selection vector; however, caching of accessed memory portions is known in the arts, as evidenced by Carter, [Carter's "Impulse: Building a Smarter Memory Controller" discusses disadvantages associated with conventional memory systems which are capable of only loading, storing, or transferring data in units of entire cache lines between a system main memory and a cache memory. In such systems, if any data in a physical cache line is needed, the entire physical cache line must be transferred. Hence, it can be seen that when the smallest unit of transfer is large, e.g. an entire cache line, data transfers may include a large amount of unneeded data [P1-2][Fig. 1]. Such inflexibility is disadvantageous because transferring unneeded data increases bus traffic and makes inefficient use of the cache [P2, C1]. It would be preferable to only transfer data that is needed, e.g. the diagonal elements. In order to address these problems, Carter discloses a mechanism to load only the specific portions of each cache line of each physical page that are required from the system memory to the cache memory. An intervening controller accesses the necessary portion(s) of each physical cache line and provides those portions to the cache as a single virtual cache line [Fig 1]. Thus, each cache line transferred to the cache memory is no longer limited to the contents of a single physical cache line, but may include elements from a plurality of physical cache lines.]. 
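For context, the mechanism the rejection relies on — a mask-driven gather that reads only the selected elements of a small contiguous range, so unneeded data is never transferred — can be sketched in Python. This is the editor's illustrative sketch, not code from Valentine or Carter; the names `masked_gather`, `limited_range`, `indices`, and `mask` are assumptions chosen to mirror the cited passages (8-bit indices addressing a 256-element limited range, with a per-lane mask as in Valentine's figs. 4-5):

```python
def masked_gather(limited_range, indices, mask):
    """Gather elements of a limited range selected by per-lane mask bits.

    limited_range : the small contiguous region (at most 256 elements,
                    the reach of an 8-bit index per Valentine par. 0064)
    indices       : one memory index per result lane
    mask          : 1 = gather this lane, 0 = masked out (left as None,
                    shown as '*' in Valentine's fig. 5)
    """
    assert len(limited_range) <= 256  # 8-bit index reach
    result = []
    for idx, m in zip(indices, mask):
        # Masked-out lanes are never read, so their data is never moved.
        result.append(limited_range[idx] if m else None)
    return result

# Example: gather lanes 0, 2, and 3; lane 1 is masked out.
data = list(range(256))  # stand-in for the 256-byte limited range
print(masked_gather(data, [134, 231, 7, 42], [1, 0, 1, 1]))
# → [134, None, 7, 42]
```

Only the unmasked lanes are touched, which is the traffic-reduction rationale the rejection attributes to combining Valentine's selection vector with Carter's partial cache-line transfers.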
Valentine and Carter are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed inventions, it would have been obvious to a person of ordinary skill in the art to modify the system/method of Valentine which includes a selection vector to select certain portions of a limited range and cache these portions in the manner taught by Carter where only certain cache line portions are transferred to cache memory since doing so would provide the benefits of reducing bus traffic by avoiding loading unnecessary data in the cache. Therefore, it would have been obvious to combine Valentine and Carter for the benefit of creating a storage system/method to obtain the invention as specified in claim 12. 13. A method, comprising: storing, in a first memory of a computing system, a block of data at a block of contiguous memory addresses in an address space; and [Valentine teaches processor 100 and external memory 110 (fig. 1 and related text)] … accessing, by a processor of the computing device, a first portion of the block of data from the first memory according to an item selection vector stored in a second memory of the computing system [Valentine teaches “The limited range vector gather operation may load or gather data elements from a limited range 420 of a memory 410. As discussed previously, the limited range may represent only a small subset (e.g., a contiguous subset capable of being indexed by the 8-bit or 16-bit memory indices) of the overall generally much larger memory (e.g., which may be indexed by other instructions using 32-bit or 64-bit memory indices)... each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). 
The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” (par. 0064; figs. 4-5 and related text) “ A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0). As shown, in some embodiments, the packed data result may be 512-bits wide and may include sixty-four 8-bit byte data elements. Alternatively, 16-bit word or 32-bit doubleword data elements may be gathered and may be stored in either wider or narrower result packed data. “ (par. 0070) thus, accessing portions of limited range according to the mask bits 516]. Valentine does not expressly disclose the accessing in response to a request to cache the block of data stored in the first memory; however, regarding these limitations, Carter teaches [Carter's "Impulse: Building a Smarter Memory Controller" discusses disadvantages associated with conventional memory systems which are capable of only loading, storing, or transferring data in units of entire cache lines between a system main memory and a cache memory. In such systems, if any data in a physical cache line is needed, the entire physical cache line must be transferred. Hence, it can be seen that when the smallest unit of transfer is large, e.g. 
an entire cache line, data transfers may include a large amount of unneeded data [P1-2][Fig. 1]. Such inflexibility is disadvantageous because transferring unneeded data increases bus traffic and makes inefficient use of the cache [P2, C1]. It would be preferable to only transfer data that is needed, e.g. the diagonal elements. In order to address these problems, Carter discloses a mechanism to load only the specific portions of each cache line of each physical page that are required from the system memory to the cache memory. An intervening controller accesses the necessary portion(s) of each physical cache line and provides those portions to the cache as a single virtual cache line [Fig 1]. Thus, each cache line transferred to the cache memory is no longer limited to the contents of a single physical cache line, but may include elements from a plurality of physical cache lines]. Valentine and Carter are analogous art because they are from the same field of endeavor of memory access and control. Before the effective filing date of the claimed inventions, it would have been obvious to a person of ordinary skill in the art to modify the system/method of Valentine which includes a selection vector to select certain portions of a limited range and cache these portions in the manner taught by Carter where only certain cache line portions are transferred to cache memory since doing so would provide the benefits of reducing bus traffic by avoiding loading unnecessary data in the cache. Therefore, it would have been obvious to combine Valentine and Carter for the benefit of creating a storage system/method to obtain the invention as specified in claim 13. 14. The method of claim 13, wherein the item selection vector comprises a sequence of bits corresponding to a plurality of addresses for the block of data [Valentine teaches “The limited range vector gather operation may load or gather data elements from a limited range 420 of a memory 410. 
As discussed previously, the limited range may represent only a small subset (e.g., a contiguous subset capable of being indexed by the 8-bit or 16-bit memory indices) of the overall generally much larger memory (e.g., which may be indexed by other instructions using 32-bit or 64-bit memory indices)... each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” (par. 0064; figs. 4-5 and related text) “ A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0). As shown, in some embodiments, the packed data result may be 512-bits wide and may include sixty-four 8-bit byte data elements. Alternatively, 16-bit word or 32-bit doubleword data elements may be gathered and may be stored in either wider or narrower result packed data. “ (par. 0070) thus, accessing portions of limited range according to the mask bits 516]. 15. 
The method of claim 14, wherein the item selection vector further comprises a count of bits that corresponds to a count of the plurality of addresses for the block of data [Valentine teaches (1) Contents of register 513: Bits 0-511 of register 513 identify items corresponding to addresses of data items in the limited range 520 [Figs. 4-5], and each 8-bit section corresponds to an address of a block of data within the range and identifies items. The count of bits (512) in register 513 corresponds to the count of addresses/indices potentially being identified for transfer in an 8:1 correspondence ratio, as there are 512 bits per selection vector, organized into 64 8-bit indices which each represent an address of a data item in the limited range. (2) Contents of register 516 identify which of the corresponding items identified by 513 are to be transferred. Register 516 contains 64 bits which correspond in a 1:1 relationship with the count of indices of 513 which each represent an address of a data item in the limited range [Fig. 5].]. 16. The method of claim 13, further comprising caching, in the second memory of the computing system, the first portion of the block of data identified by the item selection vector [Carter's "Impulse: Building a Smarter Memory Controller" discusses disadvantages associated with conventional memory systems which are capable of only loading, storing, or transferring data in units of entire cache lines between a system main memory and a cache memory. In such systems, if any data in a physical cache line is needed, the entire physical cache line must be transferred. Hence, it can be seen that when the smallest unit of transfer is large, e.g. an entire cache line, and data transfers may include a large amount of unneeded data [P1-2][Fig. 1]. Such inflexibility is disadvantageous because transferring unneeded data increases bus traffic and makes inefficient use of the cache [P2, C1]. 
It would be preferable to only transfer data that is needed, e.g. the diagonal elements. In order to address these problems, Carter discloses a mechanism to load only the specific portions of each cache line of each physical page that are required from the system memory to the cache memory. An intervening controller accesses the necessary portion(s) of each physical cache line and provides those portions to the cache as a single virtual cache line [Fig 1]. Thus, each cache line transferred to the cache memory is no longer limited to the contents of a single physical cache line, but may include elements from a plurality of physical cache lines]. 17. The method of claim 13, further comprising retrieving the first portion of the block of data from the first memory according to the item selection vector [Valentine teaches “ The limited range vector gather operation may load or gather data elements from a limited range 420 of a memory 410. As discussed previously, the limited range may represent only a small subset (e.g., a contiguous subset capable of being indexed by the 8-bit or 16-bit memory indices) of the overall generally much larger memory (e.g., which may be indexed by other instructions using 32-bit or 64-bit memory indices)... each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” (par. 0064; figs. 
4-5 and related text) “ A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0). As shown, in some embodiments, the packed data result may be 512-bits wide and may include sixty-four 8-bit byte data elements. Alternatively, 16-bit word or 32-bit doubleword data elements may be gathered and may be stored in either wider or narrower result packed data. “ (par. 0070) thus, accessing portions of limited range according to the mask bits 516]. 18. The method of claim 17, retrieving the first portion of the block of data from the first memory without accessing a second portion of the block of data in the first memory [Valentine teaches “ The limited range vector gather operation may load or gather data elements from a limited range 420 of a memory 410. As discussed previously, the limited range may represent only a small subset (e.g., a contiguous subset capable of being indexed by the 8-bit or 16-bit memory indices) of the overall generally much larger memory (e.g., which may be indexed by other instructions using 32-bit or 64-bit memory indices)... each 8-bit byte memory index may be operable to uniquely index or address any of 256 different locations or data elements, and in some embodiments, the limited range may include only those 256 locations or data elements (e.g., 256 bytes or words). The gathered data elements may be indicated by the corresponding memory indices of the source packed memory indices 413. Each memory index may point to a corresponding memory location and/or a data element stored therein. 
For example, in the illustrated embodiment, the memory index 134 points to the memory location in the limited range that stores data element B1, the memory index 231 points to the memory location in the limited range that stores data element B2, and so on.” (par. 0064; figs. 4-5 and related text) “ A packed data result 515 may be stored in a destination storage location in response to and/or as a result of the masked limited range vector gather instruction/operation. In some embodiments, data may only be gathered if the corresponding mask bit in the packed data operation mask is set to one. Asterisks (*) are shown in positions of the result packed data where the corresponding mask bits are masked out (e.g., in the illustrated example cleared to binary 0). As shown, in some embodiments, the packed data result may be 512-bits wide and may include sixty-four 8-bit byte data elements. Alternatively, 16-bit word or 32-bit doubleword data elements may be gathered and may be stored in either wider or narrower result packed data. “ (par. 0070) thus, accessing portions of limited range according to the mask bits 516, where portions B1-B64 indicated with an asterisk are not accessed]. RELEVANT ART CITED BY THE EXAMINER The following prior art made of record and not relied upon is cited to establish the level of skill in the applicant’s art and those arts considered reasonably pertinent to applicant’s disclosure. See MPEP 707.05(c). Hagiwara (US 2009/0228657) teaches “An apparatus includes a vector unit to process a vector data, a cache memory which includes a plurality of cache lines to store a plurality of divisional data being sent from a main memory, each of the divisional data of vector data having been divided according to a capacity of a cache line, and a cache controller to send all of the divisional data as the vector data to the vector unit after the cache lines have stored all of the divisional data including the vector data.” (Abstract). De la Iglesia et al. 
(US 8,402,198) teaches “A hardware search structure quickly determines the status of cache lines associated with a large disk array and at the same time reduces the amount of memory space needed for tracking the status. The search structure is configurable in hardware to different cache line sizes and different primary and secondary index sizes. A maintenance feature invalidates state record entries based both on their time stamps and on associated usage statistics.” (Abstract). “To search for an arbitrary block within search structure 220, storage address 210 is split into three components: primary index address 212, secondary index address 214, and cache line offset 216. The size of these address sections is a function of the cache line size and cache line configuration. One advantage of the search structure 220 is the ability to dynamically adjust the sizes of primary index address 212, secondary index address 214, and cache line offset 216 based on statistics gathered within the state records 260 (FIG. 3). Although not used for the search operation, cache line offset 216 may be used to select a subset of the cache line data for return to Initiators 300 in response to a read operation. In practice, this value is the offset within the cache line at which the requested data starts. The offset range is, by necessity, smaller than the cache line size.” (col. 4, lines 30-45). CLOSING COMMENTS a. STATUS OF CLAIMS IN THE APPLICATION a(1) CLAIMS REJECTED IN THE APPLICATION Per the instant office action, claims 1-20 have received a first action on the merits and are subject of a first action non-final. b. DIRECTION OF FUTURE CORRESPONDENCES Any inquiry concerning this communication or earlier communications from the examiner should be directed to YAIMA RIGOL whose telephone number is (571)272-1232. The examiner can normally be reached Monday-Friday 9:00AM-5:00PM. 
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared I. Rutz can be reached on (571) 272-5535. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. April 7, 2026 /YAIMA RIGOL/ Primary Examiner, Art Unit 2135

Prosecution Timeline

Dec 13, 2024
Application Filed
Apr 07, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591522
COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN MEMORY ACCESS CONTROL PROGRAM, MEMORY ACCESS CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Mar 31, 2026
Patent 12585581
MEMORY MODULE HAVING VOLATILE AND NON-VOLATILE MEMORY SUBSYSTEMS AND METHOD OF OPERATION
2y 5m to grant Granted Mar 24, 2026
Patent 12579073
APPARATUS AND METHOD FOR INTELLIGENT MEMORY PAGE MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12578899
MEMORY DEVICE, MEMORY SYSTEM, MEMORY CONTROLLER, AND OPERATION METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12566716
SYSTEMS AND METHODS FOR TIMESTEP SHARED MEMORY MULTIPROCESSING BASED ON TRACKING TABLE MECHANISMS
2y 5m to grant Granted Mar 03, 2026
Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
75%
Grant Probability
92%
With Interview (+17.5%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 619 resolved cases by this examiner. Grant probability derived from career allow rate.
