Prosecution Insights
Last updated: April 19, 2026
Application No. 18/759,068

APPARATUSES AND METHODS FOR COMPUTE ENABLED CACHE

Non-Final OA — §DP
Filed: Jun 28, 2024
Examiner: BUI, THA-O H
Art Unit: 2825
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Lodestar Licensing Group LLC
OA Round: 1 (Non-Final)
Grant Probability: 88% — Favorable
Expected OA Rounds: 1-2
Time to Grant: 2y 4m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 88% — above average (849 granted / 965 resolved; +20.0% vs TC avg)
Interview Lift: +4.3% — minimal, measured across resolved cases with interview
Typical Timeline: 2y 4m avg prosecution (28 applications currently pending)
Career History: 993 total applications across all art units

Statute-Specific Performance

§101: 1.3% (-38.7% vs TC avg)
§103: 41.9% (+1.9% vs TC avg)
§102: 34.0% (-6.0% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)
TC averages are estimates • Based on career data from 965 resolved cases

Office Action

§DP
DETAILED ACTION

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in the application.

Information Disclosure Statement

The Information Disclosure Statements (IDS), Form PTO-1449, filed 06/28/2024 and 08/28/2024, are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosed therein was considered by the examiner.

Drawings

The drawings were submitted on 06/28/2024. These drawings have been reviewed and accepted by the examiner.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided. The abstract of the disclosure is objected to because it uses the phrases “disclosure” and “comprise” on page 1, lines 1-2, which are implied. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees.
A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-42 of U.S. Patent No. 10,073,786 B2 (‘786).
Although the conflicting claims are not identical, they are not patentably distinct from each other because the instant application claims are obvious variants of the ‘786 claims. The claim chart below lists the reference claims of U.S. Patent No. 10,073,786 B2, followed by the instant claims of U.S. Patent Application No. 2024/0354254 A1.

U.S. Patent No. 10,073,786 B2 (reference claims):

1. An apparatus, comprising: a memory configured to store cache data and having: a memory array; a sensing circuitry comprising a plurality of sense amplifiers and a plurality of compute components to perform logical operations; and a memory controller coupled to the memory array, the memory controller configured to: create a block select as metadata to a cache line to control alignment of cache blocks within the memory array; and create a subrow select as metadata to the cache line to control placement of the cache line on a particular row in the memory array to align the cache line to one or more of the plurality of compute components. 2. The apparatus of claim 1, wherein the memory is a last layer cache (LLC) memory. 3. The apparatus of claim 2, wherein the plurality of compute components are configured to access and to operate on cached data in the LLC memory without moving the cached data to a higher level in the memory. 4. The apparatus of claim 1, wherein the plurality of compute components are a static random access memory (SRAM) logic resource integrated with the memory array of the memory. 5. The apparatus of claim 1, wherein: the block select enables an offset to the cache line; and the subrow select enables multiple sets to a set associative cache. 6. The apparatus of claim 1, wherein the block select provides an offset to a page in a dynamic random access memory (DRAM) page. 7. The apparatus of claim 1, wherein the block select and the subrow select are not part of an address space of a host processor. 8. The apparatus of claim 7, wherein the memory controller is configured to: change the block select and the subrow select; and relocate the cached data transparently to the host processor. 9.
The apparatus of claim 8, wherein the memory controller is configured to store a copy of the block select and the subrow select internal to the host processor. 10. The apparatus of claim 1, wherein the block select controls alignment of cache blocks within the memory array. 11. The apparatus of claim 10, wherein the memory array is a static random access memory (SRAM) array on a logic resource of a host. 12. The apparatus of claim 10, wherein the memory array is a dynamic random access memory (DRAM) array on a processing in memory (PIM) device. 13. The apparatus of claim 1, wherein the subrow select controls placement of the cache line on multiple different rows in the memory array in the memory. 14. The apparatus of claim 13, wherein: the subrow select is four (4) bits to allow a selection of 1 of 16 rows of the memory array for a given cache block; and the subrow select is added to a portion of a tag in the cache line. 15. The apparatus of claim 1, wherein the memory controller is configured to: use the block select to control alignment of cached data in the memory array in the memory; and use the subrow select to control resource allocation in the memory array. 16. The apparatus of claim 15, wherein the cache memory controller is configured to: use the block select to choose which part of a row in the memory array having a first bit width to access as a chunk having a second bit width as part of an array access request; use the block select and the subrow select in a repeated manner to allow a cache line to be split and placed differently in a bank of dynamic random access memory (DRAM). 17. 
The apparatus of claim 16, wherein: the memory array is in a processing in memory (PIM) based device; at least a portion of the cache line is handled as a vector having four (4) third bit length bit values; and a plurality of different cache blocks are placed in a plurality of different banks in a dynamic random access memory (DRAM) to allow multiple mapped single cache lines. 18. The apparatus of claim 17, wherein the memory controller is configured to manage a first bit length cache line on a second bit length interface such that four (4) second bit length chunks can each have a different alignment. 19. An apparatus, comprising: a memory device configured to couple to a host, wherein the memory device includes: an array of memory cells; sensing circuitry coupled to the array, the sensing circuitry including a plurality of sense amplifiers and a plurality of compute components configured to perform logical operations; and a memory controller coupled to the array and sensing circuitry, the memory controller configured to: receive a cache line having block select and subrow select metadata; and operate on the block select and subrow select metadata to: control alignment of cache blocks in the array; and allow the cache line to be placed on a particular row of the array to align the cache line to one or more of the plurality of compute components. 20. The apparatus of claim 19, wherein the apparatus further includes a cache controller to: create the block select metadata and insert to the cache line; and create the subrow select metadata and insert to the cache line. 21. The apparatus of claim 20, wherein the cache controller is located on the memory device. 22. The apparatus of claim 20, wherein the cache controller is located off-package on the host and coupled to the memory device. 23. 
The apparatus of claim 19, wherein the memory controller is configured to: operate on the cache line to control alignment of cache blocks in the array according to the block select metadata; and operate on the cache line to allow the cache line to be placed on multiple different rows to the array according to the row select metadata. 24. The apparatus of claim 19, wherein the block select metadata and the row select metadata are stored internal to the memory device and are transparent to an address space of a processing resource of the host. 25. The apparatus of claim 19, wherein each cache line is handled as multiple vectors of different bit length values. 26. The apparatus of claim 25, wherein each of the different bit length values of the multiple vectors is further subdivided into multiple elements. 27. The apparatus of claim 26, wherein the memory controller is configured to use the subrow select metadata to select a subarray to place an element of the multiple elements of a vector. 28. The apparatus of claim 25, wherein the memory controller is configured to handle each vector of the multiple vectors as a chunk. 29. The apparatus of claim 19, wherein the memory controller is configured to: store the cache block in the array; and retrieve a cache line to perform logical operations with the plurality of compute components. 30. The apparatus of claim 19, wherein the array of memory cells are dynamic random access memory (DRAM) cells. 31. The apparatus of claim 30, wherein the memory controller is configured to use a DRAM protocol and DRAM logical and electrical interfaces to receive the cache line and to retrieve the cache line to perform logical operations with the plurality of compute components. 32.
An apparatus, comprising: a processing resource; a memory having: a memory array coupled to the processing resource; a sensing circuitry comprising a plurality of sense amplifiers and a plurality of compute components to perform logical operations; and wherein at least one of the processing resource or the memory has a memory controller associated therewith, wherein, the memory controller is configured to: create a block select as metadata to a cache line to control alignment of cache blocks within the memory array; and create a subrow select as metadata to the cache line to control placement of the cache line on a particular row in the memory array to align the cache line to one or more of the plurality of compute components; and an interface between the processing resource and the memory. 33. The apparatus of claim 32, wherein the processing resource comprises a static random access memory (SRAM) and the memory controller is configured to store the block select and the subrow select in the SRAM. 34. The apparatus of claim 32, wherein the interface comprises through silicon vias (TSVs) coupling the processing resource to the memory. 35. The apparatus of claim 34, wherein the apparatus is a 3D integrated logic/memory. 36. 
The apparatus of claim 32, wherein the memory is a processing in memory (PIM) device, the memory array is a dynamic random access memory (DRAM) array, and wherein the PIM device includes: the sensing circuitry coupled to the memory array, the sensing circuitry including the plurality of sense amplifiers and the plurality of compute components configured to perform logical operations; and wherein the memory controller is coupled to the memory array and the sensing circuitry, the memory controller configured to: receive the cache line having block select and subrow select metadata from the processing resource; and operate on the block select and subrow select metadata to: control alignment of a cache block in the memory array; and control placement of the cache line on a particular row of the memory array. 37. The apparatus of claim 36, wherein the memory controller is configured to: operate on the cache line to control alignment of the cache block in the memory array according to the block select metadata; and operate on the cache line to allow the cache line to be placed on multiple different rows to the memory array according to the row select metadata. 38. The apparatus of claim 36, wherein each cache line is handled as multiple vectors, the cache block is a vector, the vector is a plurality of defined bit length values, and wherein each of the plurality of defined bit length values is an element usable in a logical operation. 39. The apparatus of claim 38, wherein the memory controller is configured to use the subrow select metadata to select a subarray to store an element of the cache line. 40. The apparatus of claim 38, wherein the memory controller is configured to: store the cache block in the memory array; and retrieve an element of the cache line to perform logical operations with the plurality of compute components. 41. 
The apparatus of claim 36, wherein the memory controller is configured to control the sensing circuitry to perform logical operations including AND, OR, NOT, XOR, NAND, and NOR logical operations. 42. A method for operating a cache memory, comprising: creating a block select as metadata to a cache line to control alignment of cache blocks in a memory array of the cache memory comprising sensing circuitry including a plurality of sense amplifiers and a plurality of compute components to perform logical operations; and creating a subrow select as metadata to the cache line to control placement of at least a portion of the cache line on a particular row in the memory array to align the portion of the cache line to one or more of the plurality of compute components. 43. The method of claim 42, wherein the method comprises: receiving the block select and the subrow select to the memory controller in a processing in memory (PIM) device, the memory array is a dynamic random access memory (DRAM) array, wherein the PIM device comprises: the sensing circuitry coupled to the memory array, the sensing circuitry including the plurality of sense amplifiers and the plurality of compute components configured to implement logical operations; storing the cache block in the memory array; and retrieving at least a portion of the cache line to perform logical operations with the plurality of compute components. 44. The method of claim 43, wherein storing the cache block in the memory array comprises storing a portion of a cache block having a bit length that is less than a bit length of the cache line.

U.S. Patent Application No. 2024/0354254 A1 (instant claims):

1.
An apparatus, comprising: a memory configured to store cache data and having: a memory array; a plurality of compute components to perform logical operations; and a memory controller coupled to the memory array, the memory controller configured to: create a first select as metadata to a cache line to control alignment of the cache line to one or more of the plurality of compute components. 2. The apparatus of claim 1, wherein memory controller is further configured to: create a second select as metadata to the cache line to control placement of the cache line on a particular row in the memory array to align the cache line to one or more of the plurality of compute components. 3. The apparatus of claim 1, wherein the memory is a last layer cache (LLC) memory. 4. The apparatus of claim 3, wherein the plurality of compute components are configured to access and to operate on cached data in the LLC memory without moving the cached data to a higher level in the memory. 5. The apparatus of claim 2, wherein: the first select enables an offset to the cache line; and the second select enables multiple sets to a set associative cache. 6. The apparatus of claim 2, wherein: the second select is four (4) bits to allow a selection of 1 of 16 rows of the memory array for a given cache block; and the second select is added to a portion of a tag in the cache line. 7. The apparatus of claim 2, wherein the memory controller is configured to: use the first select to choose which part of a row in the memory array having a first bit width to access as a chunk having a second bit width as part of an array access request; use the first select and the second select in a repeated manner to allow a cache line to be split and placed differently in a bank of dynamic random access memory (DRAM). 8. 
The apparatus of claim 7, wherein: the memory array is in a processing in memory (PIM) based device; at least a portion of the cache line is handled as a vector having four (4) third bit length bit values; and a plurality of different cache blocks are placed in a plurality of different banks in a dynamic random access memory (DRAM) to allow multiple mapped single cache lines. 9. The apparatus of claim 8, wherein the memory controller is configured to manage a first bit length cache line on a second bit length interface such that four (4) second bit length chunks can each have a different alignment. 10. An apparatus, comprising: an array of memory cells; a plurality of compute components configured to perform logical operations; and a memory controller coupled to the array and the plurality of compute components, the memory controller configured to: receive a cache line having a first select metadata and a second select metadata; and operate on the first select metadata and the second select metadata to control alignment of the cache line to one or more of the plurality of compute components for processing. 11. The apparatus of claim 10, wherein the apparatus further includes a cache controller to: create the first select metadata and insert to the cache line; and create the second select metadata and insert to the cache line. 12. The apparatus of claim 10, wherein the memory controller is configured to: operate on the cache line to control alignment of cache blocks in the array according to the first select metadata; and operate on the cache line to allow the cache line to be placed on multiple different rows to the array according to the second select metadata. 13. The apparatus of claim 10, wherein the apparatus comprises a memory device configured to couple to a host, and wherein the first select metadata and the second select metadata are stored internal to the memory device and are transparent to an address space of a processing resource of the host. 14. 
The apparatus of claim 10, wherein: each cache line is handled as multiple vectors of different bit length values; each of the different bit length values of the multiple vectors is further subdivided into multiple elements; and wherein the memory controller is configured to use the second select metadata to select a subarray to place an element of the multiple elements of a vector. 15. The apparatus of claim 10, wherein the memory controller is configured to: store the cache block in the array; and retrieve a cache line to perform logical operations with the plurality of compute components. 16. The apparatus of claim 10, wherein the array of memory cells are dynamic random access memory (DRAM) cells. 17. The apparatus of claim 16, wherein the memory controller is configured to use a DRAM protocol and DRAM logical and electrical interfaces to receive the cache line and to retrieve the cache line to perform logical operations with the plurality of compute components. 18. A method for operating a cache memory, comprising: creating a first select as metadata to a cache line to control alignment of cache blocks in a memory array of the cache memory comprising a plurality of compute components to perform logical operations; and creating a second select as metadata to the cache line to control placement of at least a portion of the cache line on a particular row in the memory array to align the portion of the cache line to one or more of the plurality of compute components. 19.
The method of claim 18, wherein the method comprises: receiving the first select and the second select to the memory controller in a processing in memory (PIM) device, the memory array is a dynamic random access memory (DRAM) array, wherein the PIM device comprises: the plurality of compute components configured to implement logical operations; storing the cache block in the memory array; and retrieving at least a portion of the cache line to perform logical operations with the plurality of compute components. 20. The method of claim 19, wherein storing the cache block in the memory array comprises storing a portion of a cache block having a bit length that is less than a bit length of the cache line.

Allowable Subject Matter

Claims 1-20 are presently rejected under nonstatutory obviousness-type double patenting, but would be allowable provided that a terminal disclaimer is filed.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THA-O H BUI, whose telephone number is (571) 270-7357. The examiner can normally be reached M-F 7:00 AM - 3:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALEXANDER SOFOCLEOUS, can be reached at 571-272-0635. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THA-O H BUI/
Primary Examiner, Art Unit 2825
01/06/2026

Prosecution Timeline

Jun 28, 2024
Application Filed
Jan 06, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597476
PROGRAM VERIFY COMPENSATION IN A MEMORY DEVICE WITH A DEFECTIVE DECK
2y 5m to grant — Granted Apr 07, 2026
Patent 12580034
MEMORY DEVICE AND METHOD OF FABRICATING MEMORY DEVICE INCLUDING A TEST CIRCUIT
2y 5m to grant — Granted Mar 17, 2026
Patent 12573457
NON-VOLATILE MEMORY DEVICE AND OPERATING METHOD THEREOF INCLUDING A NEGATIVE DISCHARGE VOLTAGE
2y 5m to grant — Granted Mar 10, 2026
Patent 12567468
PASS VOLTAGE ADJUSTMENT FOR PROGRAM OPERATION IN A MEMORY DEVICE WITH A DEFECTIVE DECK
2y 5m to grant — Granted Mar 03, 2026
Patent 12555635
MEMORY DEVICE HAVING CACHE STORAGE UNIT FOR STORAGE OF CURRENT AND NEXT DATA PAGES AND PROGRAM OPERATION THEREOF
2y 5m to grant — Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 92% (+4.3%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 965 resolved cases by this examiner. Grant probability derived from career allow rate.
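The headline percentages above follow directly from the examiner's career counts (849 granted of 965 resolved, plus the reported +4.3% interview lift). A minimal sketch of that arithmetic, assuming the dashboard rounds to the nearest whole percent:

```python
# Sketch: reproducing the dashboard's headline figures from the counts
# shown above. The rounding convention is an assumption.

granted = 849          # examiner's granted applications (resolved)
resolved = 965         # examiner's total resolved applications
interview_lift = 4.3   # reported percentage-point lift from an interview

allow_rate = 100 * granted / resolved          # career allow rate, in percent
with_interview = allow_rate + interview_lift   # probability with an interview

print(f"Career allow rate: {allow_rate:.0f}%")      # ~88%
print(f"With interview:    {with_interview:.0f}%")  # ~92%
```

This matches the 88% grant probability and 92% with-interview figure reported above.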